This conditional will ensure that the legacy configs are not removed
the first time pvc.conf is installed (or on subsequent runs, until it
stabilizes). Then, on the next run in which pvc.conf does not change,
they will be removed.
This should provide a safety valve during a 0.9.83 update with the
update-pvc-daemons playbook: if the update succeeds, on the next run,
the legacy configs will be purged; otherwise, they will still be present
and can be used for fallback just in case.
This probably isn't needed, but I'd rather be safe just in case.
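
A minimal sketch of how this conditional might look as Ansible tasks;
the legacy config paths and task names here are assumptions, only the
register/when pattern reflects the behaviour described above:

    - name: install pvc.conf
      template:
        src: pvc.conf.j2
        dest: /etc/pvc/pvc.conf
      register: pvc_conf

    # Only purge the legacy configs on a run where pvc.conf did not
    # change, i.e. once it has stabilized after the 0.9.83 update.
    - name: remove legacy configuration files
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - /etc/pvc/pvcnoded.yaml
        - /etc/pvc/pvcapid.yaml
      when: not pvc_conf.changed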
Needed because ceph-mgr seems to crash frequently under Debian 12 when
adding or removing OSDs. The default settings do not restart it
properly, so this override does.
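
The override in question is a systemd drop-in; a rough sketch of how it
might be deployed is below (the unit name and restart values are
assumptions, not the exact ones used):

    - name: create ceph-mgr override directory
      file:
        path: /etc/systemd/system/ceph-mgr@.service.d
        state: directory

    - name: install ceph-mgr restart override
      copy:
        dest: /etc/systemd/system/ceph-mgr@.service.d/override.conf
        content: |
          [Service]
          # Always restart the manager if it crashes, with a short
          # backoff, instead of relying on the packaged unit's policy.
          Restart=always
          RestartSec=5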
Uses a much nicer CPU tuning configuration, leveraging systemd's
AllowedCPUs and CPUAffinity options within a set of slices (some
default, some custom).
Configuration is also greatly simplified versus the previous
implementation, simply asking for a number of CPUs for both the system
and the OSDs, and calculating everything else that is required.
Also switches (back) to the v2 unified cgroup hierarchy by default as
required by the systemd AllowedCPUs directive.
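
For illustration, a custom slice pinned with AllowedCPUs might look
roughly like this (the slice name, variable, and CPU range are
assumptions, not the role's actual values):

    - name: install a custom slice pinned to the OSD CPUs
      copy:
        dest: /etc/systemd/system/osd.slice
        content: |
          [Unit]
          Description=Ceph OSD CPU shield
          Before=slices.target

          [Slice]
          # cpuset_osd would be calculated from the requested OSD CPU
          # count, e.g. "2-5" on a node reserving four cores for OSDs
          AllowedCPUs={{ cpuset_osd }}

Switching (back) to the unified hierarchy is typically just a matter of
the systemd.unified_cgroup_hierarchy=1 kernel command-line parameter.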
This functionality simply did not work, with Libvirt continuing to dump
its processes into the root cset, thus defeating the purpose entirely.
Just remove it; from some very initial testing, it isn't worth the
headache.
Ensures that the pool default size/min size is set to something
reasonable for a single node (effective RAID-1) and replaces the default
CRUSH replicate_rule set for this situation with one choosing OSD
instead of host as the default.
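
As an illustration only (the exact size/min_size values and rule name
are assumptions), the change amounts to something like:

    - name: set pool defaults suitable for a single node
      command: ceph config set global {{ item.option }} {{ item.value }}
      loop:
        - { option: osd_pool_default_size, value: 2 }
        - { option: osd_pool_default_min_size, value: 1 }

    - name: create a replicated CRUSH rule choosing OSDs instead of hosts
      command: ceph osd crush rule create-replicated replicated_rule_osd default osd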
1. Remove the obsolete pvc-vacuum script install.
2. Remove notifies when modifying configs; we do not want to restart
the daemons in an uncontrolled fashion.
3. Add bootstrap check to package installs so they only happen on
bootstrap.
This ensures that, on re-runs, this part of the role will *only* update
configs and not actually touch the running daemons. This makes it safe
to run before a oneshot/update-pvc-daemons.yml playbook run.
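
A rough sketch of both patterns; the package names, variable name, and
paths are assumptions:

    # Packages are only touched during bootstrap, never on re-runs.
    - name: install PVC daemon packages
      apt:
        name:
          - pvc-daemon-node
          - pvc-daemon-api
        state: present
      when: do_bootstrap | default(false) | bool

    # Config updates deliberately carry no "notify": the running daemons
    # are left alone and only restarted later by the update playbook.
    - name: install daemon configuration
      template:
        src: pvc.conf.j2
        dest: /etc/pvc/pvc.conf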
This was causing some confusing conflicts, so create a new fact called
"this_node", which is inventory_hostname.split('.')[0], i.e. the short
name, and use that everywhere instead of an FQDN or the true inventory
hostname.
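
The fact itself is the expression quoted above; a sketch of how it would
be set (the task name and placement are assumptions):

    - name: set the short node name used throughout the roles
      set_fact:
        this_node: "{{ inventory_hostname.split('.')[0] }}"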