Moves the age human conversion logic to the client so that this value
can be used by API consumers programmatically.
Rounding ensures recent snapshots do not show as "older" than they
actually are, which is important for accuracy, especially with
automirror snapshots and ages above half the next rounding value.
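
As a rough illustration of the conversion described above (the function
name, unit thresholds, and rounding behaviour here are illustrative,
not the actual implementation):

    # Hypothetical sketch of human age conversion with rounding; the
    # real helper may use different names, units, and thresholds.
    def format_age(seconds):
        """Convert a raw age in seconds into a human-readable string.

        round() picks the nearest unit count instead of truncating,
        which matters most once an age passes half of the next unit.
        """
        units = [("second", 60), ("minute", 60), ("hour", 24), ("day", 7), ("week", None)]
        value = float(seconds)
        for name, factor in units:
            if factor is None or value < factor:
                count = round(value)
                return f"{count} {name}{'s' if count != 1 else ''}"
            value /= factor
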
Allows shipping snapshots automatically to remote clusters on a cron,
identically to how autobackup handles local snapshot exports.
VMs are selected based on configured tags, and individual destination
clusters can be specified via a colon-separated suffix on the tag(s),
as sketched below.
Automirror snapshots use the prefix "am" (analogous to "ab" for
autobackups) to differentiate them from normal "mr" mirrors.
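
A hedged sketch of the tag-based selection; the "automirror" tag name
and the suffix handling are assumptions drawn only from the description
above:

    # Illustrative only: collect destination clusters from a VM's tags.
    def automirror_destinations(tags, default_destinations):
        """A bare "automirror" tag selects the configured default
        destination clusters; "automirror:cluster2" selects an
        individual destination cluster."""
        destinations = []
        for tag in tags:
            if not tag.startswith("automirror"):
                continue
            if ":" in tag:
                destinations.append(tag.split(":", 1)[1])
            else:
                destinations.extend(default_destinations)
        return destinations

    # Snapshot name prefixes: "am" for automirrors, analogous to "ab"
    # for autobackups and distinct from "mr" for normal mirrors.
    SNAPSHOT_PREFIX = {"automirror": "am", "autobackup": "ab", "mirror": "mr"}
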
1. Ensure the local connection, if one exists, is always present and
stored in the store file.
2. Remove any invalid "local" store entries if present (i.e.
pvcapid.yaml entries from legacy versions).
3. Order the connection lists such that "local" is always first (see
the sketch after this list).
4. Improve the pretty list output format such that all fields are
widened if needed.
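
A minimal sketch of items 2 and 3, assuming the store file loads as a
name-to-details dict (the helper names are invented for the example):

    # Illustrative helpers; the actual store-handling code may differ.
    def sanitize_store(store):
        """Drop any persisted "local" entry (e.g. a legacy pvcapid.yaml
        entry); the local connection is provided dynamically instead."""
        return {name: conn for name, conn in store.items() if name != "local"}

    def order_connection_names(connections):
        """Return connection names with "local" always first."""
        names = sorted(n for n in connections if n != "local")
        if "local" in connections:
            names.insert(0, "local")
        return names
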
1. The destination state on an error was invalid; it should be "stop".
2. If a lock was listed but removing it failed (because it had already
been cleared somehow), this would error. In turn the VM would fail to
migrate and be left in an undefined state. Fix this when unlocking is
forced, as sketched below.
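
One way the forced-unlock fix might look, assuming Zookeeper locks
managed via kazoo (the function name and path layout are hypothetical):

    # Hedged sketch of item 2: tolerate locks that disappear between
    # the listing and the removal when unlocking is forced.
    from kazoo.client import KazooClient
    from kazoo.exceptions import NoNodeError

    def force_clear_locks(zk: KazooClient, lock_path: str) -> None:
        """Best-effort removal of all lock entries under lock_path."""
        try:
            children = zk.get_children(lock_path)
        except NoNodeError:
            return  # no lock parent at all
        for child in children:
            try:
                zk.delete(f"{lock_path}/{child}")
            except NoNodeError:
                # Already cleared somehow; previously this errored and
                # left the VM unmigrated in an undefined state.
                continue
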
Using full paths broke the local schema generator, so convert these to
proper class instance methods and use them along with a new default +
settable override.
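
As a generic illustration of the default-plus-settable-override pattern
(the class and attribute names are invented, not the schema
generator's):

    # Illustrative pattern only: replace a hardcoded full path with an
    # instance-level default plus a settable override.
    class SchemaSource:
        DEFAULT_PATH = "schema.json"  # hypothetical default

        def __init__(self):
            self._path_override = None

        @property
        def path(self):
            """Return the override if set, else the default."""
            return self._path_override or self.DEFAULT_PATH

        @path.setter
        def path(self, value):
            self._path_override = value
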
1. Move fence monitoring to its own thread rather than doing the listing
and triggering within the main keepalive thread.
2. Add a global lock key at /config/fence_lock and use this lock key to
prevent multiple nodes from trying to run fences simultaneously.
3. Run the fencing monitor for each node sequentially within the main
fence monitoring thread, ensuring that fences of multiple nodes happen
one at a time rather than in parallel.
All of these should help prevent anomalies where one node can try to
fence multiple nodes at once without recourse; a sketch of the locking
follows.
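
A sketch of the locking in items 2 and 3, assuming kazoo's Lock recipe
over the /config/fence_lock key (fence_node stands in for the real
per-node fence routine):

    # Illustrative only: serialize fencing cluster-wide and per-node.
    from kazoo.client import KazooClient

    def run_fence_monitor(zk: KazooClient, dead_nodes):
        """Fence dead nodes one at a time under the global fence lock."""
        fence_lock = zk.Lock("/config/fence_lock")
        with fence_lock:  # blocks until no other node holds the lock
            for node in dead_nodes:
                fence_node(node)  # hypothetical per-node fence routine
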
Avoids various parts of the keepalive deadlocking while waiting on data
that will never come when various internal processes fail. Based on
testing, this should ensure that the keepalive always finishes in under
5 seconds.
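
One minimal way to bound a blocking collection step (illustrative only;
the actual keepalive structure is more involved):

    # Never block the keepalive forever on data that will never come.
    import queue
    import threading

    def collect_with_timeout(fn, timeout=4.5):
        """Run fn in a daemon thread, waiting at most `timeout` seconds;
        if fn hangs or dies, return None instead of deadlocking."""
        results = queue.Queue(maxsize=1)
        worker = threading.Thread(target=lambda: results.put(fn()), daemon=True)
        worker.start()
        try:
            return results.get(timeout=timeout)
        except queue.Empty:
            return None
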