Avoids calling functions that cannot run outside a live worker (e.g.
when generating API docs) by isolating them into a Celery startup
function called explicitly by Daemon.py.
Also updates to the Celery 4+ (lowercase) settings format.
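A minimal sketch of the pattern, assuming illustrative module, config
key, and broker URL names (not the actual PVC code):

    from celery import Celery

    # Importing this module stays side-effect free, so tools such as
    # API doc generators can import it without a broker available.
    celery = Celery("pvcworkerd")

    def celery_startup(config):
        # Called explicitly by Daemon.py at worker start, never at
        # import time. Celery 4+ uses lowercase setting names, applied
        # here via conf.update().
        celery.conf.update(
            broker_url=config["queue_url"],      # assumed config key
            result_backend=config["queue_url"],
            task_serializer="json",
            result_serializer="json",
            accept_content=["json"],
        )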
Updates all the example provisioner scripts to use the new functions
exposed by the VMBuilder class as an illustration of how best to use
them.
Also adds a wrapper fail() handler to ensure that both the script's own
cleanup and the global cleanup are run on an exception.
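A hedged sketch of that wrapper; the cleanup hooks and builder class
here are hypothetical stand-ins, not the real VMBuilder API:

    def fail(exc, script_cleanup, general_cleanup):
        # Ensure both the script's own cleanup and the global cleanup
        # run before the exception propagates, even if script_cleanup
        # itself raises.
        try:
            script_cleanup()
        finally:
            general_cleanup()
        raise exc

    # Illustrative usage with stand-in hooks:
    def global_cleanup():
        print("global cleanup")

    class Builder:
        def run(self):
            raise RuntimeError("provision failed")

        def cleanup(self):
            print("script cleanup")

    builder = Builder()
    try:
        builder.run()
    except Exception as e:
        fail(e, builder.cleanup, global_cleanup)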
Full UUIDs were obnoxiously long, so switch to using just the first
8-character section of a UUID instead. Keeps the list nice and short,
makes them easier to copy, and is just generally nicer.
Could this cause uniqueness problems? Perhaps, but 8 hex characters
still allow over four billion distinct prefixes, so I don't see
collisions happening nearly frequently enough to matter.
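The truncation itself is a one-liner; for a v4 UUID the first
dash-separated group is exactly 8 hex characters:

    import uuid

    short_id = str(uuid.uuid4()).split("-")[0]
    print(short_id)  # e.g. "3f2a9c1d"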
Waiting for the daemons to stop took too much time on some nodes and
could throw off the lockstep. Instead, leverage background=True to run
the systemctl os_commands in the background (exactly when they complete
is irrelevant), stop the Metadata API first, and don't delay during its
stop at all.
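Illustratively, the flow looks like this; the run_os_command helper
semantics and the systemd unit names are assumptions, not the actual
daemon code:

    import subprocess

    def run_os_command(command, background=False):
        # Assumed semantics of background=True: fire the command and
        # return immediately, without waiting for it to exit.
        args = command.split()
        if background:
            subprocess.Popen(args)
            return None
        return subprocess.run(args, capture_output=True)

    # Stop the Metadata API first with no delay, then the node daemon;
    # when these stops actually finish does not matter to the lockstep.
    run_os_command("systemctl stop pvcapid.service", background=True)
    run_os_command("systemctl stop pvcnoded.service", background=True)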
Tasks are no longer bound to the primary coordinator for state updates,
since KeyDB now provides a proper shared queue and result backend, so
this warning is obsolete.
Such an event would still interrupt "--wait" commands on provisioner
tasks, but we no longer believe this warrants a warning: the affected
user can simply run "pvc cluster task" to validate or resume the
watcher.
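For context, a shared queue and result backend over KeyDB amounts to
the following in Celery terms (KeyDB speaks the Redis protocol; the
address is illustrative):

    from celery import Celery

    # Every worker consumes from the same queue and writes results to
    # the same backend, so no task is pinned to the primary coordinator.
    celery = Celery(
        "pvcworkerd",
        broker="redis://10.0.0.10:6379/0",   # assumed KeyDB address
        backend="redis://10.0.0.10:6379/0",
    )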
Removes the now-obsolete "pvc provisioner status" command and replaces
it with a generalized "pvc cluster task" command to show all
currently-active or pending tasks on the cluster workers.
Move the create_vm and run_benchmark tasks to use the new Celery
subsystem, handlers, and wait command. Remove the obsolete, dedicated
API endpoints.
Standardize the CLI client and move the repeated handler code into a
separate common function.
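A sketch of the consolidated handler; the function name and API client
method are hypothetical, not the actual CLI code:

    import time

    def wait_for_celery_task(api_client, task_id, wait=True):
        # Shared by all task-submitting commands: either return the
        # task ID immediately, or poll the cluster task status until a
        # terminal state is reached.
        if not wait:
            return f"Task ID: {task_id}"
        while True:
            task = api_client.get_task_status(task_id)  # assumed method
            if task["state"] in ("SUCCESS", "FAILURE"):
                return task
            time.sleep(0.5)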
Previously, we were assigning memalloc/memprov/vcpualloc during an
earlier phase using the main d_domain list. I'm not sure exactly why,
but this was throwing off stats after a fence. Instead, set these values
later on while parsing the actually-active VMs.
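In rough terms the fix moves the accumulation into the later pass over
running VMs (a sketch; the attribute names are illustrative):

    def gather_vm_stats(active_domains):
        # Accumulate counters while walking the actually-active VMs,
        # rather than precomputing them from the full d_domain list in
        # an earlier phase.
        memalloc = memprov = vcpualloc = 0
        for domain in active_domains:
            memalloc += domain.memory_allocated
            memprov += domain.memory_provisioned
            vcpualloc += domain.vcpus
        return memalloc, memprov, vcpualloc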
This is still needed due to the nature of the locks and how they are
freed on startup, and to preserve lock=fail behaviour on VM startup.
Also fixes the fencing lock flush to directly use the client library
outside of Celery. I don't like this hack but it seems prudent until we
move fencing to the workers as well.
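The hack amounts to calling the client library's flush function
synchronously from the fence handler instead of dispatching a Celery
task (a sketch; the module path, function signature, and logger API are
assumptions):

    import daemon_lib.vm as pvc_vm  # assumed client library module

    def flush_locks_after_fence(zkhandler, domain_uuid, logger):
        # Bypass Celery entirely: flush the fenced VM's locks directly.
        retcode, retmsg = pvc_vm.flush_locks(zkhandler, domain_uuid)
        if not retcode:
            logger.out(f"Failed to flush locks: {retmsg}", state="e")
        return retcode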