When using the "state", "node", or "tag" arguments to a VM list, add
support for a "negate" flag to look for all VMs *not in* the state,
node, or tag state.
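A minimal sketch of the negated filtering, assuming a hypothetical list of
VM dicts and a `filter_vms` helper (neither name comes from the actual code):

```python
# Hypothetical helper: filter VM records on one field, with optional negation.
def filter_vms(vms, field, value, negate=False):
    matched = []
    for vm in vms:
        if field == "tag":
            is_match = value in vm.get("tags", [])
        else:
            is_match = vm.get(field) == value
        # Comparing against the negate flag inverts the selection when requested
        if is_match != negate:
            matched.append(vm)
    return matched

# Example: all VMs *not* currently on node "hv1"
# filter_vms(vm_data, "node", "hv1", negate=True)
```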
Allows choosing different list and info functions based on the benchmark
version found. Currently only implements "legacy" version 0 with more to
be added.
1. Move to a time-based (60s) benchmark to avoid these taking an absurd
amount of time to show the same information.
2. Eliminate the 256k random benchmarks, since they don't really add
anything.
3. Add in a 4k single-queue benchmark as this might provide valuable
insight into latency.
4. Adjust the output to reflect the above changes.
While this does change the benchmarking, this should not invalidate any
existing benchmarks since most of the test suite is unchanged (especially
the most important 4M sequential and 4K random tests). It simply removes
an unused entry and adds a more helpful one. The time-based change
should not significantly affect the results either; it just reduces the
total runtime for long tests and increases the runtime for quick tests to
provide a better picture.
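As a rough illustration of the new shape of a single test, the time-based
runs can be expressed as an fio invocation along these lines (the exact
flags, names, and device path are assumptions, not the precise commands
used by the benchmark):

```python
import subprocess

# Illustrative only: run one time-based fio test and return its JSON output.
def run_benchmark_test(test_volume, name, bs, rw, iodepth):
    cmd = [
        "fio",
        "--name={}".format(name),
        "--filename={}".format(test_volume),
        "--time_based",
        "--runtime=60",   # 60-second time-based run instead of a fixed data size
        "--bs={}".format(bs),
        "--rw={}".format(rw),
        "--iodepth={}".format(iodepth),
        "--ioengine=libaio",
        "--direct=1",
        "--output-format=json",
    ]
    return subprocess.run(cmd, capture_output=True, check=True).stdout

# The new 4k single-queue latency test might then look like:
# run_benchmark_test("/dev/rbd0", "rand_read_4K_singlequeue", "4k", "randread", 1)
```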
The previous implementation did not work with /dev/nvme devices or any
/dev/disk/by-* devices due to logic errors in the partition naming
scheme, so fix these, and be explicit about what is supported in the PVC
CLI command output.
The 'echo | gdisk' implementation of partition creation also did not
work due to limitations of subprocess.run; instead, use sgdisk which
allows these commands to be written out explicitly and is included in
the same package as gdisk.
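For illustration, the sgdisk approach needs no interactive stdin at all and
can be driven directly through subprocess; the partition layout below is an
assumption:

```python
import subprocess

# Illustrative sketch: zap the device and create one partition spanning the
# whole disk with sgdisk, rather than piping interactive commands into gdisk
# (which subprocess.run cannot do cleanly).
def prepare_device(device):
    subprocess.run(["sgdisk", "--zap-all", device], check=True)
    subprocess.run(["sgdisk", "--new=1:0:0", device], check=True)

# No partition-name string manipulation is needed here, so /dev/sdX,
# /dev/nvmeXnY, and /dev/disk/by-* paths all behave the same way.
```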
The default of 0.05 (5%) is likely ideal in the initial implementation,
but allow this to be set explicitly for maximum flexibility in
space-constrained or performance-critical use-cases.
Adds in three parts:
1. Create an API endpoint to create OSD DB volume groups on a device.
Passed through to the node via the same command pipeline as
creating/removing OSDs, and creates a volume group with a fixed name
(osd-db).
2. Adds API support for specifying whether or not to use this DB volume
group when creating a new OSD via the "ext_db" flag. Naming and sizing
are fixed for simplicity and based on Ceph recommendations (5% of OSD
size). The Zookeeper schema tracks the block device to use during
removal.
3. Adds CLI support for the new and modified API endpoints, as well as
displaying the block device and DB block device in the OSD list.
While I debated supporting adding a DB device to an existing OSD, in
practice this ended up being a very complex operation involving stopping
the OSD and setting some options, so this is not supported; this can be
specified during OSD creation only.
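A minimal sketch of the sizing logic; only the 5% default and the fixed
"osd-db" volume group name come from the change itself, while the helper
and LV naming are assumptions:

```python
import subprocess

OSD_DB_VG = "osd-db"   # fixed volume group name created in part 1

# Hypothetical sketch: carve out a DB logical volume for a new OSD at a
# configurable ratio of the OSD block device size (default 5%).
def create_osd_db_lv(osd_id, osd_size_bytes, ext_db_ratio=0.05):
    db_size_mb = int(osd_size_bytes * ext_db_ratio / 1024 / 1024)
    lv_name = "osd-db-{}".format(osd_id)   # assumed naming scheme
    subprocess.run(
        ["lvcreate", "-L", "{}M".format(db_size_mb), "-n", lv_name, OSD_DB_VG],
        check=True,
    )
    return "/dev/{}/{}".format(OSD_DB_VG, lv_name)
```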
Closes#142
Adds a new API endpoint to support hot attach/detach of devices, and the
corresponding client-side logic to use this endpoint when doing VM
network/storage add/remove actions.
The live attach is now the default behaviour for these types of
additions and removals, and can be disabled if needed.
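On the node side, the live attach portion maps onto the standard libvirt
device-attach call; a sketch assuming the device arrives as an XML fragment:

```python
import libvirt

# Sketch: hot-attach a device XML fragment to a running VM, and also persist
# it in the inactive config so the change survives a restart.
def attach_device(vm_name, device_xml):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(vm_name)
        flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
        dom.attachDeviceFlags(device_xml, flags)
    finally:
        conn.close()
```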
Closes#141
Before, "-y"/"--yes" only confirmed the reboot portion. Instead, modify
this to confirm both the diff portion and the restart portion, and add
separate flags to bypass one or the other independently, ensuring the
administrator has lots of flexibility. UNSAFE mode implies "-y" so both
would be auto-confirmed if that option is set.
Add an additional protected class, limit manipulation to one at a time,
and ensure future flexibility. Also make the display consistent with
other VM elements.
Matches the new node list output format with the additional header line,
as well as revamps some other aspects:
1. Adjusts the UUID to be under the name in the info output.
2. Removes the UUID from the list output to save space, because this
is generally not needed in day-to-day quick-list output.
3. Renames the "Node" header to "Current" to better reflect what
that column actually means and avoid conflicting with the parent
header.
Also adjusts the layout of the node list output to avoid excessively
long lines. Adds another header line with categories and spacing dashes
for easier visual parsing.
Also fixes up the Debian packaging such that this works how I would
want, with proper module installation while leaving everything else
untouched. Finally implements automatic installation and removal of the
BASH completion for the PVC command.
For the "vm modify", revamp the way confirmations are presented. Do the
edits/load, show changes, verify XML, then prompt to write and the
restart. The previous order didn't make much sense.
For any of these `--restart` triggered VM modifications, also alter how
the confirmation works. If the user declines the restart, do not abort;
instead, just set restart=False and continue with the modification.
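A sketch of that confirmation flow using click, which the CLI is built on;
the prompt text is illustrative:

```python
import click

# Illustrative flow: declining the restart no longer aborts the whole action,
# it just downgrades the change to a non-restarting one.
def confirm_restart(restart, yes_flag):
    if restart and not yes_flag:
        try:
            click.confirm("Restart VM to apply changes", prompt_suffix="? ", abort=True)
        except click.Abort:
            restart = False   # continue with the modification, without a restart
    return restart
```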
Allows another way (beyond --yes) to avoid confirming "unsafe"
operations. While there is probably nearly zero use case for this (at
least for any sane admin), it is provided to allow maximum flexibility.
Sets in the node daemon, returns via the API, and shows in the CLI
information about the live VNC listen address and port for VNC-enabled
VMs.
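The VNC details can be read out of the live domain XML; a rough sketch
assuming the standard <graphics> element:

```python
from xml.etree import ElementTree

# Sketch: pull the live VNC listen address and port from a running domain's XML.
def get_vnc_info(dom):
    root = ElementTree.fromstring(dom.XMLDesc(0))
    graphics = root.find("devices/graphics[@type='vnc']")
    if graphics is None:
        return None
    return {"listen": graphics.get("listen"), "port": graphics.get("port")}
```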
Closes#115
Adds cluster backup (JSON dump) and restore functions for use in
disaster recovery.
Further, adds additional confirmation to the initialization (as well as
restore) endpoints to avoid accidental triggering, and also groups the
init, backup, and restore commands in the CLI into a new "task"
subsection.
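The backup is conceptually a recursive Zookeeper dump serialized to JSON; a
simplified sketch using a kazoo connection (path handling and encoding
details are assumptions):

```python
import json

# Simplified sketch: walk the Zookeeper tree and dump every key's data to JSON.
def backup_cluster(zk_conn, root="/"):
    dump = {}

    def walk(path):
        data, _stat = zk_conn.get(path)
        dump[path] = data.decode("utf-8", errors="replace") if data else ""
        for child in zk_conn.get_children(path):
            walk(path.rstrip("/") + "/" + child)

    walk(root)
    return json.dumps(dump)
```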
Before, the default False was problematic and would reset consoles if
the template was otherwise modified. Instead switch the flags to be full
true/false flags, and on modify, adjust the default to be None so they
will not be changed.
Allow a VM to specify its migration type as a default choice. The valid
options are "default" (i.e. behave as now), "live" which forces a live
migration only, and "shutdown" which forces a shutdown migration only.
The new option is treated as a VM meta option and is set to default if
not found.
Adds a separate field to the node memory, "provisioned", which totals
the amount of memory provisioned to all VMs on the node, regardless of
state, and in contrast to "allocated" which only counts running VMs.
Allows for the detection of potential overprovisioning when factoring in
non-running VMs.
Includes the supporting code to get this data, since the original
implementation of VM memory selection was dependent on the VM being
running and getting this from libvirt. Now, if the VM is not active, it
gets this from the domain XML instead.
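For inactive VMs the provisioned memory therefore comes out of the stored
domain XML rather than libvirt's live statistics; a sketch of that fallback
(units assumed to be libvirt's default KiB):

```python
from xml.etree import ElementTree

# Sketch: report a VM's provisioned memory in MiB whether or not it is running.
def get_vm_memory_mib(dom):
    if dom.isActive():
        # Running VM: libvirt reports the maximum memory in KiB
        return dom.maxMemory() // 1024
    # Inactive VM: fall back to the <memory> element of the domain XML (KiB)
    root = ElementTree.fromstring(dom.XMLDesc(0))
    return int(root.find("memory").text) // 1024
```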
Since these will almost always connect to an IP rather than a "real"
hostname, don't verify the SSL cert (if applicable). Also allow the
overriding of SSL verification via an environment variable.
As a consequence, to reduce spam, SSL warnings are disabled for urllib3.
Instead, we warn in the "Using cluster" output whenever verification is
disabled.
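A sketch of the client-side behaviour; the environment variable name here is
illustrative rather than the one actually used:

```python
import os
import requests
import urllib3

# Suppress per-request InsecureRequestWarning spam; the "Using cluster" output
# is responsible for warning the user when verification is disabled.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Illustrative override: verification defaults to off, but can be re-enabled.
SSL_VERIFY = os.environ.get("PVC_CLIENT_SSL_VERIFY", "no").lower() in ("yes", "true", "1")

def get_status(api_base):
    return requests.get("{}/status".format(api_base), verify=SSL_VERIFY)
```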
Don't try to chmod every time, instead only chmod when first creating
the file. Also allow loading the default permission from an envvar
rather than hardcoding it.
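A sketch of the intended behaviour; the environment variable name and
default mode are placeholders:

```python
import os

# Sketch: only chmod when the store file is first created, and let an
# environment variable override the default octal mode.
def write_store(store_path, data):
    newfile = not os.path.exists(store_path)
    with open(store_path, "w") as fh:
        fh.write(data)
    if newfile:
        mode = int(os.environ.get("PVC_CLIENT_STORE_MODE", "600"), 8)  # placeholder name
        os.chmod(store_path, mode)
```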
Provide textual explanations for the degraded status, including
specific node/VM/OSD issues as well as detailed Ceph health. "Single
pane of glass" mentality.
Makes this output a little more realistic and allows proper monitoring
of the Ceph cluster status (separate from the PVC status which is
tracking only OSD up/in state).
Allow specifying arbitrary provisioner script install() args on the
provisioner create CLI, either overriding or adding additional per-VM
arguments to those found in the profile. A reference example is setting
a "vm_fqdn" on a per-run basis.
Closes#100
Provides a CLI and API argument to force live migration, which triggers
a new VM state "migrate-live". The node daemon VMInstance during migrate
will read this flag from the state and, if enforced, will not trigger a
shutdown migration.
Closes#95
Instead of using group-based validation, which breaks the help context
for subcommands, use a decorator to validate the cluster status for each
command. The eager help option will then override this decorator for
help commands, while enforcing it for others.
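The decorator approach looks roughly like this; `get_cluster_status` stands
in for whatever connectivity check the CLI actually performs:

```python
from functools import wraps

import click

# Sketch: validate cluster connectivity per command rather than per group, so
# `--help` on subcommands still works without a reachable cluster.
def cluster_req(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        retcode, _msg = get_cluster_status()   # hypothetical connectivity check
        if retcode != 0:
            click.echo("Error: cluster is not reachable or not configured")
            raise SystemExit(1)
        return func(*args, **kwargs)
    return wrapper
```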
Provide pretty status bars to indicate upload progress for tasks that
perform large file uploads to the API ('provisioner ova upload' and
'storage volume upload') so the administrator can gauge progress and
estimated time to completion.
Use a different wait method, querying the node status every half-second
during the transition, in order to wait for the transition to complete
if desired.
Closes#72
Add functions for uploading, listing, and removing OVA images to the API
and CLI interfaces. Includes improved parsing of the OVF and creation of
a system_template and profile for each OVA.
Also modifies some behaviour around profiles, making most components
optional at creation to support both profile types (and incomplete
profiles generally).
Implementation part 2/3 - remaining: OVA VM creation
References #71
Rename "pvcd" to "pvcnoded", and "pvc-api" to "pvcapid" so names for the
daemons are fully consistent. Update the names of the configuration
files as well to match this new formatting.
References #79
Warn the administrator if there are active provisioning jobs while
adjusting the current primary node. This is the simplest, cleanest
solution to #69 without trying to implement any hacks or blocking
operations. The administrator can then decide to revert the action
if needed, or will at least know how many jobs are running/queued and
may need to be cancelled.
Move everything from under "storage ceph" to "storage" to simplify the
CLI; additional subclasses can be re-added at a future time if and when
additional storage classes are supported.
Implements a "maintenance mode" for PVC clusters. For now, the only
thing this mode does is disable node fencing while the state is true.
This allows the administrator to tell PVC that network connectivity,
etc. might be interrupted and to avoid fencing nodes.
Closes#70
Move all API calls to a new common function called call_api to
facilitate easier future changes. Use this function to implement API key
handling via request header value as well as integrate the request URI
generation and debug output handling.
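A condensed sketch of such a wrapper; the header name and config keys are
assumptions:

```python
import requests

# Condensed sketch: one place to build the URI, attach the API key header,
# and optionally emit debug output for every request.
def call_api(config, operation, request_uri, params=None, data=None):
    uri = "{}://{}{}{}".format(
        config["api_scheme"], config["api_host"], config["api_prefix"], request_uri
    )
    headers = {}
    if config.get("api_key"):
        headers["X-Api-Key"] = config["api_key"]   # assumed header name
    response = getattr(requests, operation)(uri, headers=headers, params=params, data=data)
    if config.get("debug"):
        print("API {} {} -> {}".format(operation.upper(), uri, response.status_code))
    return response
```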
Closes#65
Implements the storing of three VM metadata attributes:
1. Node limits - allows specifying a list of hosts on which the VM must
run. This limit influences the migration behaviour of VMs.
2. Per-VM node selectors - allows each VM to have its migration
autoselection method specified, to automatically allow different methods
per VM based on the administrator's preferences.
3. VM autorestart - allows a VM to be automatically restarted from a
stopped state, presumably due to a failure to find a target node (either
due to limits or otherwise) during a flush/fence recovery, on the next
node unflush/ready state of its home hypervisor. Useful mostly in
conjunction with limits to ensure that VMs which were shut down due to
there being no valid migration targets are started back up when their
node becomes ready again.
Includes the full client interaction with these metadata options,
including printing, as well as defining a new function to modify this
metadata. For the CLI it is set/modified either on `vm define` or via the
`vm meta` command. For the API it is set/modified either on a POST to
the `/vm` endpoint (during VM definition) or on POST to the `/vm/<vm>`
endpoint. For the API this replaces the previous reserved word for VM
creation from scratch as this will no longer be implemented in-daemon
(see #22).
Closes#52
This is likely not going to be used with the planned implementation of
the automatic provisioning daemon, which will be either an API client or
direct Python binding client.
Includes a simple implementation of a zookeeper "rename" facility,
allowing a key and all data to be replaced by a new key with a different
name but containing all the same child elements and data.
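In outline, the rename is a recursive copy to the new key followed by
removal of the old one; a simplified sketch against a kazoo connection:

```python
# Simplified sketch: copy a key and all of its children to a new name, then
# remove the original, which is effectively a rename in Zookeeper terms.
def rename_key(zk_conn, old_path, new_path):
    data, _stat = zk_conn.get(old_path)
    zk_conn.create(new_path, data, makepath=True)
    for child in zk_conn.get_children(old_path):
        rename_key(zk_conn, old_path + "/" + child, new_path + "/" + child)
    zk_conn.delete(old_path)
```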
[2/2] Implements #44
Add the "storage" prefix to all Ceph-based commands in both the CLI and
the API. This partially abstracts the storage subsystem from the Ceph
tool specifically, should future storage subsystems be added or changed.
The name "ceph" is still used due to the non-abstracted components of
the Ceph management, e.g. referencing Ceph-specific concepts like OSDs
or pools.
This just seemed like more trouble than it was worth. Flush locks were
originally intended as a way to counteract the weird issues around
flushing that were mostly fixed by the code refactoring, so this will
help test whether those issues are truly gone. If not, will look into a
cleaner solution that doesn't result in unchangeable states.
Previously, if a VM was force-migrated, the previous_node value would be
updated to the current node, which is likely never what an administrator
would want. Change this functionality so that the previous node value is
not changed, and update the documentation to reflect this.
Implements a locking mechanism to prevent clobbering of node
flushes. When a flush begins, a global cluster lock is placed
which is freed once the flush completes. While the lock is in place,
other flush events queue waiting for the lock to free before
proceeding.
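Conceptually this is the standard Zookeeper lock recipe; a simplified
sketch using kazoo's Lock helper (the lock path and flush routine are
assumptions):

```python
# Simplified sketch: serialize node flushes behind a single cluster-wide lock.
def locked_flush(zk_conn, node_name):
    lock = zk_conn.Lock("/locks/flush", identifier=node_name)  # assumed lock path
    with lock:                 # blocks while another node's flush holds the lock
        perform_flush(node_name)   # hypothetical flush routine
```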
Modifies the CLI output flow when the `--wait` option is specified.
First, if a lock exists when running the command, the message is
tweaked to indicate this, and the client will wait first for the
lock to free, and then for the flush as normal. Second, the wait
depends on the active lock rather than the domain_status for
consistency purposes.
Closes#32