Implements storage of three VM metadata attributes (a storage sketch
follows the list):
1. Node limits - allows specifying a list of hosts on which the VM must
run. This limit influences the migration behaviour of VMs.
2. Per-VM node selectors - allows each VM to specify its own migration
autoselection method, so that different VMs can use different methods
based on the administrator's preferences.
3. VM autorestart - allows a VM to be automatically restarted from a
stopped state, presumably entered because no target node could be found
(either due to limits or otherwise) during a flush/fence recovery, on
the next unflush/ready state of its home hypervisor. Useful mostly in
conjunction with limits, to ensure that VMs which were shut down because
there were no valid migration targets are started back up when their
node becomes ready again.
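A minimal sketch of how these three attributes could be stored as
per-VM Zookeeper keys using kazoo; the paths and encodings here are
illustrative assumptions, not the exact PVC schema:
```
# Illustrative only: the real PVC key layout may differ.
from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()

def set_vm_metadata(dom_uuid, node_limit, node_selector, node_autostart):
    base = '/domains/{}'.format(dom_uuid)
    # node_limit: comma-separated list of allowed hosts ('' = no limit)
    zk.ensure_path(base + '/node_limit')
    zk.set(base + '/node_limit', ','.join(node_limit).encode())
    # node_selector: per-VM migration autoselection method (e.g. 'mem')
    zk.ensure_path(base + '/node_selector')
    zk.set(base + '/node_selector', node_selector.encode())
    # node_autostart: restart a stopped VM on the next node unflush/ready
    zk.ensure_path(base + '/node_autostart')
    zk.set(base + '/node_autostart', str(node_autostart).encode())

set_vm_metadata('1a2b3c4d', ['hv1', 'hv2'], 'mem', True)
```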
Includes the full client interaction with these metadata options,
including printing, as well as a new function to modify the metadata.
For the CLI it is set/modified either on `vm define` or via the
`vm meta` command. For the API it is set/modified either on a POST to
the `/vm` endpoint (during VM definition) or on a POST to the `/vm/<vm>`
endpoint. For the API this replaces the word previously reserved for VM
creation from scratch, since that will no longer be implemented
in-daemon (see #22).
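As a hedged illustration of the API side, a client might POST these
fields with requests; the endpoint host and field names mirror the
metadata attributes above but are assumptions, not the authoritative
API schema:
```
# Illustrative client calls; endpoint host and field names are assumed.
import requests

# Define a new VM with metadata (POST to /vm)
requests.post('http://pvc-api.local/vm', data={
    'xml': open('vm.xml').read(),
    'node_limit': 'hv1,hv2',
    'node_selector': 'mem',
    'node_autostart': 'True',
})

# Modify metadata on an existing VM (POST to /vm/<vm>)
requests.post('http://pvc-api.local/vm/myvm', data={
    'node_selector': 'vcpus',
})
```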
Closes #52
This just trashed the formatting of the string whenever a network didn't
exist, despite several previous attempts to get it to align. Give up:
set the colour for the whole network list if any one network is invalid.
This is not as nice as per-network colouring, but it saves the hassle
and complexity.
This just seemed like more trouble than it was worth. Flush locks were
originally intended as a way to counteract the weird issues around
flushing that were mostly fixed by the code refactoring, so removing
them will help test whether those issues are truly gone. If not, look
into a cleaner solution that doesn't result in unchangeable states.
Store a basic list of RBD disks in Zookeeper for access by the node
subsystem to handle RBD locks. This avoids the need to implement complex
parsing logic inside the fencing configuration (or elsewhere).
Also handle malformed XML content properly during VM define.
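A rough sketch of what that parsing could look like at define time,
assuming xml.etree and a comma-separated Zookeeper value; the element
paths and key names are illustrative, not the exact PVC code:
```
# Illustrative only: catches malformed XML cleanly at define time.
from xml.etree import ElementTree

def get_rbd_list(xml_data):
    try:
        root = ElementTree.fromstring(xml_data)
    except ElementTree.ParseError:
        return None  # signal malformed XML to the caller
    rbds = []
    for disk in root.findall('devices/disk'):
        source = disk.find('source')
        if disk.get('type') == 'network' and source is not None \
                and source.get('protocol') == 'rbd':
            rbds.append(source.get('name'))
    return rbds

example = """<domain><devices>
  <disk type='network' device='disk'>
    <source protocol='rbd' name='vms/myvm_disk0'/>
  </disk>
</devices></domain>"""
print(get_rbd_list(example))  # ['vms/myvm_disk0']
# The resulting list would then be written to a per-VM Zookeeper key
# (e.g. a comma-separated string) for the node subsystem to consume.
```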
Previously, force-migrating a VM updated the previous_node value to the
current node, which is likely never what an administrator would want.
Change this behaviour so that the previous node value is not changed on
a forced migration, and update the documentation to reflect this.
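Illustratively, the fix amounts to guarding the last-node update behind
the force flag; the function shape and key names here are assumptions:
```
# Illustrative only: real PVC key names and flow may differ.
def migrate_vm(zk, dom_uuid, target_node, force_migrate=False):
    current_node, _ = zk.get('/domains/{}/node'.format(dom_uuid))
    if not force_migrate:
        # Record the origin so an unmigrate can return the VM home; a
        # forced migration now leaves the previous node value untouched.
        zk.set('/domains/{}/lastnode'.format(dom_uuid), current_node)
    zk.set('/domains/{}/node'.format(dom_uuid), target_node.encode())
```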
This is the largest of the function files and, unlike the others, splits
cleanly into four types. Reorganize the file and function definitions
around those types to make it easier to navigate, and do so separately
before refactoring for the API.
Don't try to queue up a flush when there is already a flush lock;
direct the user to either use `--wait` (which will actually wait before
triggering the new action) or try again later.
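A hedged sketch of the client-side check, assuming a kazoo connection
and a lock path that are illustrative rather than the real PVC ones:
```
# Illustrative only: lock path and messages are assumptions.
import sys
import time
from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()

LOCK_PATH = '/locks/flush_lock'  # assumed location of the global lock

def request_flush(node, wait=False):
    if zk.exists(LOCK_PATH):
        if not wait:
            # Refuse to queue another flush behind an active one.
            print('ERROR: A flush lock exists; use --wait or try again later.')
            sys.exit(1)
        print('Waiting for the current flush to complete...')
        while zk.exists(LOCK_PATH):
            time.sleep(1)
    zk.set('/nodes/{}/domainstate'.format(node), b'flush')
```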
Prevents returning immediately, giving the cluster some breathing room
before the admin can run other commands. Also keep the write lock, to
prevent other clients from attempting the same action.
Showing the static, total number of CPUs was pointless. Instead,
show the number of allocated vCPUs. To preserve space, no longer
show the host CPU count in the list.
Implements a locking mechanism to prevent clobbering of node
flushes. When a flush begins, a global cluster lock is placed
which is freed once the flush completes. While the lock is in place,
other flush events queue waiting for the lock to free before
proceeding.
Modifies the CLI output flow when the `--wait` option is specified.
First, if a lock exists when the command is run, the message is
tweaked to indicate this, and the client waits first for the lock
to free and then for the flush as normal. Second, the wait depends
on the active lock rather than on domain_status, for consistency.
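One plausible shape for this global lock, using kazoo's Lock recipe;
the lock path, state values, and polling helper are assumptions, not
the exact PVC implementation:
```
# Illustrative only: kazoo's Lock recipe queues waiters automatically.
import time
from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()

def wait_for_flush_completion(node):
    # Placeholder polling loop; the real code watches the node's keys.
    while zk.get('/nodes/{}/domainstate'.format(node))[0] != b'flushed':
        time.sleep(0.5)

def flush_node(node):
    # If another flush holds the lock, this call blocks until it frees,
    # so competing flush events queue rather than clobbering each other.
    with zk.Lock('/locks/flush_lock', identifier=node):
        zk.set('/nodes/{}/domainstate'.format(node), b'flush')
        wait_for_flush_completion(node)
    # The lock releases here, letting the next queued flush proceed.
```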
Closes #32
Implements the ability for a client to watch almost-live domain
console logs from the hypervisors. It does this using a deque-based
"tail -f" mechanism (with a configurable buffer per-VM) that watches
the domain console logfile in the (configurable) directory every
half-second. It then stores the current buffer in Zookeeper when
changed, where a client can then request it, either as a static piece
of text in the `less` pager, or via a similar "tail -f" functionality
implemented using fixed line splitting and comparison to provide a
generally seamless output.
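A rough daemon-side sketch of the watcher loop, assuming a started
kazoo client; the buffer size, key paths, and polling interval are
illustrative:
```
# Illustrative only: the real PVC watcher and key paths may differ.
import time
from collections import deque

def watch_console_log(zk, dom_uuid, logfile, buffer_lines=1000):
    buffer = deque(maxlen=buffer_lines)  # configurable per-VM buffer
    last_data = ''
    while True:
        with open(logfile, 'r', errors='replace') as fh:
            buffer.clear()
            buffer.extend(fh.read().split('\n'))  # keep only the tail
        data = '\n'.join(buffer)
        if data != last_data:
            # Publish the new tail to Zookeeper for clients to fetch
            # statically (`less`) or follow ("tail -f" style).
            zk.set('/domains/{}/consolelog'.format(dom_uuid), data.encode())
            last_data = data
        time.sleep(0.5)  # poll the console logfile every half-second
```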
Enabling this feature requires each guest VM to implement a Libvirt
serial log and write its (text) console to it, for example using the
default logging directory:
```
<serial type='pty'>
  <log file='/var/log/libvirt/vmname.log' append='off'/>
</serial>
```
The append mode can be either on or off; on grows files unbounded,
while off causes the log (and hence the PVC log data) to be truncated
on initial VM startup from offline. The administrator must choose how
they best want to handle this until Libvirt implements its own
clog-type logging format.