Implement VM metadata and use it
Implements the storing of three VM metadata attributes:

1. Node limits - allows specifying a list of hosts on which the VM must run. This limit influences the migration behaviour of VMs.
2. Per-VM node selectors - allows each VM to have its own migration autoselection method, so different methods can apply to different VMs based on the administrator's preferences.
3. VM autorestart - allows a VM to be automatically restarted from a stopped state on the next unflush/ready transition of its home hypervisor, typically after a failure to find a target node (due to limits or otherwise) during a flush/fence recovery. Useful mostly in conjunction with limits, to ensure that VMs which were shut down because no valid migration target existed are started back up when their node becomes ready again.

Includes the full client interaction with these metadata options, including printing, as well as a new function to modify this metadata. For the CLI, it is set/modified either at `vm define` time or via the `vm meta` command. For the API, it is set/modified either on a POST to the `/vm` endpoint (during VM definition) or on a POST to the `/vm/<vm>` endpoint; a hedged request sketch follows below. For the API this replaces the previous reserved word for VM creation from scratch, as that will no longer be implemented in-daemon (see #22).

Closes #52
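As a concrete illustration of the API path, a minimal client call might look like the sketch below. Only the POST method and the `/vm/<vm>` route come from this commit; the base URL and the metadata field names (here reusing the Zookeeper key names `node_limit`, `node_selector`, `node_autostart` visible in the diff) are assumptions for illustration, not confirmed parameter names.

# Hypothetical sketch: set the three VM metadata attributes on an existing
# VM via a POST to /vm/<vm>. The route is from the commit message; the base
# URL, port, and field names are assumed for illustration only.
import requests

metadata = {
    'node_limit': 'hv1,hv2',   # assumed field: hosts this VM must run on
    'node_selector': 'mem',    # assumed field: per-VM migration autoselection method
    'node_autostart': 'True',  # assumed field: restart when the home node is ready again
}

response = requests.post('http://pvc-api.local:7370/vm/testvm', data=metadata)
print(response.status_code, response.text)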
@@ -382,7 +382,7 @@ class NodeInstance(object):
 
             self.logger.out('Selecting target to migrate VM "{}"'.format(dom_uuid), state='i')
 
-            target_node = common.findTargetHypervisor(self.zk_conn, 'mem', dom_uuid)
+            target_node = common.findTargetHypervisor(self.zk_conn, self.config, dom_uuid)
 
             # Don't replace the previous node if the VM is already migrated
             if zkhandler.readdata(self.zk_conn, '/domains/{}/lastnode'.format(dom_uuid)):
@@ -390,9 +390,10 @@ class NodeInstance(object):
             else:
                 current_node = zkhandler.readdata(self.zk_conn, '/domains/{}/node'.format(dom_uuid))
 
-            if target_node == None:
-                self.logger.out('Failed to find migration target for VM "{}"; shutting down'.format(dom_uuid), state='e')
+            if target_node is None:
+                self.logger.out('Failed to find migration target for VM "{}"; shutting down and setting autostart flag'.format(dom_uuid), state='e')
                 zkhandler.writedata(self.zk_conn, { '/domains/{}/state'.format(dom_uuid): 'shutdown' })
+                zkhandler.writedata(self.zk_conn, { '/domains/{}/node_autostart'.format(dom_uuid): 'True' })
 
             # Wait for the VM to shut down
             while zkhandler.readdata(self.zk_conn, '/domains/{}/state'.format(dom_uuid)) != 'stop':
@@ -427,6 +428,19 @@ class NodeInstance(object):
                 self.flush_stopper = False
                 return
 
+            # Handle autostarts
+            autostart = zkhandler.readdata(self.zk_conn, '/domains/{}/node_autostart'.format(dom_uuid))
+            node = zkhandler.readdata(self.zk_conn, '/domains/{}/node'.format(dom_uuid))
+            if autostart == 'True' and node == self.name:
+                self.logger.out('Starting autostart VM "{}"'.format(dom_uuid), state='i')
+                zkhandler.writedata(self.zk_conn, {
+                    '/domains/{}/state'.format(dom_uuid): 'start',
+                    '/domains/{}/node'.format(dom_uuid): self.name,
+                    '/domains/{}/lastnode'.format(dom_uuid): '',
+                    '/domains/{}/node_autostart'.format(dom_uuid): 'False'
+                })
+                continue
+
             try:
                 last_node = zkhandler.readdata(self.zk_conn, '/domains/{}/lastnode'.format(dom_uuid))
             except:
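The first hunk above swaps the hardcoded 'mem' selector passed to common.findTargetHypervisor for the daemon config, implying the function now resolves the selection method (and node limit) per VM. Below is a minimal sketch of that resolution, assuming hypothetical Zookeeper keys /domains/<uuid>/node_selector and /domains/<uuid>/node_limit, a hypothetical getHypervisors() helper, an assumed /nodes/<node>/memfree key, and an assumed 'migration_target_selector' config key; only the new signature is confirmed by the diff.

def findTargetHypervisor(zk_conn, config, dom_uuid):
    # Prefer the per-VM selector; fall back to the cluster-wide default
    selector = zkhandler.readdata(zk_conn, '/domains/{}/node_selector'.format(dom_uuid))
    if not selector or selector == 'none':
        selector = config['migration_target_selector']  # assumed config key

    # Hypothetical node-limit filter: restrict candidates to the listed hosts
    candidates = getHypervisors(zk_conn, dom_uuid)  # assumed helper: ready nodes able to host the VM
    limit = zkhandler.readdata(zk_conn, '/domains/{}/node_limit'.format(dom_uuid))
    if limit:
        candidates = [node for node in candidates if node in limit.split(',')]

    if selector == 'mem':
        # The previously hardcoded behaviour: pick the candidate with the most free memory
        return max(candidates, key=lambda node: int(zkhandler.readdata(
            zk_conn, '/nodes/{}/memfree'.format(node))), default=None)
    # ... other selector methods would dispatch here
    return None

The node-limit filter also shows how a VM ends up on the autostart path: if every host in the limit list is flushed or down, candidates is empty, the function returns None, and the second hunk's shutdown-and-flag logic takes over until the home node's next unflush.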