Compare commits

...

15 Commits

Author SHA1 Message Date
9dc5097dbc Bump version to 0.9.85 2023-12-10 01:00:33 -05:00
5776cb3a09 Remove Prometheus client dependencies
We don't actually use this (yet!) so remove the dependency for now.
2023-12-10 00:58:09 -05:00
53d632f283 Fix bug in example PVC Grafana dashboard 2023-12-10 00:50:05 -05:00
7bc0760b78 Add time to "starting keepalive" message
Matches the pvchealthd output and adds a useful timestamp to this
otherwise contextless message.
2023-12-10 00:40:32 -05:00
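The change above amounts to interpolating the current time into the keepalive log line. A minimal illustration (not the daemon's actual logger call); note the interpolation only happens when the string carries the `f` prefix:

```python
from datetime import datetime

# The timestamp is only interpolated when the string is an f-string;
# without the "f" prefix the literal text "{datetime.now()}" would be logged.
message = f"Starting node keepalive run at {datetime.now()}"
```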
9aee2a9075 Bump version to 0.9.84 2023-12-09 23:05:40 -05:00
8f0ae3e2dd Fix config file for database migrations 2023-12-09 22:51:54 -05:00
946d3eaf43 Add wait after stopping VM 2023-12-09 18:14:03 -05:00
1f6347d24b Add Prometheus monitoring examples 2023-12-09 17:42:51 -05:00
e8552b471b Require at least one FAULT_ID 2023-12-09 17:31:56 -05:00
fc443a323b Allow ack/delete of multiple faults at once 2023-12-09 17:28:13 -05:00
b0557edb76 Ensure entry in name is uppercase 2023-12-09 17:01:41 -05:00
47bd7bf2f5 Only run cluster-wide health checks on primary
Avoids multiple coordinators trying to write updated cluster-wide fault
events. Instead, they are now only written by the primary (or the
incoming primary if still in a transition).
2023-12-09 16:50:51 -05:00
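The gating described above reduces to a simple check of the coordinator state (a sketch; `should_run_cluster_checks` is a hypothetical name, the state strings come from the health daemon diff in this compare):

```python
def should_run_cluster_checks(coordinator_state):
    # Only the primary (or the incoming primary during a takeover
    # transition) writes cluster-wide fault events; all other
    # coordinator states skip them to avoid duplicate writes.
    return coordinator_state in ("primary", "takeover")
```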
b9fbfe2ed5 Improve fault ID format
Instead of using random hex characters from an md5sum, use a nice name
in all-caps similar to how Ceph does. This further helps prevent dupes
but also permits a changing health delta within a single event (which
would really only ever apply to plugin faults).
2023-12-09 16:48:14 -05:00
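The difference between the two ID schemes can be sketched as follows (the fault values here are hypothetical; the old scheme's inputs mirror the removed md5-based code in the health daemon diff):

```python
from hashlib import md5

# Old scheme: 8 hex characters of an md5 over the fault name, delta,
# and core message; opaque, and a changed health delta produced a
# brand-new fault ID.
old_id = md5("ceph_warn 10 OSDMAP_FLAGS".encode("utf-8")).hexdigest()[:8]

# New scheme: a readable all-caps name, Ceph-style, which stays stable
# even when the health delta of the same underlying event changes.
new_id = "CEPH_WARN_OSDMAP_FLAGS"
```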
764e3e3722 Fix bug in fault header format 2023-12-09 16:47:56 -05:00
7e6d922877 Improve fault detail handling further
Since we already had a "details" field, simply move where it gets added
to the message later, in generate_fault, after the main message value
was used to generate the ID.
2023-12-09 16:13:36 -05:00
21 changed files with 2773 additions and 82 deletions


@@ -1 +1 @@
0.9.83
0.9.85


@@ -1,5 +1,21 @@
## PVC Changelog
###### [v0.9.85](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.85)
* [Packaging] Fixes a dependency bug introduced in 0.9.84
* [Node Daemon] Fixes an output bug during keepalives
* [Node Daemon] Fixes a bug in the example Prometheus Grafana dashboard
###### [v0.9.84](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.84)
**Breaking Changes:** This release features a major reconfiguration of how monitoring and reporting of cluster health works. Node health plugins now report "faults", as do several other issues which were previously checked for manually in the "cluster" daemon library for the "/status" endpoint, from within the Health daemon. These faults are persistent: each identifier is triggered once, and subsequent triggers simply update its "last reported" time. An additional set of API endpoints and commands is added to manage these faults, either by "ack"(nowledging) them (keeping the alert around to be further updated but setting its health delta to 0%) or "delete"ing them (completely removing the fault unless it retriggers), individually, (from the CLI) in batches, or all at once. Cluster health reporting is now based entirely on these faults, and the default interval for health checks is reduced to 15 seconds to accommodate this. In addition, Prometheus metrics have been added for the PVC cluster itself, along with an example Grafana dashboard, as well as a proxy to the Ceph cluster metrics. This release also fixes some bugs in the VM provisioner that were introduced in 0.9.83; these fixes require a **reimport or reconfiguration of any provisioner scripts**; reference the updated examples for details.
* [All] Adds persistent fault reporting to clusters, replacing the old cluster health calculations.
* [API Daemon] Adds cluster-level Prometheus metric exporting as well as a Ceph Prometheus proxy to the API.
* [CLI Client] Improves formatting output of "pvc cluster status".
* [Node Daemon] Fixes several bugs and enhances the working of the psql health check plugin.
* [Worker Daemon] Fixes several bugs in the example provisioner scripts, and moves the libvirt_schema library into the daemon common libraries.
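Conceptually, the new fault-based health model works like this (an illustrative sketch only; the daemon's exact arithmetic may differ): each active fault subtracts its health delta from a 100% baseline, while an acknowledged fault stays on record but contributes 0%.

```python
def cluster_health(faults):
    # faults: list of dicts with "delta" (a health percentage penalty)
    # and "status" ("new" or "ack"); acknowledged faults contribute 0%.
    penalty = sum(f["delta"] for f in faults if f["status"] != "ack")
    return max(0, 100 - penalty)

faults = [
    {"delta": 10, "status": "new"},  # e.g. a Ceph HEALTH_WARN
    {"delta": 50, "status": "ack"},  # acknowledged: kept, but ignored
]
```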
###### [v0.9.83](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.83)
**Breaking Changes:** This release features a breaking change for the daemon config. A new unified "pvc.conf" file is required for all daemons (and the CLI client for Autobackup and API-on-this-host functionality), which will be written by the "pvc" role in the PVC Ansible framework. Using the "update-pvc-daemons" oneshot playbook from PVC Ansible is **required** to update to this release, as it will ensure this file is written to the proper place before deploying the new package versions, and also ensure that the old entries are cleaned up afterwards. In addition, this release fully splits the node worker and health subsystems into discrete daemons ("pvcworkerd" and "pvchealthd") and packages ("pvc-daemon-worker" and "pvc-daemon-health") respectively. The "pvc-daemon-node" package also now depends on both packages, and the "pvc-daemon-api" package can now be reliably used outside of the PVC nodes themselves (for instance, in a VM) without any strange cross-dependency issues.


@@ -27,7 +27,7 @@ from distutils.util import strtobool as dustrtobool
import daemon_lib.config as cfg
# Daemon version
version = "0.9.83"
version = "0.9.85"
# API version
API_VERSION = 1.0


@@ -538,14 +538,15 @@ def cli_cluster_fault_list(limit, format_function):
name="ack",
short_help="Acknowledge a cluster fault.",
)
@click.argument("fault_id")
@click.argument("fault_id", nargs=-1, required=True)
@connection_req
def cli_cluster_fault_acknowledge(fault_id):
"""
Acknowledge the cluster fault FAULT_ID.
Acknowledge the cluster fault FAULT_ID; multiple FAULT_IDs may be specified.
"""
retcode, retdata = pvc.lib.faults.acknowledge(CLI_CONFIG, fault_id)
faults = list(fault_id)
retcode, retdata = pvc.lib.faults.acknowledge(CLI_CONFIG, faults)
finish(retcode, retdata)
@@ -574,14 +575,15 @@ def cli_cluster_fault_acknowledge_all():
name="delete",
short_help="Delete a cluster fault.",
)
@click.argument("fault_id")
@click.argument("fault_id", nargs=-1, required=True)
@connection_req
def cli_cluster_fault_delete(fault_id):
"""
Delete the cluster fault FAULT_ID.
Delete the cluster fault FAULT_ID; multiple FAULT_IDs may be specified.
"""
retcode, retdata = pvc.lib.faults.delete(CLI_CONFIG, fault_id)
faults = list(fault_id)
retcode, retdata = pvc.lib.faults.delete(CLI_CONFIG, faults)
finish(retcode, retdata)


@@ -388,13 +388,13 @@ def cli_cluster_fault_list_format_short(CLI_CONFIG, fault_data):
fault_id_length + fault_status_length + fault_health_delta_length + 2
)
detail_header_length = (
fault_health_delta_length
fault_id_length
+ fault_health_delta_length
+ fault_status_length
+ fault_last_reported_length
+ fault_message_length
+ 3
- meta_header_length
+ 8
)
# Format the string (header)


@@ -45,20 +45,29 @@ def get_list(config, limit=None, sort_key="last_reported"):
return False, response.json().get("message", "")
def acknowledge(config, fault_id):
def acknowledge(config, faults):
"""
Acknowledge a PVC fault
Acknowledge one or more PVC faults
API endpoint: PUT /api/v1/faults/<fault_id>
API endpoint: PUT /api/v1/faults/<fault_id> for fault_id in faults
API arguments:
API schema: {json_message}
"""
status_codes = list()
bad_msgs = list()
for fault_id in faults:
response = call_api(config, "put", f"/faults/{fault_id}")
if response.status_code == 200:
return True, response.json().get("message", "")
status_codes.append(True)
else:
return False, response.json().get("message", "")
status_codes.append(False)
bad_msgs.append(response.json().get("message", ""))
if all(status_codes):
return True, f"Successfully acknowledged fault(s) {', '.join(faults)}"
else:
return False, ", ".join(bad_msgs)
def acknowledge_all(config):
@@ -77,20 +86,29 @@ def acknowledge_all(config):
return False, response.json().get("message", "")
def delete(config, fault_id):
def delete(config, faults):
"""
Delete a PVC fault
Delete one or more PVC faults
API endpoint: DELETE /api/v1/faults/<fault_id>
API endpoint: DELETE /api/v1/faults/<fault_id> for fault_id in faults
API arguments:
API schema: {json_message}
"""
status_codes = list()
bad_msgs = list()
for fault_id in faults:
response = call_api(config, "delete", f"/faults/{fault_id}")
if response.status_code == 200:
return True, response.json().get("message", "")
status_codes.append(True)
else:
return False, response.json().get("message", "")
status_codes.append(False)
bad_msgs.append(response.json().get("message", ""))
if all(status_codes):
return True, f"Successfully deleted fault(s) {', '.join(faults)}"
else:
return False, ", ".join(bad_msgs)
def delete_all(config):
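The multi-fault handling in the new acknowledge() and delete() follows one pattern: call the per-fault endpoint once per ID, collect successes and failure messages, and report a single combined result. A self-contained sketch of that pattern (the HTTP call is stubbed out; the function and verb names here are illustrative):

```python
def process_faults(call_api, faults, verb="acknowledged"):
    # Call the per-fault API once per fault ID, tracking per-call success.
    statuses, bad_msgs = [], []
    for fault_id in faults:
        status_code, message = call_api(fault_id)
        if status_code == 200:
            statuses.append(True)
        else:
            statuses.append(False)
            bad_msgs.append(message)
    # Succeed only if every individual call succeeded.
    if all(statuses):
        return True, f"Successfully {verb} fault(s) {', '.join(faults)}"
    return False, ", ".join(bad_msgs)

# Stub standing in for the real HTTP call.
def fake_api(fault_id):
    if fault_id == "MISSING":
        return 404, "No fault with ID MISSING"
    return 200, "ok"
```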


@@ -2,7 +2,7 @@ from setuptools import setup
setup(
name="pvc",
version="0.9.83",
version="0.9.85",
packages=["pvc.cli", "pvc.lib"],
install_requires=[
"Click",


@@ -20,66 +20,69 @@
###############################################################################
from datetime import datetime
from hashlib import md5
from re import sub
def generate_fault(
zkhandler, logger, fault_name, fault_time, fault_delta, fault_message
zkhandler,
logger,
fault_name,
fault_time,
fault_delta,
fault_message,
fault_details=None,
):
# Strip off any "extra" data from the message (things in brackets)
fault_core_message = sub(r"[\(\[].*?[\)\]]", "", fault_message).strip()
# Generate a fault ID from the fault_name, fault_delta, and fault_core_message
fault_str = f"{fault_name} {fault_delta} {fault_core_message}"
fault_id = str(md5(fault_str.encode("utf-8")).hexdigest())[:8]
# Strip the microseconds off of the fault time; we don't care about that precision
fault_time = str(fault_time).split(".")[0]
if fault_details is not None:
fault_message = f"{fault_message}: {fault_details}"
# If a fault already exists with this ID, just update the time
if not zkhandler.exists("base.faults"):
logger.out(
f"Skipping fault reporting for {fault_id} due to missing Zookeeper schemas",
f"Skipping fault reporting for {fault_name} due to missing Zookeeper schemas",
state="w",
)
return
existing_faults = zkhandler.children("base.faults")
if fault_id in existing_faults:
if fault_name in existing_faults:
logger.out(
f"Updating fault {fault_id}: {fault_message} @ {fault_time}", state="i"
f"Updating fault {fault_name}: {fault_message} @ {fault_time}", state="i"
)
else:
logger.out(
f"Generating fault {fault_id}: {fault_message} @ {fault_time}",
f"Generating fault {fault_name}: {fault_message} @ {fault_time}",
state="i",
)
if zkhandler.read("base.config.maintenance") == "true":
logger.out(
f"Skipping fault reporting for {fault_id} due to maintenance mode",
f"Skipping fault reporting for {fault_name} due to maintenance mode",
state="w",
)
return
if fault_id in existing_faults:
# Update an existing fault
if fault_name in existing_faults:
zkhandler.write(
[
(("faults.last_time", fault_id), fault_time),
(("faults.message", fault_id), fault_message),
(("faults.last_time", fault_name), fault_time),
(("faults.delta", fault_name), fault_delta),
(("faults.message", fault_name), fault_message),
]
)
# Otherwise, generate a new fault event
# Generate a new fault
else:
zkhandler.write(
[
(("faults.id", fault_id), ""),
(("faults.first_time", fault_id), fault_time),
(("faults.last_time", fault_id), fault_time),
(("faults.ack_time", fault_id), ""),
(("faults.status", fault_id), "new"),
(("faults.delta", fault_id), fault_delta),
(("faults.message", fault_id), fault_message),
(("faults.id", fault_name), ""),
(("faults.first_time", fault_name), fault_time),
(("faults.last_time", fault_name), fault_time),
(("faults.ack_time", fault_name), ""),
(("faults.status", fault_name), "new"),
(("faults.delta", fault_name), fault_delta),
(("faults.message", fault_name), fault_message),
]
)

debian/changelog

@@ -1,3 +1,23 @@
pvc (0.9.85-0) unstable; urgency=high
* [Packaging] Fixes a dependency bug introduced in 0.9.84
* [Node Daemon] Fixes an output bug during keepalives
* [Node Daemon] Fixes a bug in the example Prometheus Grafana dashboard
-- Joshua M. Boniface <joshua@boniface.me> Sun, 10 Dec 2023 01:00:33 -0500
pvc (0.9.84-0) unstable; urgency=high
**Breaking Changes:** This release features a major reconfiguration of how monitoring and reporting of cluster health works. Node health plugins now report "faults", as do several other issues which were previously checked for manually in the "cluster" daemon library for the "/status" endpoint, from within the Health daemon. These faults are persistent: each identifier is triggered once, and subsequent triggers simply update its "last reported" time. An additional set of API endpoints and commands is added to manage these faults, either by "ack"(nowledging) them (keeping the alert around to be further updated but setting its health delta to 0%) or "delete"ing them (completely removing the fault unless it retriggers), individually, (from the CLI) in batches, or all at once. Cluster health reporting is now based entirely on these faults, and the default interval for health checks is reduced to 15 seconds to accommodate this. In addition, Prometheus metrics have been added for the PVC cluster itself, along with an example Grafana dashboard, as well as a proxy to the Ceph cluster metrics. This release also fixes some bugs in the VM provisioner that were introduced in 0.9.83; these fixes require a **reimport or reconfiguration of any provisioner scripts**; reference the updated examples for details.
* [All] Adds persistent fault reporting to clusters, replacing the old cluster health calculations.
* [API Daemon] Adds cluster-level Prometheus metric exporting as well as a Ceph Prometheus proxy to the API.
* [CLI Client] Improves formatting output of "pvc cluster status".
* [Node Daemon] Fixes several bugs and enhances the working of the psql health check plugin.
* [Worker Daemon] Fixes several bugs in the example provisioner scripts, and moves the libvirt_schema library into the daemon common libraries.
-- Joshua M. Boniface <joshua@boniface.me> Sat, 09 Dec 2023 23:05:40 -0500
pvc (0.9.83-0) unstable; urgency=high
**Breaking Changes:** This release features a breaking change for the daemon config. A new unified "pvc.conf" file is required for all daemons (and the CLI client for Autobackup and API-on-this-host functionality), which will be written by the "pvc" role in the PVC Ansible framework. Using the "update-pvc-daemons" oneshot playbook from PVC Ansible is **required** to update to this release, as it will ensure this file is written to the proper place before deploying the new package versions, and also ensure that the old entries are cleaned up afterwards. In addition, this release fully splits the node worker and health subsystems into discrete daemons ("pvcworkerd" and "pvchealthd") and packages ("pvc-daemon-worker" and "pvc-daemon-health") respectively. The "pvc-daemon-node" package also now depends on both packages, and the "pvc-daemon-api" package can now be reliably used outside of the PVC nodes themselves (for instance, in a VM) without any strange cross-dependency issues.

debian/control

@@ -8,7 +8,7 @@ X-Python3-Version: >= 3.7
Package: pvc-daemon-node
Architecture: all
Depends: systemd, pvc-daemon-common, pvc-daemon-health, pvc-daemon-worker, python3-kazoo, python3-psutil, python3-apscheduler, python3-libvirt, python3-psycopg2, python3-dnspython, python3-yaml, python3-distutils, python3-rados, python3-gevent, python3-prometheus-client, ipmitool, libvirt-daemon-system, arping, vlan, bridge-utils, dnsmasq, nftables, pdns-server, pdns-backend-pgsql
Depends: systemd, pvc-daemon-common, pvc-daemon-health, pvc-daemon-worker, python3-kazoo, python3-psutil, python3-apscheduler, python3-libvirt, python3-psycopg2, python3-dnspython, python3-yaml, python3-distutils, python3-rados, python3-gevent, ipmitool, libvirt-daemon-system, arping, vlan, bridge-utils, dnsmasq, nftables, pdns-server, pdns-backend-pgsql
Description: Parallel Virtual Cluster node daemon
A KVM/Zookeeper/Ceph-based VM and private cloud manager
.
@@ -16,7 +16,7 @@ Description: Parallel Virtual Cluster node daemon
Package: pvc-daemon-health
Architecture: all
Depends: systemd, pvc-daemon-common, python3-kazoo, python3-psutil, python3-apscheduler, python3-yaml, python3-prometheus-client
Depends: systemd, pvc-daemon-common, python3-kazoo, python3-psutil, python3-apscheduler, python3-yaml
Description: Parallel Virtual Cluster health daemon
A KVM/Zookeeper/Ceph-based VM and private cloud manager
.
@@ -24,7 +24,7 @@ Description: Parallel Virtual Cluster health daemon
Package: pvc-daemon-worker
Architecture: all
Depends: systemd, pvc-daemon-common, python3-kazoo, python3-celery, python3-redis, python3-yaml, python3-prometheus-client, python-celery-common, fio
Depends: systemd, pvc-daemon-common, python3-kazoo, python3-celery, python3-redis, python3-yaml, python-celery-common, fio
Description: Parallel Virtual Cluster worker daemon
A KVM/Zookeeper/Ceph-based VM and private cloud manager
.
@@ -32,7 +32,7 @@ Description: Parallel Virtual Cluster worker daemon
Package: pvc-daemon-api
Architecture: all
Depends: systemd, pvc-daemon-common, python3-yaml, python3-flask, python3-flask-restful, python3-celery, python3-distutils, python3-redis, python3-lxml, python3-flask-migrate, python3-prometheus-client
Depends: systemd, pvc-daemon-common, python3-yaml, python3-flask, python3-flask-restful, python3-celery, python3-distutils, python3-redis, python3-lxml, python3-flask-migrate
Description: Parallel Virtual Cluster API daemon
A KVM/Zookeeper/Ceph-based VM and private cloud manager
.


@@ -6,7 +6,7 @@ VERSION="$( head -1 debian/changelog | awk -F'[()-]' '{ print $2 }' )"
pushd $( git rev-parse --show-toplevel ) &>/dev/null
pushd api-daemon &>/dev/null
export PVC_CONFIG_FILE="./pvcapid.sample.yaml"
export PVC_CONFIG_FILE="../pvc.sample.conf"
./pvcapid-manage_flask.py db migrate -m "PVC version ${VERSION}"
./pvcapid-manage_flask.py db upgrade
popd &>/dev/null


@@ -33,7 +33,7 @@ import os
import signal
# Daemon version
version = "0.9.83"
version = "0.9.85"
##########################################################


@@ -206,7 +206,7 @@ class MonitoringInstance(object):
{
"entry": node,
"check": self.zkhandler.read(("node.state.daemon", node)),
"details": "",
"details": None,
}
for node in self.zkhandler.children("base.node")
]
@@ -219,7 +219,7 @@
"check": loads(self.zkhandler.read(("osd.stats", osd))).get(
"in", 0
),
"details": "",
"details": None,
}
for osd in self.zkhandler.children("base.osd")
]
@@ -228,7 +228,7 @@
def get_ceph_health_entries():
ceph_health_entries = [
{
"entry": f"{value['severity']} {key}",
"entry": key,
"check": value["severity"],
"details": value["summary"]["message"],
}
@@ -271,9 +271,9 @@
op_str = "ok"
overprovisioned_memory = [
{
"entry": f"{current_memory_provisioned}MB > {available_node_memory}MB (N-1)",
"entry": "Cluster memory was overprovisioned",
"check": op_str,
"details": "",
"details": f"{current_memory_provisioned}MB > {available_node_memory}MB (N-1)",
}
]
return overprovisioned_memory
@@ -281,40 +281,46 @@
# This is a list of all possible faults (cluster error messages) and their corresponding details
self.cluster_faults_map = {
"dead_or_fenced_node": {
"name": "DEAD_NODE_{entry}",
"entries": get_node_daemon_states,
"conditions": ["dead", "fenced"],
"delta": 50,
"message": "Node {entry} was dead and/or fenced",
},
"ceph_osd_out": {
"name": "CEPH_OSD_OUT_{entry}",
"entries": get_osd_in_states,
"conditions": ["0"],
"delta": 50,
"message": "OSD {entry} was marked out",
},
"ceph_warn": {
"name": "CEPH_WARN_{entry}",
"entries": get_ceph_health_entries,
"conditions": ["HEALTH_WARN"],
"delta": 10,
"message": "{entry} reported by Ceph ({details})",
"message": "{entry} reported by Ceph cluster",
},
"ceph_err": {
"name": "CEPH_ERR_{entry}",
"entries": get_ceph_health_entries,
"conditions": ["HEALTH_ERR"],
"delta": 50,
"message": "{entry} reported by Ceph ({details})",
"message": "{entry} reported by Ceph cluster",
},
"vm_failed": {
"name": "VM_FAILED_{entry}",
"entries": get_vm_states,
"conditions": ["fail"],
"delta": 10,
"message": "VM {entry} was failed ({details})",
"message": "VM {entry} was failed",
},
"memory_overprovisioned": {
"name": "MEMORY_OVERPROVISIONED",
"entries": get_overprovisioned_memory,
"conditions": ["overprovisioned"],
"delta": 50,
"message": "Cluster memory was overprovisioned {entry}",
"message": "{entry}",
},
}
@@ -507,7 +513,7 @@
)
for fault_type in self.cluster_faults_map.keys():
fault_details = self.cluster_faults_map[fault_type]
fault_data = self.cluster_faults_map[fault_type]
if self.config["log_monitoring_details"] or self.config["debug"]:
self.logger.out(
@@ -515,7 +521,7 @@
state="t",
)
entries = fault_details["entries"]()
entries = fault_data["entries"]()
if self.config["debug"]:
self.logger.out(
@@ -527,20 +533,20 @@
entry = _entry["entry"]
check = _entry["check"]
details = _entry["details"]
for condition in fault_details["conditions"]:
for condition in fault_data["conditions"]:
if str(condition) == str(check):
fault_time = datetime.now()
fault_delta = fault_details["delta"]
fault_message = fault_details["message"].format(
entry=entry, details=details
)
fault_delta = fault_data["delta"]
fault_name = fault_data["name"].format(entry=entry.upper())
fault_message = fault_data["message"].format(entry=entry)
generate_fault(
self.zkhandler,
self.logger,
fault_type,
fault_name,
fault_time,
fault_delta,
fault_message,
fault_details=details,
)
self.faults += 1
@@ -588,7 +594,7 @@
# Generate a cluster fault if the plugin is in a suboptimal state
if result.health_delta > 0:
fault_type = f"plugin.{self.this_node.name}.{result.plugin_name}"
fault_name = f"NODE_PLUGIN_{result.plugin_name.upper()}_{self.this_node.name.upper()}"
fault_time = datetime.now()
# Map our check results to fault results
@@ -603,10 +609,11 @@
generate_fault(
self.zkhandler,
self.logger,
fault_type,
fault_name,
fault_time,
fault_delta,
fault_message,
fault_details=None,
)
self.faults += 1
@@ -661,7 +668,7 @@
self.run_plugins(coordinator_state=coordinator_state)
if coordinator_state in ["primary", "secondary", "takeover", "relinquish"]:
if coordinator_state in ["primary", "takeover"]:
self.run_faults(coordinator_state=coordinator_state)
runtime_end = datetime.now()
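The per-fault names in the cluster_faults_map above are plain str.format templates filled with the upcased entry. A sketch using two of the templates from the map (the entry values are hypothetical):

```python
# Two of the name templates from the cluster_faults_map.
cluster_faults_map = {
    "ceph_osd_out": {"name": "CEPH_OSD_OUT_{entry}"},
    "dead_or_fenced_node": {"name": "DEAD_NODE_{entry}"},
}

def fault_name(fault_type, entry):
    # Mirrors fault_data["name"].format(entry=entry.upper())
    return cluster_faults_map[fault_type]["name"].format(entry=entry.upper())
```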


@@ -2,6 +2,14 @@
This directory contains several monitoring resources that can be used with various monitoring systems to track and alert on a PVC cluster system.
## Prometheus + Grafana
The included example Prometheus configuration and Grafana dashboard can be used to query the PVC API for Prometheus data and display it with a consistent dashboard.
Note that the default configuration here also includes Ceph cluster information; a Ceph dashboard can be found externally.
Note too that this does not include node exporter examples from individual PVC nodes; those must be set up separately.
## Munin
The included Munin plugins can be activated by linking to them from `/etc/munin/plugins/`. Two plugins are provided:

File diff suppressed because it is too large


@@ -0,0 +1,8 @@
# Other configuration omitted
scrape_configs:
- job_name: "pvc_cluster"
metrics_path: /api/v1/metrics
scheme: "http"
file_sd_configs:
- files:
- 'targets-pvc_cluster.json'


@@ -0,0 +1,11 @@
[
{
"targets": [
"pvc.upstream.floating.address.tld:7370"
],
"labels": {
"cluster": "cluster1"
}
}
]


@@ -48,7 +48,7 @@ import re
import json
# Daemon version
version = "0.9.83"
version = "0.9.85"
##########################################################


@@ -701,7 +701,7 @@ def node_keepalive(logger, config, zkhandler, this_node):
runtime_start = datetime.now()
logger.out(
"Starting node keepalive run",
f"Starting node keepalive run at {datetime.now()}",
state="t",
)


@@ -167,6 +167,7 @@ _pvc storage pool remove --yes testing
# Remove the VM
_pvc vm stop --yes testx
sleep 5
_pvc vm remove --yes testx
_pvc provisioner profile remove --yes test


@@ -44,7 +44,7 @@ from daemon_lib.vmbuilder import (
)
# Daemon version
version = "0.9.83"
version = "0.9.85"
config = cfg.get_configuration()