Compare commits

...

18 Commits

Author SHA1 Message Date
f1df1cfe93 Bump version to 0.9.55 2022-10-04 13:21:40 -04:00
5942aa50fc Avoid raise/handle deadlocks
The raise/handle pattern can cause log flooding in some edge cases and isn't really needed any
longer. Use a proper conditional followed by an actual error handler.
2022-10-03 14:04:12 -04:00
096bcdfd75 Try a literal eval first
This is a breakage between the older version of Celery (Debian 10) and
newer versions. The hard removal broke Debian 10 instances.

So try that first, and on failure, assume newer Celery format.
2022-09-06 10:34:50 -04:00
239c392892 Bump version to 0.9.54 2022-08-23 11:01:05 -04:00
172d0a86e4 Use proper SSLContext and enable TLSv1
It's bad, but sometimes you need to access the API from a very old
software version. So just enable it for now and clean it up later.
2022-08-23 10:58:47 -04:00
d8e57a26c5 Fix bad variable name 2022-08-18 11:37:57 -04:00
9b499b9f48 Bump version to 0.9.53 2022-08-12 17:47:11 -04:00
881550b610 Actually fix VM sorting
Due to the executor, the previous attempt did not work.
2022-08-12 17:46:29 -04:00
2a21d48128 Bump version to 0.9.52 2022-08-12 11:09:25 -04:00
8d0f26ff7a Add additional kb_ values to OSD stats
Allows for easier parsing later to get e.g. % values and more details on
the used amounts.
2022-08-11 11:06:36 -04:00
bcabd7d079 Always sort VM list
Same justification as previous commit.
2022-08-09 12:05:40 -04:00
05a316cdd6 Ensure the node list is sorted
Otherwise the node entries could come back in an arbitrary order; since
this is an ordered list of dictionaries, API consumers might not expect
that, so ensure it's always sorted.
2022-08-09 12:03:49 -04:00
4b36753f27 Add reference to bootstrap in index 2022-08-03 20:22:16 -04:00
171f6ac9ed Add missing cluster_req for vm modify 2022-08-02 10:02:26 -04:00
645b525ad7 Bump version to 0.9.51 2022-07-25 23:25:41 -04:00
ec559aec0d Remove pvc-flush service
This service caused more headaches than it was worth, so remove it.

The original goal was to cleanly flush nodes on shutdown and unflush
them on startup, but this is tightly controlled by Ansible playbooks at
this point, and is something best left to the Administrator and their
particular situation anyway.
2022-07-25 23:21:34 -04:00
71ffd5a191 Add confirmation to disable command 2022-07-21 16:43:37 -04:00
2739c27299 Remove faulty literal_eval 2022-07-18 13:35:15 -04:00
16 changed files with 133 additions and 51 deletions


@@ -1 +1 @@
0.9.50
0.9.55


@@ -1,5 +1,32 @@
## PVC Changelog
###### [v0.9.55](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.55)
* Fixes a problem with the literal eval handler in the provisioner (again)
* Fixes a potential log deadlock in Zookeeper-lost situations when doing keepalives
###### [v0.9.54](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.54)
* [CLI Client] Fixes a bad variable reference from the previous change
* [API Daemon] Enables TLSv1 with an SSLContext object for maximum compatibility
###### [v0.9.53](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.53)
* [API] Fixes sort order of VM list (for real this time)
###### [v0.9.52](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.52)
* [CLI] Fixes a bug with vm modify not requiring a cluster
* [Docs] Adds a reference to the bootstrap daemon
* [API] Adds sorting to node and VM lists for consistency
* [Node Daemon/API] Adds kb_ stats values for OSD stats
###### [v0.9.51](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.51)
* [CLI Client] Fixes a faulty literal_eval when viewing task status
* [CLI Client] Adds a confirmation flag to the vm disable command
* [Node Daemon] Removes the pvc-flush service
###### [v0.9.50](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.50)
* [Node Daemon/API/CLI] Adds free memory node selector


@@ -19,7 +19,7 @@ As a consequence of its features, PVC makes administrating very high-uptime VMs
PVC also features an optional, fully customizable VM provisioning framework, designed to automate and simplify VM deployments using custom provisioning profiles, scripts, and CloudInit userdata API support.
Installation of PVC is accomplished by two main components: a [Node installer ISO](https://github.com/parallelvirtualcluster/pvc-installer) which creates on-demand installer ISOs, and an [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) to configure, bootstrap, and administrate the nodes. Once up, the cluster is managed via an HTTP REST API, accessible via a Python Click CLI client or WebUI.
Installation of PVC is accomplished by two main components: a [Node installer ISO](https://github.com/parallelvirtualcluster/pvc-installer) which creates on-demand installer ISOs, and an [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) to configure, bootstrap, and administrate the nodes. Installation can also be fully automated with a companion [cluster bootstrapping system](https://github.com/parallelvirtualcluster/pvc-bootstrap). Once up, the cluster is managed via an HTTP REST API, accessible via a Python Click CLI client or WebUI.
Just give it physical servers, and it will run your VMs without you having to think about it, all in just an hour or two of setup time.


@@ -22,10 +22,12 @@
import os
import yaml
from ssl import SSLContext, TLSVersion
from distutils.util import strtobool as dustrtobool
# Daemon version
version = "0.9.50"
version = "0.9.55"
# API version
API_VERSION = 1.0
@@ -123,7 +125,10 @@ def entrypoint():
import pvcapid.flaskapi as pvc_api # noqa: E402
if config["ssl_enabled"]:
context = (config["ssl_cert_file"], config["ssl_key_file"])
context = SSLContext()
context.minimum_version = TLSVersion.TLSv1
context.get_ca_certs()
context.load_cert_chain(config["ssl_cert_file"], keyfile=config["ssl_key_file"])
else:
context = None
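
The context setup in this hunk can be reproduced standalone. The sketch below shows the pattern (a context whose minimum protocol version is lowered to TLSv1 for very old clients); the cert/key paths are placeholders, not PVC's real configuration, and it assumes an OpenSSL build that still permits a TLSv1 floor.

```python
from ssl import SSLContext, TLSVersion

# Build a context whose floor is TLSv1 so very old client software can
# still negotiate. SSLContext() with no argument uses the generic TLS
# protocol, matching the diff above.
context = SSLContext()
context.minimum_version = TLSVersion.TLSv1
# Placeholder paths for illustration only:
# context.load_cert_chain("/path/to/cert.pem", keyfile="/path/to/key.pem")
```

Flask's `run()` (and werkzeug generally) accepts such an `SSLContext` object wherever it previously accepted a `(cert, key)` tuple, which is what makes this a drop-in change.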


@@ -23,10 +23,10 @@ from requests_toolbelt.multipart.encoder import (
MultipartEncoder,
MultipartEncoderMonitor,
)
from ast import literal_eval
import pvc.cli_lib.ansiprint as ansiprint
from pvc.cli_lib.common import UploadProgressBar, call_api
from ast import literal_eval
#
@@ -793,10 +793,16 @@ def task_status(config, task_id=None, is_watching=False):
task["type"] = task_type
task["worker"] = task_host
task["id"] = task_job.get("id")
task_args = literal_eval(task_job.get("args"))
try:
task_args = literal_eval(task_job.get("args"))
except Exception:
task_args = task_job.get("args")
task["vm_name"] = task_args[0]
task["vm_profile"] = task_args[1]
task_kwargs = literal_eval(task_job.get("kwargs"))
try:
task_kwargs = literal_eval(task_job.get("kwargs"))
except Exception:
task_kwargs = task_job.get("kwargs")
task["vm_define"] = str(bool(task_kwargs["define_vm"]))
task["vm_start"] = str(bool(task_kwargs["start_vm"]))
task_data.append(task)
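
The fallback pattern in this hunk can be sketched in isolation. Per the commit message, older Celery (Debian 10) hands back task args/kwargs as stringified Python literals while newer versions may already return native objects; the helper name below is illustrative, not part of the actual client.

```python
from ast import literal_eval

def parse_task_field(value):
    # Try a literal eval first (older Celery: string literal); on any
    # failure, assume the newer Celery format and use the value as-is.
    try:
        return literal_eval(value)
    except Exception:
        return value

print(parse_task_field("['myvm', 'myprofile']"))  # older format: parsed from string
print(parse_task_field(["myvm", "myprofile"]))    # newer format: passed through
```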


@@ -1023,6 +1023,7 @@ def vm_meta(
)
@click.argument("domain")
@click.argument("cfgfile", type=click.File(), default=None, required=False)
@cluster_req
def vm_modify(
domain,
cfgfile,
@@ -1352,20 +1353,36 @@ def vm_stop(domain, confirm_flag):
@click.argument("domain")
@click.option(
"--force",
"force",
"force_flag",
is_flag=True,
default=False,
help="Forcibly stop the VM instead of waiting for shutdown.",
)
@click.option(
"-y",
"--yes",
"confirm_flag",
is_flag=True,
default=False,
help="Confirm the disable",
)
@cluster_req
def vm_disable(domain, force):
def vm_disable(domain, force_flag, confirm_flag):
"""
Shut down virtual machine DOMAIN and mark it as disabled. DOMAIN may be a UUID or name.
Disabled VMs will not be counted towards a degraded cluster health status, unlike stopped VMs. Use this option for a VM that will remain off for an extended period.
"""
retcode, retmsg = pvc_vm.vm_state(config, domain, "disable", force=force)
if not confirm_flag and not config["unsafe"]:
try:
click.confirm(
"Disable VM {}".format(domain), prompt_suffix="? ", abort=True
)
except Exception:
exit(0)
retcode, retmsg = pvc_vm.vm_state(config, domain, "disable", force=force_flag)
cleanup(retcode, retmsg)


@@ -2,7 +2,7 @@ from setuptools import setup
setup(
name="pvc",
version="0.9.50",
version="0.9.55",
packages=["pvc", "pvc.cli_lib"],
install_requires=[
"Click",


@@ -236,6 +236,7 @@ def get_list(
):
node_list = []
full_node_list = zkhandler.children("base.node")
full_node_list.sort()
if is_fuzzy and limit:
# Implicitly assume fuzzy limits


@@ -1193,6 +1193,7 @@ def get_list(zkhandler, node, state, tag, limit, is_fuzzy=True, negate=False):
return False, 'VM state "{}" is not valid.'.format(state)
full_vm_list = zkhandler.children("base.domain")
full_vm_list.sort()
# Set our limit to a sensible regex
if limit:
@@ -1291,4 +1292,4 @@
except Exception:
pass
return True, vm_data_list
return True, sorted(vm_data_list, key=lambda d: d["name"])
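
The sorting guarantee added in these hunks can be shown in miniature: the key list is sorted on entry and the assembled dictionaries are sorted on return, so API consumers always see a stable order. The sample data below is illustrative only.

```python
# Unordered list of VM dictionaries, as might come back from the store.
vm_data_list = [
    {"name": "vm3", "state": "stop"},
    {"name": "vm1", "state": "start"},
    {"name": "vm2", "state": "start"},
]

# Sort by name for a deterministic API response, as in the diff above.
stable_list = sorted(vm_data_list, key=lambda d: d["name"])
print([d["name"] for d in stable_list])  # ['vm1', 'vm2', 'vm3']
```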

debian/changelog

@@ -1,3 +1,40 @@
pvc (0.9.55-0) unstable; urgency=high
* Fixes a problem with the literal eval handler in the provisioner (again)
* Fixes a potential log deadlock in Zookeeper-lost situations when doing keepalives
-- Joshua M. Boniface <joshua@boniface.me> Tue, 04 Oct 2022 13:21:40 -0400
pvc (0.9.54-0) unstable; urgency=high
* [CLI Client] Fixes a bad variable reference from the previous change
* [API Daemon] Enables TLSv1 with an SSLContext object for maximum compatibility
-- Joshua M. Boniface <joshua@boniface.me> Tue, 23 Aug 2022 11:01:05 -0400
pvc (0.9.53-0) unstable; urgency=high
* [API] Fixes sort order of VM list (for real this time)
-- Joshua M. Boniface <joshua@boniface.me> Fri, 12 Aug 2022 17:47:11 -0400
pvc (0.9.52-0) unstable; urgency=high
* [CLI] Fixes a bug with vm modify not requiring a cluster
* [Docs] Adds a reference to the bootstrap daemon
* [API] Adds sorting to node and VM lists for consistency
* [Node Daemon/API] Adds kb_ stats values for OSD stats
-- Joshua M. Boniface <joshua@boniface.me> Fri, 12 Aug 2022 11:09:25 -0400
pvc (0.9.51-0) unstable; urgency=high
* [CLI Client] Fixes a faulty literal_eval when viewing task status
* [CLI Client] Adds a confirmation flag to the vm disable command
* [Node Daemon] Removes the pvc-flush service
-- Joshua M. Boniface <joshua@boniface.me> Mon, 25 Jul 2022 23:25:41 -0400
pvc (0.9.50-0) unstable; urgency=high
* [Node Daemon/API/CLI] Adds free memory node selector


@@ -3,5 +3,4 @@ node-daemon/pvcnoded.sample.yaml etc/pvc
node-daemon/pvcnoded usr/share/pvc
node-daemon/pvcnoded.service lib/systemd/system
node-daemon/pvc.target lib/systemd/system
node-daemon/pvc-flush.service lib/systemd/system
node-daemon/monitoring usr/share/pvc


@@ -7,11 +7,6 @@ systemctl daemon-reload
systemctl enable /lib/systemd/system/pvcnoded.service
systemctl enable /lib/systemd/system/pvc.target
# Inform administrator of the autoflush daemon if it is not enabled
if ! systemctl is-active --quiet pvc-flush.service; then
echo "NOTE: The PVC autoflush daemon (pvc-flush.service) is not enabled by default; enable it to perform automatic flush/unflush actions on host shutdown/startup."
fi
# Inform administrator of the service restart/startup not occurring automatically
if systemctl is-active --quiet pvcnoded.service; then
echo "NOTE: The PVC node daemon (pvcnoded.service) has not been restarted; this is up to the administrator."


@@ -18,7 +18,7 @@ As a consequence of its features, PVC makes administrating very high-uptime VMs
PVC also features an optional, fully customizable VM provisioning framework, designed to automate and simplify VM deployments using custom provisioning profiles, scripts, and CloudInit userdata API support.
Installation of PVC is accomplished by two main components: a [Node installer ISO](https://github.com/parallelvirtualcluster/pvc-installer) which creates on-demand installer ISOs, and an [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) to configure, bootstrap, and administrate the nodes. Once up, the cluster is managed via an HTTP REST API, accessible via a Python Click CLI client or WebUI.
Installation of PVC is accomplished by two main components: a [Node installer ISO](https://github.com/parallelvirtualcluster/pvc-installer) which creates on-demand installer ISOs, and an [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) to configure, bootstrap, and administrate the nodes. Installation can also be fully automated with a companion [cluster bootstrapping system](https://github.com/parallelvirtualcluster/pvc-bootstrap). Once up, the cluster is managed via an HTTP REST API, accessible via a Python Click CLI client or WebUI.
Just give it physical servers, and it will run your VMs without you having to think about it, all in just an hour or two of setup time.


@@ -1,20 +0,0 @@
# Parallel Virtual Cluster autoflush daemon
[Unit]
Description = Parallel Virtual Cluster autoflush daemon
After = pvcnoded.service pvcapid.service zookeeper.service libvirtd.service ssh.service ceph.target network-online.target
Wants = pvcnoded.service
PartOf = pvc.target
[Service]
Type = oneshot
RemainAfterExit = true
WorkingDirectory = /usr/share/pvc
TimeoutSec = 30min
ExecStartPre = /bin/sleep 30
ExecStart = /usr/bin/pvc -c local node unflush --wait
ExecStop = /usr/bin/pvc -c local node flush --wait
ExecStopPost = /bin/sleep 5
[Install]
WantedBy = pvc.target


@@ -48,7 +48,7 @@ import re
import json
# Daemon version
version = "0.9.50"
version = "0.9.55"
##########################################################


@@ -307,8 +307,14 @@ def collect_ceph_stats(logger, config, zkhandler, this_node, queue):
"var": osd["var"],
"pgs": osd["pgs"],
"kb": osd["kb"],
"kb_used": osd["kb_used"],
"kb_used_data": osd["kb_used_data"],
"kb_used_omap": osd["kb_used_omap"],
"kb_used_meta": osd["kb_used_meta"],
"kb_avail": osd["kb_avail"],
"weight": osd["crush_weight"],
"reweight": osd["reweight"],
"class": osd["device_class"],
}
}
)
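
With the raw kb_ values now exposed per OSD, a consumer can derive the percentage figures the commit message mentions directly. A minimal sketch, with purely illustrative numbers rather than real cluster data:

```python
# Sample OSD stats dictionary with the newly-added kb_ fields.
osd_stats = {"kb": 1000000, "kb_used": 250000, "kb_avail": 750000}

# Percentage utilization falls straight out of the raw values.
pct_used = osd_stats["kb_used"] / osd_stats["kb"] * 100
print(f"{pct_used:.1f}% used")  # 25.0% used
```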
@@ -655,15 +661,19 @@ def node_keepalive(logger, config, zkhandler, this_node):
zkhandler.read("base.config.migration_target_selector")
!= config["migration_target_selector"]
):
raise
zkhandler.write(
[
(
"base.config.migration_target_selector",
config["migration_target_selector"],
)
]
)
except Exception:
zkhandler.write(
[
(
"base.config.migration_target_selector",
config["migration_target_selector"],
)
]
logger.out(
"Failed to set migration target selector in Zookeeper",
state="e",
prefix="main-thread",
)
# Set the upstream IP in Zookeeper for clients to read
@@ -674,10 +684,14 @@ def node_keepalive(logger, config, zkhandler, this_node):
zkhandler.read("base.config.upstream_ip")
!= config["upstream_floating_ip"]
):
raise
zkhandler.write(
[("base.config.upstream_ip", config["upstream_floating_ip"])]
)
except Exception:
zkhandler.write(
[("base.config.upstream_ip", config["upstream_floating_ip"])]
logger.out(
"Failed to set upstream floating IP in Zookeeper",
state="e",
prefix="main-thread",
)
# Get past state and update if needed
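
The refactor in the two keepalive hunks above can be reduced to a small sketch: instead of raising inside the try block to jump to a handler that performs the write (the pattern that could deadlock the logger), test the condition, write directly, and reserve `except` for genuine failures. Here `store` is a plain dictionary standing in for the Zookeeper handler, purely for illustration.

```python
# Stand-in for the Zookeeper-backed configuration store.
store = {"base.config.migration_target_selector": "mem"}
desired = "memfree"

try:
    # Proper conditional: only write when the stored value differs.
    if store.get("base.config.migration_target_selector") != desired:
        store["base.config.migration_target_selector"] = desired
except Exception:
    # Actual error handler: log the failure instead of performing the
    # write here, as the old raise/handle pattern did.
    print("Failed to set migration target selector in Zookeeper")

print(store["base.config.migration_target_selector"])  # memfree
```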