# Compare commits: 35a5052e2b...master

43 commits:

86cc7add2d, f529f8fcd2, 24119db4b1, a060b41791, febda81f7b, 34e1335fce, 640bdc0552, 2097bf954b, 1038d5c576, d2b792c414, 0907e1d7d2, c0acaafc61, 40f30ce467, 32457f2427, 96c9643753, 700d09d54f, 1dc4f98432, cfe40da677, 9e0e2f0c76, ed0ab06d2c, 286d7aad44, 3f0b0b2d7b, 6cfc75e321, cac2e2a2b8, 7230ba6121, 9cb675a60f, 090e39694c, 83118331a5, 05bd7d1711, 5db1438328, 1c3c59b6f0, b4796ef4c9, d025a47d82, 2681ccf577, 247fc866a2, 96ac9bcb75, 884867989a, 08ba856288, 00ac00ae2c, fdae20c3c6, 41d259101d, 5ceb589af7, 9b589e5be1
## README.md
@@ -1,18 +1,46 @@
<p align="center">
<img alt="Logo banner" src="https://docs.parallelvirtualcluster.org/en/latest/images/pvc_logo_black.png"/>
<br/><br/>
<a href="https://www.parallelvirtualcluster.org"><img alt="Website" src="https://img.shields.io/badge/visit-website-blue"/></a>
<a href="https://github.com/parallelvirtualcluster/pvc/releases"><img alt="Latest Release" src="https://img.shields.io/github/release-pre/parallelvirtualcluster/pvc"/></a>
<a href="https://docs.parallelvirtualcluster.org/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/parallelvirtualcluster/badge/?version=latest"/></a>
<a href="https://github.com/parallelvirtualcluster/pvc"><img alt="License" src="https://img.shields.io/github/license/parallelvirtualcluster/pvc"/></a>
<a href="https://github.com/psf/black"><img alt="Code style: Black" src="https://img.shields.io/badge/code%20style-black-000000.svg"/></a>
</p>

## What is PVC?

PVC is a Linux KVM-based hyperconverged infrastructure (HCI) virtualization cluster solution that is fully Free Software, scalable, redundant, self-healing, self-managing, and designed for administrator simplicity. It is an alternative to other HCI solutions such as Ganeti, Harvester, Nutanix, and VMware, as well as to other common virtualization stacks such as Proxmox and OpenStack.

PVC is a complete HCI solution, built from well-known and well-trusted Free Software tools, to assist an administrator in creating and managing a cluster of servers to run virtual machines, as well as self-managing several important aspects including storage failover, node failure and recovery, virtual machine failure and recovery, and network plumbing. It is designed to act consistently, reliably, and unobtrusively, letting the administrator concentrate on more important things.

PVC is highly scalable. From a minimum (production) node count of 3, up to 12 or more, and supporting many dozens of VMs, PVC scales along with your workload and requirements. Deploy a cluster once and grow it as your needs expand.

As a consequence of its features, PVC makes administrating very high-uptime VMs extremely easy, featuring VM live migration, built-in always-enabled shared storage with transparent multi-node replication, and consistent network plumbing throughout the cluster. Nodes can also be seamlessly removed from or added to service, with zero VM downtime, to facilitate maintenance, upgrades, or other work.

PVC also features an optional, fully customizable VM provisioning framework, designed to automate and simplify VM deployments using custom provisioning profiles, scripts, and CloudInit userdata API support.

Installation of PVC is accomplished by two main components: a [Node installer ISO](https://github.com/parallelvirtualcluster/pvc-installer) which creates on-demand installer ISOs, and an [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) to configure, bootstrap, and administrate the nodes. Installation can also be fully automated with a companion [cluster bootstrapping system](https://github.com/parallelvirtualcluster/pvc-bootstrap). Once up, the cluster is managed via an HTTP REST API, accessible via a Python Click CLI client ~~or WebUI~~ (eventually).

Just give it physical servers, and it will run your VMs without you having to think about it, all in just an hour or two of setup time.

More information about PVC, its motivations, the hardware requirements, and setting up and managing a cluster [can be found over at our docs page](https://docs.parallelvirtualcluster.org).

# PVC Bootstrap System

The PVC bootstrap system provides a convenient way to deploy PVC clusters. Rather than manual node installation, this system provides a fully automated deployment from node power-on to cluster readiness, based on pre-configured values. It is useful if an administrator will deploy several PVC clusters, or for repeated re-deployment for testing purposes.

-## Setup
+# Setup

Setting up the PVC bootstrap system manually is very complicated, and it has thus been automated with an installer script instead of being shipped as a Debian or PIP package.

-### Preparing to use the PVC Bootstrap system
+## Preparing to use the PVC Bootstrap system

1. Prepare a Git repository to store cluster configurations. This can be done automatically with the `create-local-repo.sh` script in the [PVC Ansible](https://github.com/parallelvirtualcluster/pvc-ansible) repository.

1. Create `group_vars` for each cluster you plan to bootstrap. Additionally, ensure you configure the `bootstrap.yml` file for each cluster with the relevant details of the hardware you will be using. This step can be repeated for each cluster in the future as new clusters are required, and the system will automatically pull changes to the local PVC repository once configured.

-### Preparing a PVC Bootstrap host
+## Preparing a PVC Bootstrap host

1. The recommended OS for a PVC Bootstrap host is Debian GNU/Linux 10+. In terms of hardware, there are several supported options:
@@ -28,7 +56,7 @@ Setting up the PVC bootstrap system manually is very complicated, and has thus b

1. Run the `./install-pvcbootstrapd.sh` script from the root of the repository to install the PVC Bootstrap system on the host. It will prompt for several configuration parameters. The final steps will take some time (up to 2 hours on a Raspberry Pi 4B), so be patient.

-### Networking for Bootstrap
+## Networking for Bootstrap

When using the pvcbootstrapd system, a dedicated network is required to provide bootstrap DHCP and TFTP to the cluster. This network can either have a dedicated upstream router that does not provide DHCP, or the network can be routed with network address translation (NAT) through the bootstrap host. By default, the installer will configure the latter automatically, using either a second NIC separate from the upstream NIC of the bootstrap host, or a VLAN on top of the single NIC.
@@ -48,7 +76,7 @@ Consider the following diagram for reference:



-### Deploying a Cluster with PVC Bootstrap - Redfish
+## Deploying a Cluster with PVC Bootstrap - Redfish

Redfish is an industry-standard RESTful API for interfacing with the BMC (baseboard management controller, or out-of-band network management system) on modern (post ~2015) servers from most vendors, including Dell iDRAC, HP iLO, Cisco CIMC, Lenovo XCC, and Supermicro X10 and newer BMCs. Redfish allows remote management, data collection, and configuration from the BMC in a standardized way across server vendors.
@@ -64,7 +92,7 @@ The PVC Bootstrap system is designed to heavily leverage Redfish in its automati

1. Verify and power off the servers and put them into production; you may need to complete several post-install tasks (for instance, setting the production BMC networking via `sudo ifup ipmi` on each node) before the cluster is completely finished.

-### Deploying a Cluster with PVC Bootstrap - Non-Redfish
+## Deploying a Cluster with PVC Bootstrap - Non-Redfish

The PVC Bootstrap system can still handle nodes without Redfish support, for instance older servers or those from non-compliant vendors. There is, however, more manual setup involved in the process. The steps are thus:
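To make the Redfish description above concrete, the following standalone sketch (not pvcbootstrapd code) shows the kind of data an automated bootstrapper reads from a Redfish `ComputerSystem` resource, as returned by `GET /redfish/v1/Systems/<id>` on a compliant BMC. The property names (`Manufacturer`, `Model`, `PowerState`, `Boot.BootSourceOverrideTarget`) come from the standard Redfish schema; the helper itself is illustrative.

```python
def summarize_system(system: dict) -> dict:
    """Pull the fields an automated bootstrapper typically cares about
    out of a parsed Redfish ComputerSystem JSON document."""
    return {
        "manufacturer": system.get("Manufacturer"),
        "model": system.get("Model"),
        "power_state": system.get("PowerState"),
        # The boot override target controls one-shot boot devices (e.g. PXE)
        "boot_override": system.get("Boot", {}).get("BootSourceOverrideTarget"),
    }
```

In practice the input dict would come from an authenticated HTTPS GET against the BMC; this sketch only covers the parsing side.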
@@ -88,7 +116,7 @@ The PVC Bootstrap system can still handle nodes without Redfish support, for ins

1. Verify and power off the servers and put them into production; you may need to complete several post-install tasks (for instance, setting the production BMC networking via `sudo ifup ipmi` on each node) before the cluster is completely finished.

-#### `host-MAC.ipxe`
+### `host-MAC.ipxe`

```
#!ipxe
@@ -106,7 +134,7 @@
set imgargs-host ARGUMENTS
```

-#### `host-MAC.preseed`
+### `host-MAC.preseed`

```
# The name of this file is "host-123456abcdef.preseed", where "123456abcdef" is the MAC address of the
@@ -127,9 +155,9 @@ set imgargs-host ARGUMENTS
# This file is thus not designed to be used by humans, and its values are seeded via options in
# the cluster-local Ansible group_vars, though it can be used as a manual template if required.

-###
-### General definitions/overrides
-###
+##
+## General definitions/overrides
+##

# The Debian release to use (overrides the default)
debrelease="bullseye"
@@ -143,9 +171,9 @@ addpkglist="ca-certificates"
filesystem="ext4"


-###
-### Per-host definitions (required)
-###
+##
+## Per-host definitions (required)
+##

# The hostname of the system (set per-run)
target_hostname="hv1.example.tld"
@@ -153,13 +181,15 @@ target_hostname="hv1.example.tld"

# The target system disk path; must be a single disk (mdadm/software RAID is not supported)
# This will usually use a `detect` string. A "detect" string is a string in the form "detect:<NAME>:<HUMAN-SIZE>:<ID>".
# Detect strings allow for automatic determination of Linux block device paths from known basic information
-# about disks by leveraging "lsscsi" on the target host. The "NAME" should be some descriptive identifier,
-# for instance the manufacturer (e.g. "INTEL"), the "HUMAN-SIZE" should be the labeled human-readable size
-# of the device (e.g. "480GB", "1.92TB"), and "ID" specifies the Nth 0-indexed device which matches the
-# "NAME" and "HUMAN-SIZE" values (e.g. "2" would match the third device with the corresponding "NAME" and
-# "HUMAN-SIZE"). When matching against sizes, there is +/- 3% flexibility to account for base-1000 vs.
-# base-1024 differences and rounding errors. The "NAME" may contain whitespace but if so the entire detect
-# string should be quoted, and is case-insensitive.
+# about disks by leveraging "lsscsi"/"nvme" on the target host.
+# The "NAME" should be some descriptive identifier that would be part of the device's Model information, for instance
+# the manufacturer (e.g. "INTEL") or a similar unique string (e.g. "BOSS" for Dell BOSS cards).
+# The "HUMAN-SIZE" should be the labeled human-readable size of the device (e.g. "480GB", "1.92TB").
+# The "ID" specifies the Nth 0-indexed device which matches the "NAME" and "HUMAN-SIZE" values (e.g. "2" would match the
+# third device with the corresponding "NAME" and "HUMAN-SIZE").
+# When matching against sizes, there is +/- 3% flexibility to account for base-1000 vs. base-1024 differences and
+# rounding errors.
+# The "NAME" may contain whitespace but if so the entire detect string should be quoted, and is case-insensitive.
target_disk="detect:LOGICAL:146GB:0"

# SSH key fetch method (usually tftp)
@@ -186,8 +216,8 @@ target_deploy_user="deploy"

pvcbootstrapd_checkin_uri="http://10.255.255.1:9999/checkin/host"
```
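The `detect:<NAME>:<HUMAN-SIZE>:<ID>` format described in the preseed above can be sketched in a few lines. This is an illustrative standalone parser, not the pvc-installer implementation; the helper names and the GB/TB-only unit table are assumptions for the example.

```python
def parse_detect_string(detect: str) -> tuple[str, str, int]:
    """Split a detect string into its NAME, HUMAN-SIZE, and 0-indexed ID parts."""
    tag, name, human_size, index = detect.split(":")
    if tag != "detect":
        raise ValueError(f"Not a detect string: {detect}")
    return name, human_size, int(index)


def size_matches(labeled: str, actual_bytes: int, tolerance: float = 0.03) -> bool:
    """Check an actual device size against a human-readable label,
    with the +/- 3% flexibility described above."""
    units = {"GB": 10**9, "TB": 10**12}
    for suffix, factor in units.items():
        if labeled.upper().endswith(suffix):
            expected = float(labeled[: -len(suffix)]) * factor
            return abs(actual_bytes - expected) <= expected * tolerance
    raise ValueError(f"Unknown size unit in {labeled}")
```

For example, `parse_detect_string("detect:INTEL:480GB:2")` yields `("INTEL", "480GB", 2)`, i.e. the third matching Intel 480GB device.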
-## Bootstrap Process
+# Bootstrap Process

-This diagram outlines the various states the nodes and clusters will be in throughout the setup process along with the individual steps for reference.
+This diagram outlines the various states the nodes and clusters will be in throughout the setup process, along with the individual steps for reference. Which node starts characterizing first can be random, but it is shown as `node1` for clarity. For non-Redfish installs, the first several steps must be completed manually as referenced above.


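The `pvcbootstrapd_checkin_uri` seeded into the preseed above is how an installed node reports back to the daemon. The following standalone sketch shows how such a check-in request could be constructed; the payload fields (`action`, `hostname`, `host_macaddr`) are illustrative assumptions, not the daemon's documented schema.

```python
import json
from urllib.request import Request


def build_checkin_request(uri: str, hostname: str, macaddr: str) -> Request:
    """Build (but do not send) a JSON POST to the pvcbootstrapd check-in URI.
    Field names here are hypothetical, for illustration only."""
    payload = json.dumps({
        "action": "begin",
        "hostname": hostname,
        "host_macaddr": macaddr,
    }).encode()
    return Request(
        uri,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending it would then be a single `urllib.request.urlopen(req)` call from the node's install-time hook script.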
@@ -27,12 +27,8 @@ case "$( cat /etc/debian_version )" in
    10.*)
        CELERY_ARGS="worker --app pvcbootstrapd.flaskapi.celery --concurrency 99 --pool gevent --loglevel DEBUG"
        ;;
    11.*)
        CELERY_ARGS="--app pvcbootstrapd.flaskapi.celery worker --concurrency 99 --pool gevent --loglevel DEBUG"
        ;;
    *)
-        echo "Invalid Debian version found!"
-        exit 1
+        CELERY_ARGS="--app pvcbootstrapd.flaskapi.celery worker --concurrency 99 --pool gevent --loglevel DEBUG"
        ;;
esac
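The version switch above exists because Debian 10 ships Celery 4.x, where the `worker` subcommand comes first, while Debian 11+ ships Celery 5.x, which requires global options like `--app` before the subcommand. A standalone Python sketch of the same selection logic (the helper is illustrative, not part of the service script):

```python
def celery_args(debian_version: str,
                app: str = "pvcbootstrapd.flaskapi.celery") -> list[str]:
    """Return the Celery worker argument list for a given Debian version string
    (as read from /etc/debian_version, e.g. "11.7")."""
    common = ["--concurrency", "99", "--pool", "gevent", "--loglevel", "DEBUG"]
    if debian_version.startswith("10."):
        # Celery 4 style: subcommand first, options after
        return ["worker", "--app", app] + common
    # Celery 5 style: global --app before the subcommand
    return ["--app", app, "worker"] + common
```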
@@ -58,15 +58,24 @@ pvc:
    # Per-host TFTP path (almost always "/host" under "root_path"; must be writable)
    host_path: "/srv/tftp/pvc-installer/host"

+  # Debian repository configuration
+  repo:
+    # Mirror path; defaults to using the apt-cacher-ng instance located on the current machine
+    # Replace "10.199.199.254" if you change "dhcp" -> "address" above
+    mirror: http://10.199.199.254:3142/ftp.debian.org/debian
+
+    # Default Debian release for new clusters. Must be supported by PVC ("buster", "bullseye", "bookworm").
+    release: bookworm
+
  # PVC Ansible repository configuration
-  # Note: If "path" does not exist, "remote" will be cloned to it via Git using SSH private key "keyfile".
+  # Note: If "path" does not exist, "remote" will be cloned to it via Git using SSH private key "key_file".
  # Note: The VCS will be refreshed regularly via the API in response to webhooks.
  ansible:
    # Path to the VCS repository
    path: "/var/home/joshua/pvc"

    # Path to the deploy key (if applicable) used to clone and pull the repository
-    keyfile: "/var/home/joshua/id_ed25519.joshua.key"
+    key_file: "/var/home/joshua/id_ed25519.joshua.key"

    # Git remote URI for the repository
    remote: "ssh://git@git.bonifacelabs.ca:2222/bonifacelabs/pvc.git"
@@ -77,6 +86,9 @@ pvc:
    # Clusters configuration file
    clusters_file: "clusters.yml"

+    # Lock file to use for Git interaction
+    lock_file: "/run/pvcbootstrapd.lock"
+
    # Filenames of the various group_vars components of a cluster
    # Generally with pvc-ansible this will contain 2 files: "base.yml", and "pvc.yml"; refer to the
    # pvc-ansible documentation and examples for details on these files.
@@ -21,12 +21,16 @@ pvc:
  tftp:
    root_path: "ROOT_DIRECTORY/tftp"
    host_path: "ROOT_DIRECTORY/tftp/host"
+  repo:
+    mirror: http://BOOTSTRAP_ADDRESS:3142/UPSTREAM_MIRROR
+    release: DEBIAN_RELEASE
  ansible:
    path: "ROOT_DIRECTORY/repo"
-    keyfile: "ROOT_DIRECTORY/id_ed25519"
+    key_file: "ROOT_DIRECTORY/id_ed25519"
    remote: "GIT_REMOTE"
    branch: "GIT_BRANCH"
    clusters_file: "clusters.yml"
+    lock_file: "/run/pvcbootstrapd.lock"
    cspec_files:
      base: "base.yml"
      pvc: "pvc.yml"
@@ -121,6 +121,7 @@ def read_config():
        o_queue = o_base["queue"]
        o_dhcp = o_base["dhcp"]
        o_tftp = o_base["tftp"]
+        o_repo = o_base["repo"]
        o_ansible = o_base["ansible"]
        o_notifications = o_base["notifications"]
    except KeyError as k:
@@ -178,8 +179,17 @@ def read_config():
                f"Missing second-level key '{key}' under 'tftp'"
            )

+    # Get the Repo configuration
+    for key in ["mirror", "release"]:
+        try:
+            config[f"repo_{key}"] = o_repo[key]
+        except Exception:
+            raise MalformedConfigurationError(
+                f"Missing second-level key '{key}' under 'repo'"
+            )
+
    # Get the Ansible configuration
-    for key in ["path", "keyfile", "remote", "branch", "clusters_file"]:
+    for key in ["path", "key_file", "remote", "branch", "clusters_file", "lock_file"]:
        try:
            config[f"ansible_{key}"] = o_ansible[key]
        except Exception:
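The configuration-validation pattern used in `read_config()` above — copy each required key into a flat config dict, raising a descriptive error naming the missing key — can be shown as a minimal standalone sketch. `MalformedConfigurationError` is redefined here so the sketch runs on its own; the generic `extract_section` helper is illustrative, not the daemon's actual function.

```python
class MalformedConfigurationError(Exception):
    """Raised when a required configuration key is absent."""
    pass


def extract_section(config: dict, section: dict,
                    section_name: str, keys: list[str]) -> None:
    """Flatten required keys of one config section into the global config dict,
    prefixing each with the section name (e.g. "repo_mirror")."""
    for key in keys:
        try:
            config[f"{section_name}_{key}"] = section[key]
        except KeyError:
            raise MalformedConfigurationError(
                f"Missing second-level key '{key}' under '{section_name}'"
            )
```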
@@ -54,7 +54,7 @@ def run_bootstrap(config, cspec, cluster, nodes):
    logger.info("Waiting 60s before starting Ansible bootstrap.")
    sleep(60)

-    logger.info("Starting Ansible bootstrap of cluster {cluster.name}")
+    logger.info(f"Starting Ansible bootstrap of cluster {cluster.name}")
    notifications.send_webhook(config, "begin", f"Cluster {cluster.name}: Starting Ansible bootstrap")

    # Run the Ansible playbooks
@@ -66,8 +66,8 @@ def run_bootstrap(config, cspec, cluster, nodes):
        limit=f"{cluster.name}",
        playbook=f"{config['ansible_path']}/pvc.yml",
        extravars={
-            "ansible_ssh_private_key_file": config["ansible_keyfile"],
-            "bootstrap": "yes",
+            "ansible_ssh_private_key_file": config["ansible_key_file"],
+            "do_bootstrap": "yes",
        },
        forks=len(nodes),
        verbosity=2,
@@ -76,7 +76,7 @@ def run_bootstrap(config, cspec, cluster, nodes):
    logger.info("{}: {}".format(r.status, r.rc))
    logger.info(r.stats)
    if r.rc == 0:
-        git.commit_repository(config)
+        git.commit_repository(config, f"Generated files for cluster '{cluster.name}'")
        git.push_repository(config)
        notifications.send_webhook(config, "success", f"Cluster {cluster.name}: Completed Ansible bootstrap")
    else:
@@ -67,7 +67,7 @@ def init_database(config):
            (id INTEGER PRIMARY KEY AUTOINCREMENT,
             cluster INTEGER NOT NULL,
             state TEXT NOT NULL,
-             name TEXT UNIQUE NOT NULL,
+             name TEXT NOT NULL,
             nodeid INTEGER NOT NULL,
             bmc_macaddr TEXT NOT NULL,
             bmc_ipaddr TEXT NOT NULL,
@@ -22,6 +22,7 @@
import os.path
import git
import yaml
+from filelock import FileLock

import pvcbootstrapd.lib.notifications as notifications
@@ -36,7 +37,7 @@ def init_repository(config):
    Clone the Ansible git repository
    """
    try:
-        git_ssh_cmd = f"ssh -i {config['ansible_keyfile']} -o StrictHostKeyChecking=no"
+        git_ssh_cmd = f"ssh -i {config['ansible_key_file']} -o StrictHostKeyChecking=no"
        if not os.path.exists(config["ansible_path"]):
            print(
                f"First run: cloning repository {config['ansible_remote']} branch {config['ansible_branch']} to {config['ansible_path']}"
@@ -60,61 +61,68 @@ def pull_repository(config):
    """
    Pull (with rebase) the Ansible git repository
    """
-    logger.info(f"Updating local configuration repository {config['ansible_path']}")
-    try:
-        git_ssh_cmd = f"ssh -i {config['ansible_keyfile']} -o StrictHostKeyChecking=no"
-        g = git.cmd.Git(f"{config['ansible_path']}")
-        g.pull(rebase=True, env=dict(GIT_SSH_COMMAND=git_ssh_cmd))
-        g.submodule("update", "--init", env=dict(GIT_SSH_COMMAND=git_ssh_cmd))
-    except Exception as e:
-        logger.warn(e)
-        notifications.send_webhook(config, "failure", "Failed to update Git repository")
+    with FileLock(config['ansible_lock_file']):
+        logger.info(f"Updating local configuration repository {config['ansible_path']}")
+        try:
+            git_ssh_cmd = f"ssh -i {config['ansible_key_file']} -o StrictHostKeyChecking=no"
+            g = git.cmd.Git(f"{config['ansible_path']}")
+            logger.debug("Performing git pull")
+            g.pull(rebase=True, env=dict(GIT_SSH_COMMAND=git_ssh_cmd))
+            logger.debug("Performing git submodule update")
+            g.submodule("update", "--init", env=dict(GIT_SSH_COMMAND=git_ssh_cmd))
+            g.submodule("update", env=dict(GIT_SSH_COMMAND=git_ssh_cmd))
+        except Exception as e:
+            logger.warn(e)
+            notifications.send_webhook(config, "failure", "Failed to update Git repository")
+    logger.info("Completed repository synchronization")
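The change above serializes all git pull/commit/push operations behind the new `lock_file` using the third-party `filelock` package, so concurrent worker tasks cannot interleave repository operations. A minimal stdlib-only stand-in showing the same idea (naive, without `filelock`'s waiting and timeout behavior):

```python
import os
from contextlib import contextmanager


@contextmanager
def file_lock(path: str):
    """Naive exclusive lock: create the lock file atomically (O_EXCL fails if
    it already exists), and remove it on exit. Illustrative only; the real
    code uses filelock.FileLock, which blocks instead of failing."""
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    try:
        yield
    finally:
        os.close(fd)
        os.unlink(path)
```

Usage mirrors the daemon's pattern: `with file_lock("/run/pvcbootstrapd.lock"): ...` around each repository operation.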
-def commit_repository(config):
+def commit_repository(config, message="Generic commit"):
     """
     Commit uncommitted changes to the Ansible git repository
     """
+    with FileLock(config['ansible_lock_file']):
         logger.info(
             f"Committing changes to local configuration repository {config['ansible_path']}"
         )

         try:
             g = git.cmd.Git(f"{config['ansible_path']}")
             g.add("--all")
             commit_env = {
                 "GIT_COMMITTER_NAME": "PVC Bootstrap",
                 "GIT_COMMITTER_EMAIL": "git@pvcbootstrapd",
             }
             g.commit(
                 "-m",
                 "Automated commit from PVC Bootstrap Ansible subsystem",
+                "-m",
+                message,
                 author="PVC Bootstrap <git@pvcbootstrapd>",
                 env=commit_env,
             )
             notifications.send_webhook(config, "success", "Successfully committed to Git repository")
         except Exception as e:
             logger.warn(e)
             notifications.send_webhook(config, "failure", "Failed to commit to Git repository")
 def push_repository(config):
     """
     Push changes to the default remote
     """
+    with FileLock(config['ansible_lock_file']):
         logger.info(
             f"Pushing changes from local configuration repository {config['ansible_path']}"
         )

         try:
-            git_ssh_cmd = f"ssh -i {config['ansible_keyfile']} -o StrictHostKeyChecking=no"
+            git_ssh_cmd = f"ssh -i {config['ansible_key_file']} -o StrictHostKeyChecking=no"
             g = git.Repo(f"{config['ansible_path']}")
             origin = g.remote(name="origin")
             origin.push(env=dict(GIT_SSH_COMMAND=git_ssh_cmd))
             notifications.send_webhook(config, "success", "Successfully pushed Git repository")
         except Exception as e:
             logger.warn(e)
             notifications.send_webhook(config, "failure", "Failed to push Git repository")


 def load_cspec_yaml(config):
@@ -43,7 +43,7 @@ def run_paramiko(config, node_address):
    ssh_client.connect(
        hostname=node_address,
        username=config["deploy_username"],
-        key_filename=config["ansible_keyfile"],
+        key_filename=config["ansible_key_file"],
    )
    yield ssh_client
    ssh_client.close()
@@ -69,6 +69,7 @@ def run_hook_osddb(config, targets, args):
        stdin, stdout, stderr = c.exec_command(pvc_cmd_string)
        logger.debug(stdout.readlines())
        logger.debug(stderr.readlines())
+        return stdout.channel.recv_exit_status()


def run_hook_osd(config, targets, args):
@@ -83,13 +84,14 @@ def run_hook_osd(config, targets, args):
        weight = args.get("weight", 1)
        ext_db_flag = args.get("ext_db", False)
        ext_db_ratio = args.get("ext_db_ratio", 0.05)
+        osd_count = args.get("osd_count", 1)

        logger.info(f"Creating OSD on node {node_name} device {device} weight {weight}")

        # Using a direct command on the target here is somewhat messy, but avoids many
        # complexities of determining a valid API listen address, etc.
        pvc_cmd_string = (
-            f"pvc storage osd add --yes {node_name} {device} --weight {weight}"
+            f"pvc storage osd add --yes {node_name} {device} --weight {weight} --osd-count {osd_count}"
        )
        if ext_db_flag:
            pvc_cmd_string = f"{pvc_cmd_string} --ext-db --ext-db-ratio {ext_db_ratio}"
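The command construction in `run_hook_osd()` above can be isolated into a self-contained sketch, showing how the hook arguments (including the new `--osd-count` flag and the optional external-DB options) become a `pvc storage osd add` invocation. The helper name is illustrative; the flag strings match the hunk above.

```python
def build_osd_add_command(node_name: str, device: str, args: dict) -> str:
    """Build the 'pvc storage osd add' command string from hook args,
    mirroring the defaults used in run_hook_osd()."""
    weight = args.get("weight", 1)
    ext_db_flag = args.get("ext_db", False)
    ext_db_ratio = args.get("ext_db_ratio", 0.05)
    osd_count = args.get("osd_count", 1)
    cmd = (
        f"pvc storage osd add --yes {node_name} {device}"
        f" --weight {weight} --osd-count {osd_count}"
    )
    if ext_db_flag:
        cmd = f"{cmd} --ext-db --ext-db-ratio {ext_db_ratio}"
    return cmd
```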
@@ -98,6 +100,7 @@ def run_hook_osd(config, targets, args):
        stdin, stdout, stderr = c.exec_command(pvc_cmd_string)
        logger.debug(stdout.readlines())
        logger.debug(stderr.readlines())
+        return stdout.channel.recv_exit_status()


def run_hook_pool(config, targets, args):
@@ -127,7 +130,7 @@ def run_hook_pool(config, targets, args):
            logger.debug(stderr.readlines())

        # This only runs once on whatever the first node is
        break
+    return stdout.channel.recv_exit_status()


def run_hook_network(config, targets, args):
@@ -191,7 +194,7 @@ def run_hook_network(config, targets, args):
            logger.debug(stderr.readlines())

        # This only runs once on whatever the first node is
        break
+    return stdout.channel.recv_exit_status()


def run_hook_copy(config, targets, args):
@@ -217,11 +220,14 @@ def run_hook_copy(config, targets, args):
            tc.chmod(dfile, int(dmode, 8))
        tc.close()

+    return 0
+

def run_hook_script(config, targets, args):
    """
    Run a script on the targets
    """
+    return_status = 0
    for node in targets:
        node_name = node.name
        node_address = node.host_ipaddr
@@ -272,6 +278,10 @@ def run_hook_script(config, targets, args):
        stdin, stdout, stderr = c.exec_command(remote_command)
        logger.debug(stdout.readlines())
        logger.debug(stderr.readlines())
+        if stdout.channel.recv_exit_status() != 0:
+            return_status = stdout.channel.recv_exit_status()
+
+    return return_status


def run_hook_webhook(config, targets, args):
@@ -345,7 +355,9 @@ def run_hooks(config, cspec, cluster, nodes):
        # Run the hook function
        try:
            notifications.send_webhook(config, "begin", f"Cluster {cluster.name}: Running hook task '{hook_name}'")
-            hook_functions[hook_type](config, target_nodes, hook_args)
+            retcode = hook_functions[hook_type](config, target_nodes, hook_args)
+            if retcode > 0:
+                raise Exception(f"Hook returned with code {retcode}")
            notifications.send_webhook(config, "success", f"Cluster {cluster.name}: Completed hook task '{hook_name}'")
        except Exception as e:
            logger.warning(f"Error running hook: {e}")
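The changes above make each hook function return its remote exit status, and `run_hooks()` now treats any nonzero return as a failure. A standalone sketch of that dispatch pattern (hook names, the failure list, and the simplified signatures are illustrative, not the daemon's API):

```python
def run_hooks(hook_functions: dict, hooks: list) -> list[str]:
    """Run each (hook_type, args) pair; collect the names of hooks whose
    function raised or returned a nonzero exit status."""
    failures = []
    for hook_type, hook_args in hooks:
        try:
            retcode = hook_functions[hook_type](hook_args)
            if retcode > 0:
                raise Exception(f"Hook returned with code {retcode}")
        except Exception:
            failures.append(hook_type)
    return failures
```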
@@ -84,3 +84,16 @@ def set_boot_state(config, cspec, data, state):
    db.update_node_state(config, cspec_cluster, cspec_hostname, state)
    node = db.get_node(config, cspec_cluster, name=cspec_hostname)
    logger.debug(node)


+def set_completed(config, cspec, cluster):
+    nodes = list()
+    for bmc_macaddr in cspec["bootstrap"]:
+        if cspec["bootstrap"][bmc_macaddr]["node"]["cluster"] == cluster:
+            nodes.append(cspec["bootstrap"][bmc_macaddr])
+    for node in nodes:
+        cspec_cluster = node["node"]["cluster"]
+        cspec_hostname = node["node"]["hostname"]
+        db.update_node_state(config, cspec_cluster, cspec_hostname, "completed")
+        node = db.get_node(config, cspec_cluster, name=cspec_hostname)
+        logger.debug(node)
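The selection logic inside the new `set_completed()` above — walk the bootstrap map, keep the entries belonging to one cluster, and derive the (cluster, hostname) pairs to mark "completed" — can be sketched standalone. The helper name and return shape are illustrative:

```python
def nodes_to_complete(cspec: dict, cluster: str) -> list[tuple[str, str]]:
    """Return (cluster, hostname) pairs for every bootstrap-map entry
    belonging to the given cluster."""
    pairs = []
    for bmc_macaddr, entry in cspec["bootstrap"].items():
        if entry["node"]["cluster"] == cluster:
            pairs.append((entry["node"]["cluster"], entry["node"]["hostname"]))
    return pairs
```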
@@ -66,8 +66,8 @@ def add_preseed(config, cspec_node, host_macaddr, system_drive_target):

    # We use the dhcp_address here to allow the listen_address to be 0.0.0.0
    rendered = template.render(
-        debrelease=cspec_node.get("config", {}).get("release"),
-        debmirror=cspec_node.get("config", {}).get("mirror"),
+        debrelease=config.get("repo_release"),
+        debmirror=config.get("repo_mirror"),
        addpkglist=add_packages,
        filesystem=cspec_node.get("config", {}).get("filesystem"),
        skip_blockcheck=False,
@@ -50,24 +50,38 @@ def dnsmasq_checkin(config, data):
    )
    cspec = git.load_cspec_yaml(config)
    is_in_bootstrap_map = True if data["macaddr"] in cspec["bootstrap"] else False
-    if is_in_bootstrap_map:
-        notifications.send_webhook(config, "info", f"New host checkin from MAC {data['macaddr']} as host {cspec['bootstrap'][data['macaddr']]['node']['fqdn']} in cluster {cspec['bootstrap'][data['macaddr']]['node']['cluster']}")
-        if (
-            cspec["bootstrap"][data["macaddr"]]["bmc"].get("redfish", None)
-            is not None
-        ):
-            if cspec["bootstrap"][data["macaddr"]]["bmc"]["redfish"]:
-                is_redfish = True
-            else:
-                is_redfish = False
-        else:
-            is_redfish = redfish.check_redfish(config, data)
-
-        logger.info(f"Is device '{data['macaddr']}' Redfish capable? {is_redfish}")
-        if is_redfish:
-            redfish.redfish_init(config, cspec, data)
-    else:
-        logger.warn(f"Device '{data['macaddr']}' not in bootstrap map; ignoring.")
+    try:
+        if is_in_bootstrap_map:
+            cspec_cluster = cspec["bootstrap"][data["macaddr"]]["node"]["cluster"]
+            is_registered = True if data["macaddr"] in [x.bmc_macaddr for x in db.get_nodes_in_cluster(config, cspec_cluster)] else False
+        else:
+            is_registered = False
+    except Exception:
+        is_registered = False
+
+    if not is_in_bootstrap_map:
+        logger.warn(f"Device '{data['macaddr']}' not in bootstrap map; ignoring.")
+        return
+
+    if is_registered:
+        logger.info(f"Device '{data['macaddr']}' has already been bootstrapped; ignoring.")
+        return
+
+    notifications.send_webhook(config, "info", f"New host checkin from MAC {data['macaddr']} as host {cspec['bootstrap'][data['macaddr']]['node']['fqdn']} in cluster {cspec['bootstrap'][data['macaddr']]['node']['cluster']}")
+    if (
+        cspec["bootstrap"][data["macaddr"]]["bmc"].get("redfish", None)
+        is not None
+    ):
+        if cspec["bootstrap"][data["macaddr"]]["bmc"]["redfish"]:
+            is_redfish = True
+        else:
+            is_redfish = False
+    else:
+        is_redfish = redfish.check_redfish(config, data)
+
+    logger.info(f"Is device '{data['macaddr']}' Redfish capable? {is_redfish}")
+    if is_redfish:
+        redfish.redfish_init(config, cspec, data)

    return
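The decision flow in `dnsmasq_checkin()` above can be reduced to a pure classification function: given the bootstrap map and the set of already-registered BMC MACs, decide what to do with a DHCP check-in. The return labels are illustrative, not part of the daemon's API:

```python
def classify_checkin(macaddr: str, bootstrap_map: dict,
                     registered_macs: set) -> str:
    """Classify a dnsmasq check-in by BMC MAC address."""
    if macaddr not in bootstrap_map:
        return "ignore-unknown"        # not in bootstrap map; ignored
    if macaddr in registered_macs:
        return "ignore-registered"     # already bootstrapped; ignored
    if bootstrap_map[macaddr]["bmc"].get("redfish"):
        return "redfish-init"          # explicitly flagged Redfish-capable
    return "probe-redfish"             # unknown; probe the BMC to find out
```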
@@ -140,11 +154,9 @@ def host_checkin(config, data):

        hooks.run_hooks(config, cspec, cluster, ready_nodes)

-        target_state = "completed"
-        for node in all_nodes:
-            host.set_boot_state(config, cspec, data, target_state)
+        host.set_completed(config, cspec, cspec_cluster)

        # Hosts will now power down ready for real activation in production
-        sleep(60)
+        sleep(300)
        cluster = db.update_cluster_state(config, cspec_cluster, "completed")
        notifications.send_webhook(config, "completed", f"Cluster {cspec_cluster}: PVC bootstrap deployment completed")
@@ -196,16 +196,19 @@ class RedfishSession:
logger.debug(f"POST payload: {payload}")

response = requests.post(url, data=payload, headers=self.headers, verify=False)
logger.debug(f"Response: {response.status_code}")

if response.status_code in [200, 201, 204]:
if response.status_code in [201, 204]:
return {"response": "ok"}
elif response.status_code in [200]:
try:
return response.json()
except json.decoder.JSONDecodeError as e:
except Exception:
return {"json_err": e}
else:
try:
rinfo = response.json()["error"]["@Message.ExtendedInfo"][0]
except json.decoder.JSONDecodeError:
except Exception:
logger.debug(response)
raise

@@ -576,6 +579,7 @@ def set_power_state(session, system_root, redfish_vendor, state):
"""
Set the system power state to the desired state
"""
logger.debug(f"Calling set_power_state with {session}, {system_root}, {redfish_vendor}, {state}")
state_values = {
"default": {
"on": "On",
@@ -715,8 +719,8 @@ def redfish_init(config, cspec, data):
cspec_hostname = cspec_node["node"]["hostname"]
cspec_fqdn = cspec_node["node"]["fqdn"]

logger.info("Waiting 60 seconds for system normalization")
sleep(60)
logger.info("Waiting 30 seconds for system normalization")
sleep(30)

notifications.send_webhook(config, "begin", f"Cluster {cspec_cluster}: Beginning Redfish initialization of host {cspec_fqdn}")

@@ -748,10 +752,11 @@ def redfish_init(config, cspec, data):
return
notifications.send_webhook(config, "success", f"Cluster {cspec_cluster}: Logged in to Redfish for host {cspec_fqdn} at {bmc_host}")

logger.info("Waiting 60 seconds for system normalization")
sleep(60)
logger.info("Waiting 30 seconds for system normalization")
sleep(30)

logger.info("Characterizing node...")
notifications.send_webhook(config, "begin", f"Cluster {cspec_cluster}: Beginning Redfish characterization of host {cspec_fqdn} at {bmc_host}")
try:

# Get Redfish bases
@@ -791,24 +796,29 @@ def redfish_init(config, cspec, data):
try:
ethernet_root = system_detail["EthernetInterfaces"]["@odata.id"].rstrip("/")
ethernet_detail = session.get(ethernet_root)
logger.debug(f"Found Ethernet detail: {ethernet_detail}")
embedded_ethernet_detail_members = [e for e in ethernet_detail["Members"] if "Embedded" in e["@odata.id"]]
embedded_ethernet_detail_members.sort(key = lambda k: k["@odata.id"])
logger.debug(f"Found Ethernet members: {embedded_ethernet_detail_members}")
first_interface_root = embedded_ethernet_detail_members[0]["@odata.id"].rstrip("/")
first_interface_detail = session.get(first_interface_root)
# Something went wrong, so fall back
except KeyError:
except Exception:
first_interface_detail = dict()

logger.debug(f"First interface detail: {first_interface_detail}")
logger.debug(f"HostCorrelation detail: {system_detail.get('HostCorrelation', {})}")
# Try to get the MAC address directly from the interface detail (Redfish standard)
logger.debug("Try to get the MAC address directly from the interface detail (Redfish standard)")
if first_interface_detail.get("MACAddress") is not None:
logger.debug("Try to get the MAC address directly from the interface detail (Redfish standard)")
bootstrap_mac_address = first_interface_detail["MACAddress"].strip().lower()
# Try to get the MAC address from the HostCorrelation->HostMACAddress (HP DL360x G8)
elif len(system_detail.get("HostCorrelation", {}).get("HostMACAddress", [])) > 0:
logger.debug("Try to get the MAC address from the HostCorrelation (HP iLO)")
bootstrap_mac_address = (
system_detail["HostCorrelation"]["HostMACAddress"][0].strip().lower()
)
# We can't find it, so use a dummy value
# We can't find it, so abort
else:
logger.error("Could not find a valid MAC address for the bootstrap interface.")
return
@@ -877,43 +887,43 @@ def redfish_init(config, cspec, data):
return

# Adjust any BIOS settings
logger.info("Adjusting BIOS settings...")
try:
bios_root = system_detail.get("Bios", {}).get("@odata.id")
if bios_root is not None:
bios_detail = session.get(bios_root)
bios_attributes = list(bios_detail["Attributes"].keys())
for setting, value in cspec_node["bmc"].get("bios_settings", {}).items():
if setting not in bios_attributes:
continue

payload = {"Attributes": {setting: value}}
session.patch(f"{bios_root}/Settings", payload)
except Exception as e:
notifications.send_webhook(config, "failure", f"Cluster {cspec_cluster}: Failed to set BIOS settings for host {cspec_fqdn} at {bmc_host}. Check pvcbootstrapd logs and reset this host's BMC to retry.")
logger.error(f"Cluster {cspec_cluster}: Failed to set BIOS settings for host {cspec_fqdn} at {bmc_host}: {e}")
logger.error("Aborting Redfish configuration; reset BMC to retry.")
del session
return
if len(cspec_node["bmc"].get("bios_settings", {}).items()) > 0:
logger.info("Adjusting BIOS settings...")
try:
bios_root = system_detail.get("Bios", {}).get("@odata.id")
if bios_root is not None:
bios_detail = session.get(bios_root)
bios_attributes = list(bios_detail["Attributes"].keys())
for setting, value in cspec_node["bmc"].get("bios_settings", {}).items():
if setting not in bios_attributes:
continue
payload = {"Attributes": {setting: value}}
session.patch(f"{bios_root}/Settings", payload)
except Exception as e:
notifications.send_webhook(config, "failure", f"Cluster {cspec_cluster}: Failed to set BIOS settings for host {cspec_fqdn} at {bmc_host}. Check pvcbootstrapd logs and reset this host's BMC to retry.")
logger.error(f"Cluster {cspec_cluster}: Failed to set BIOS settings for host {cspec_fqdn} at {bmc_host}: {e}")
logger.error("Aborting Redfish configuration; reset BMC to retry.")
del session
return

# Adjust any Manager settings
logger.info("Adjusting Manager settings...")
try:
mgrattribute_root = f"{manager_root}/Attributes"
mgrattribute_detail = session.get(mgrattribute_root)
mgrattribute_attributes = list(mgrattribute_detail["Attributes"].keys())
for setting, value in cspec_node["bmc"].get("manager_settings", {}).items():
if setting not in mgrattribute_attributes:
continue

payload = {"Attributes": {setting: value}}
session.patch(mgrattribute_root, payload)
except Exception as e:
notifications.send_webhook(config, "failure", f"Cluster {cspec_cluster}: Failed to set BMC settings for host {cspec_fqdn} at {bmc_host}. Check pvcbootstrapd logs and reset this host's BMC to retry.")
logger.error(f"Cluster {cspec_cluster}: Failed to set BMC settings for host {cspec_fqdn} at {bmc_host}: {e}")
logger.error("Aborting Redfish configuration; reset BMC to retry.")
del session
return
if len(cspec_node["bmc"].get("manager_settings", {}).items()) > 0:
logger.info("Adjusting Manager settings...")
try:
mgrattribute_root = f"{manager_root}/Attributes"
mgrattribute_detail = session.get(mgrattribute_root)
mgrattribute_attributes = list(mgrattribute_detail["Attributes"].keys())
for setting, value in cspec_node["bmc"].get("manager_settings", {}).items():
if setting not in mgrattribute_attributes:
continue
payload = {"Attributes": {setting: value}}
session.patch(mgrattribute_root, payload)
except Exception as e:
notifications.send_webhook(config, "failure", f"Cluster {cspec_cluster}: Failed to set BMC settings for host {cspec_fqdn} at {bmc_host}. Check pvcbootstrapd logs and reset this host's BMC to retry.")
logger.error(f"Cluster {cspec_cluster}: Failed to set BMC settings for host {cspec_fqdn} at {bmc_host}: {e}")
logger.error("Aborting Redfish configuration; reset BMC to retry.")
del session
return

# Set boot override to Pxe for the installer boot
logger.info("Setting temporary PXE boot...")
@@ -952,7 +962,7 @@ def redfish_init(config, cspec, data):
node = db.get_node(config, cspec_cluster, name=cspec_hostname)

# Graceful shutdown of the machine
notifications.send_webhook(config, "info", f"Cluster {cspec_cluster}: Powering off host {cspec_fqdn}")
notifications.send_webhook(config, "info", f"Cluster {cspec_cluster}: Shutting down host {cspec_fqdn}")
set_power_state(session, system_root, redfish_vendor, "GracefulShutdown")
system_power_state = "On"
while system_power_state != "Off":
@@ -964,6 +974,8 @@ def redfish_init(config, cspec, data):
# Turn off the indicator to indicate bootstrap has completed
set_indicator_state(session, system_root, redfish_vendor, "off")

notifications.send_webhook(config, "success", f"Cluster {cspec_cluster}: Powered off host {cspec_fqdn}")

# We must delete the session
del session
return
@@ -21,16 +21,18 @@

import os.path
import shutil
from subprocess import run

import pvcbootstrapd.lib.notifications as notifications


def build_tftp_repository(config):
# Generate an installer config
build_cmd = f"{config['ansible_path']}/pvc-installer/buildpxe.sh -o {config['tftp_root_path']} -u {config['deploy_username']}"
print(f"Building TFTP contents via pvc-installer command: {build_cmd}")
notifications.send_webhook(config, "begin", f"Building TFTP contents via pvc-installer command: {build_cmd}")
os.system(build_cmd)
build_cmd = [ f"{config['ansible_path']}/pvc-installer/buildpxe.sh", "-o", config['tftp_root_path'], "-u", config['deploy_username'], "-m", config["repo_mirror"] ]
print(f"Building TFTP contents via pvc-installer command: {' '.join(build_cmd)}")
notifications.send_webhook(config, "begin", f"Building TFTP contents via pvc-installer command: {' '.join(build_cmd)}")
ret = run(build_cmd)
return True if ret.returncode == 0 else False


def init_tftp(config):
@@ -43,8 +45,13 @@ def init_tftp(config):
os.makedirs(config["tftp_root_path"])
os.makedirs(config["tftp_host_path"])
shutil.copyfile(
f"{config['ansible_keyfile']}.pub", f"{config['tftp_root_path']}/keys.txt"
f"{config['ansible_key_file']}.pub", f"{config['tftp_root_path']}/keys.txt"
)

build_tftp_repository(config)
notifications.send_webhook(config, "success", "First run: successfully initialized TFTP root and contents")
result = build_tftp_repository(config)
if result:
print("First run: successfully initialized TFTP root and contents")
notifications.send_webhook(config, "success", "First run: successfully initialized TFTP root and contents")
else:
print("First run: failed to initialize TFTP root and contents; see logs above")
notifications.send_webhook(config, "failure", "First run: failed to initialize TFTP root and contents; check pvcbootstrapd logs")
@@ -95,12 +95,35 @@ if [[ -z ${deploy_username} ]]; then
fi
echo

echo "Please enter an upstream Debian mirror (hostname+directory without scheme) to use (e.g. ftp.debian.org/debian):"
echo -n "[ftp.debian.org/debian] > "
read upstream_mirror
if [[ -z ${upstream_mirror} ]]; then
upstream_mirror="ftp.debian.org/debian"
fi
echo

echo "Please enter the default Debian release for new clusters (e.g. 'bullseye', 'bookworm'):"
echo -n "[bookworm] > "
read debian_release
if [[ -z ${debian_release} ]]; then
debian_release="bookworm"
fi
echo

echo "Proceeding with setup!"
echo

echo "Installing APT dependencies..."
sudo apt-get update
sudo apt-get install --yes vlan iptables dnsmasq redis python3 python3-pip python3-requests sqlite3 celery pxelinux syslinux-common live-build debootstrap uuid-runtime qemu-user-static
sudo apt-get install --yes vlan iptables dnsmasq redis python3 python3-pip python3-requests python3-git python3-ansible-runner python3-filelock python3-flask python3-paramiko python3-flask-restful python3-gevent python3-redis sqlite3 celery pxelinux syslinux-common live-build debootstrap uuid-runtime qemu-user-static apt-cacher-ng

echo "Configuring apt-cacher-ng..."
sudo systemctl enable --now apt-cacher-ng
if ! grep -q ${upstream_mirror} /etc/apt-cacher-ng/backends_debian; then
echo "http://${upstream_mirror}" | sudo tee -a /etc/apt-cacher-ng/backends_debian &>/dev/null
sudo systemctl restart apt-cacher-ng
fi

echo "Configuring dnsmasq..."
sudo systemctl disable --now dnsmasq
@@ -115,7 +138,7 @@ echo "Installing pvcbootstrapd..."
cp -a bootstrap-daemon ${root_directory}/pvcbootstrapd

echo "Installing PIP dependencies..."
sudo pip3 install -r ${root_directory}/pvcbootstrapd/requirements.txt
sudo pip3 install --break-system-packages -r ${root_directory}/pvcbootstrapd/requirements.txt

echo "Determining IP addresses..."
bootstrap_address="$( awk -F'.' '{ print $1"."$2"."$3".1" }' <<<"${bootstrap_network}" )"
@@ -131,6 +154,8 @@ sed -i "s|BOOTSTRAP_DHCPSTART|${bootstrap_dhcpstart}|" ${root_directory}/pvcboot
sed -i "s|BOOTSTRAP_DHCPEND|${bootstrap_dhcpend}|" ${root_directory}/pvcbootstrapd/pvcbootstrapd.yaml
sed -i "s|GIT_REMOTE|${git_remote}|" ${root_directory}/pvcbootstrapd/pvcbootstrapd.yaml
sed -i "s|GIT_BRANCH|${git_branch}|" ${root_directory}/pvcbootstrapd/pvcbootstrapd.yaml
sed -i "s|UPSTREAM_MIRROR|${upstream_mirror}|" ${root_directory}/pvcbootstrapd/pvcbootstrapd.yaml
sed -i "s|DEBIAN_RELEASE|${debian_release}|" ${root_directory}/pvcbootstrapd/pvcbootstrapd.yaml

echo "Creating network configuration for interface ${bootstrap_interface} (is vLAN? ${is_bootstrap_interface_vlan})..."
if [[ "${is_bootstrap_interface_vlan}" == "yes" ]]; then
@@ -241,6 +266,12 @@ case ${start_flag} in
;;
*)
echo
if [[ "${is_bootstrap_interface_vlan}" == "yes" ]]; then
sudo ifup vlan${bootstrap_vlan}
else
sudo ifup ${bootstrap_interface}
fi
sudo service apt-cacher-ng restart
export PVCD_CONFIG_FILE="${root_directory}/pvcbootstrapd/pvcbootstrapd.yaml"
${root_directory}/pvcbootstrapd/pvcbootstrapd.py --init-only
;;