Compare commits (270 commits): 31a5c8801f...0.9.56
CHANGELOG.md (+11)
@@ -1,5 +1,16 @@
 ## PVC Changelog
 
+###### [v0.9.56](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.56)
+
+**Breaking Change**: Existing provisioner scripts are no longer valid; new example scripts are provided.
+**Breaking Change**: OVA profiles now require an `ova` or `default_ova` provisioner script (use example) to function.
+
+* [API/Provisioner] Fundamentally revamps the provisioner script framework to provide more extensibility
+* [API/Provisioner] Adds example provisioner scripts for noop, ova, debootstrap, rinse, and pfsense
+* [API/Provisioner] Enforces the use of the ova provisioner script during new OVA uploads; existing uploads will not work
+* [Documentation] Updates the documentation around provisioner scripts and OVAs to reflect the above changes
+* [Node] Adds a new pvcautoready.service oneshot unit to replicate the on-boot-ready functionality of old pvc-flush.service unit
+
 ###### [v0.9.55](https://github.com/parallelvirtualcluster/pvc/releases/tag/v0.9.55)
 
 * Fixes a problem with the literal eval handler in the provisioner (again)
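The revamped framework summarized in the changelog above is demonstrated by the example scripts later in this compare: each script implements a `VMBuilderScript` class extending `VMBuilder`, providing five lifecycle methods (`setup`, `create`, `prepare`, `install`, `cleanup`). A minimal, hypothetical skeleton — with stub stand-ins for `pvcapid.vmbuilder.VMBuilder` and `ProvisioningError`, which are only importable inside the PVC API daemon — might look like:

```python
# Stub stand-ins for pvcapid.vmbuilder's VMBuilder and ProvisioningError;
# inside PVC these come from "from pvcapid.vmbuilder import VMBuilder, ProvisioningError".
class VMBuilder:
    def __init__(self, vm_name, vm_profile, vm_data):
        self.vm_name = vm_name
        self.vm_profile = vm_profile
        self.vm_data = vm_data


class ProvisioningError(Exception):
    pass


class VMBuilderScript(VMBuilder):
    def setup(self):
        # Validate arguments and dependencies before doing any work
        if not self.vm_data.get("volumes"):
            raise ProvisioningError("No volumes defined for this VM")

    def create(self):
        # Must return a fully-formed Libvirt XML document as a string
        return f"<domain type='kvm'><name>{self.vm_name}</name></domain>"

    def prepare(self):
        # Create/map disks and volumes ahead of install()
        pass

    def install(self):
        # Perform the actual OS installation
        pass

    def cleanup(self):
        # Undo any mounts/maps performed in prepare()/install()
        pass
```

The example scripts in this diff suggest the provisioner drives these methods in order and handles a raised `ProvisioningError` gracefully, avoiding dangling mounts and RBD maps.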
README.md

@@ -4,7 +4,7 @@
 <a href="https://github.com/parallelvirtualcluster/pvc"><img alt="License" src="https://img.shields.io/github/license/parallelvirtualcluster/pvc"/></a>
 <a href="https://github.com/psf/black"><img alt="Code style: Black" src="https://img.shields.io/badge/code%20style-black-000000.svg"/></a>
 <a href="https://github.com/parallelvirtualcluster/pvc/releases"><img alt="Release" src="https://img.shields.io/github/release-pre/parallelvirtualcluster/pvc"/></a>
-<a href="https://parallelvirtualcluster.readthedocs.io/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/parallelvirtualcluster/badge/?version=latest"/></a>
+<a href="https://docs.parallelvirtualcluster.org/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/parallelvirtualcluster/badge/?version=latest"/></a>
 </p>
 
 ## What is PVC?
@@ -38,12 +38,12 @@ The core node and API daemons, as well as the CLI API client, are written in Python
 
 ## Getting Started
 
-To get started with PVC, please see the [About](https://parallelvirtualcluster.readthedocs.io/en/latest/about/) page for general information about the project, and the [Getting Started](https://parallelvirtualcluster.readthedocs.io/en/latest/getting-started/) page for details on configuring your first cluster.
+To get started with PVC, please see the [About](https://docs.parallelvirtualcluster.org/en/latest/about/) page for general information about the project, and the [Getting Started](https://docs.parallelvirtualcluster.org/en/latest/getting-started/) page for details on configuring your first cluster.
 
 
 ## Changelog
 
-View the changelog in [CHANGELOG.md](https://github.com/parallelvirtualcluster/pvc/blob/master/CHANGELOG.md).
+View the changelog in [CHANGELOG.md](CHANGELOG.md).
 
 
 ## Screenshots
@@ -280,9 +280,12 @@ class VMBuilderScript(VMBuilder):
        from pvcapid.Daemon import config
        import daemon_lib.common as pvc_common
        import daemon_lib.ceph as pvc_ceph
        import os

        # First loop: Create the destination disks
        print("Creating destination disk volumes")
        for volume in self.vm_data["volumes"]:
            print(f"Processing volume {volume['volume_name']}")
            with open_zk(config) as zkhandler:
                success, message = pvc_ceph.add_volume(
                    zkhandler,
@@ -297,7 +300,9 @@ class VMBuilderScript(VMBuilder):
        )

        # Second loop: Map the destination disks
        print("Mapping destination disk volumes")
        for volume in self.vm_data["volumes"]:
            print(f"Processing volume {volume['volume_name']}")
            dst_volume_name = f"{self.vm_name}_{volume['disk_id']}"
            dst_volume = f"{volume['pool']}/{dst_volume_name}"

@@ -312,7 +317,9 @@ class VMBuilderScript(VMBuilder):
            raise ProvisioningError(f"Failed to map volume '{dst_volume}'.")

        # Third loop: Map the source disks
        print("Mapping source disk volumes")
        for volume in self.vm_data["volumes"]:
            print(f"Processing volume {volume['volume_name']}")
            src_volume_name = volume["volume_name"]
            src_volume = f"{volume['pool']}/{src_volume_name}"

@@ -326,7 +333,16 @@ class VMBuilderScript(VMBuilder):
            if not success:
                raise ProvisioningError(f"Failed to map volume '{src_volume}'.")

    # Fourth loop: Convert the source (usually VMDK) volume to the raw destination volume
    def install(self):
        """
        install(): Perform the installation

        Convert the mapped source volumes to the mapped destination volumes
        """

        # Run any imports first
        import daemon_lib.common as pvc_common

        for volume in self.vm_data["volumes"]:
            src_volume_name = volume["volume_name"]
            src_volume = f"{volume['pool']}/{src_volume_name}"
@@ -335,6 +351,9 @@ class VMBuilderScript(VMBuilder):
            dst_volume = f"{volume['pool']}/{dst_volume_name}"
            dst_devpath = f"/dev/rbd/{dst_volume}"

            print(
                f"Converting {volume['volume_format']} {src_volume} at {src_devpath} to {dst_volume} at {dst_devpath}"
            )
            retcode, stdout, stderr = pvc_common.run_os_command(
                f"qemu-img convert -C -f {volume['volume_format']} -O raw {src_devpath} {dst_devpath}"
            )
@@ -343,15 +362,6 @@ class VMBuilderScript(VMBuilder):
                     f"Failed to convert {volume['volume_format']} volume '{src_volume}' to raw volume '{dst_volume}' with qemu-img: {stderr}"
                 )
 
-    def install(self):
-        """
-        install(): Perform the installation
-
-        Noop for OVA deploys as no further tasks are performed.
-        """
-
-        pass
-
     def cleanup(self):
         """
         cleanup(): Perform any cleanup required due to prepare()/install()
@@ -361,6 +371,11 @@ class VMBuilderScript(VMBuilder):
        here, be warned that doing so might cause loops. Do this only if you really need to.
        """

        # Run any imports first
        from pvcapid.vmbuilder import open_zk
        from pvcapid.Daemon import config
        import daemon_lib.ceph as pvc_ceph

        for volume in list(reversed(self.vm_data["volumes"])):
            src_volume_name = volume["volume_name"]
            src_volume = f"{volume['pool']}/{src_volume_name}"
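The `install()` step above shells out to `qemu-img convert` via `pvc_common.run_os_command` to turn each mapped source image into the raw destination volume. Outside of PVC, the same command construction can be sketched with only the standard library (the function name, paths, and volume names here are illustrative, not part of PVC):

```python
import shlex


def build_convert_command(volume_format: str, src_devpath: str, dst_devpath: str) -> str:
    # Mirrors the qemu-img invocation used by the example OVA script:
    # read the source in its native format (-f), write raw output (-O raw),
    # and request copy-offloading (-C) where the backend supports it.
    return (
        f"qemu-img convert -C -f {shlex.quote(volume_format)} "
        f"-O raw {shlex.quote(src_devpath)} {shlex.quote(dst_devpath)}"
    )


cmd = build_convert_command("vmdk", "/dev/rbd/vms/src_sda", "/dev/rbd/vms/vm1_sda")
print(cmd)
# → qemu-img convert -C -f vmdk -O raw /dev/rbd/vms/src_sda /dev/rbd/vms/vm1_sda
```

Writing `-O raw` directly to the mapped `/dev/rbd/...` device avoids staging an intermediate image file on the hypervisor.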
@@ -485,7 +485,7 @@ class VMBuilderScript(VMBuilder):
             f"Installing system with debootstrap: debootstrap --include={','.join(deb_packages)} {deb_release} {temporary_directory} {deb_mirror}"
         )
         os.system(
-            "debootstrap --include={','.join(deb_packages)} {deb_release} {temporary_directory} {deb_mirror}"
+            f"debootstrap --include={','.join(deb_packages)} {deb_release} {temporary_directory} {deb_mirror}"
         )
 
         # Bind mount the devfs so we can grub-install later
@@ -556,6 +556,71 @@ After=multi-user.target
        fh.write(data)

        # Write the cloud-init configuration
        ci_cfg_file = "{}/etc/cloud/cloud.cfg".format(temporary_directory)
        with open(ci_cfg_file, "w") as fh:
            fh.write(
                """
disable_root: true

preserve_hostname: true

datasource:
  Ec2:
    metadata_urls: ["http://169.254.169.254:80"]
    max_wait: 30
    timeout: 30
    apply_full_imds_network_config: true

cloud_init_modules:
 - migrator
 - bootcmd
 - write-files
 - resizefs
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - ca-certs
 - ssh

cloud_config_modules:
 - mounts
 - ssh-import-id
 - locale
 - set-passwords
 - grub-dpkg
 - apt-pipelining
 - apt-configure
 - package-update-upgrade-install
 - timezone
 - disable-ec2-metadata
 - runcmd

cloud_final_modules:
 - rightscale_userdata
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message
 - power-state-change

system_info:
   distro: debian
   paths:
      cloud_dir: /var/lib/cloud/
      templates_dir: /etc/cloud/templates/
      upstart_dir: /etc/init/
   package_mirrors:
      - arches: [default]
        failsafe:
          primary: {deb_mirror}
"""
            ).format(deb_mirror=deb_mirror)

        # Due to device ordering within the Libvirt XML configuration, the first Ethernet interface
        # will always be on PCI bus ID 2, hence the name "ens2".
        # Write a DHCP stanza for ens2
@@ -626,9 +691,6 @@ GRUB_DISABLE_LINUX_UUID=false
         # Debian cloud images are affected, so who knows.
         os.system("systemctl enable cloud-init.target")
 
-        # Unmount the bound devfs
-        os.system("umount {}/dev".format(temporary_directory))
-
     def cleanup(self):
         """
         cleanup(): Perform any cleanup required due to prepare()/install()
@@ -650,6 +712,9 @@ GRUB_DISABLE_LINUX_UUID=false
         # Set the tempdir we used in the prepare() and install() steps
         temp_dir = "/tmp/target"
 
+        # Unmount the bound devfs
+        os.system("umount {}/dev".format(temporary_directory))
+
         # Use this construct for reversing the list, as the normal reverse() messes with the list
         for volume in list(reversed(self.vm_data["volumes"])):
             dst_volume_name = f"{self.vm_name}_{volume['disk_id']}"
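The one-character fix in the `@@ -485,7 +485,7 @@` hunk above is easy to miss: without the `f` prefix, Python treats the `{...}` expressions as literal text rather than interpolating them, so the broken version passed a garbage command line to the shell. A standalone demonstration (shortened: the real command also includes the target directory and mirror):

```python
deb_packages = ["ca-certificates", "curl"]
deb_release = "bullseye"

# Plain string: the {...} expressions are NOT evaluated
broken = "debootstrap --include={','.join(deb_packages)} {deb_release}"
# f-string: the expressions are evaluated when the string is built
fixed = f"debootstrap --include={','.join(deb_packages)} {deb_release}"

print(broken)  # braces pass through to the shell literally
print(fixed)   # → debootstrap --include=ca-certificates,curl bullseye
```

The preceding `print()` in the diff already used the `f` prefix, which is likely why the log message looked correct while the actual `os.system()` call did not.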
api-daemon/provisioner/examples/script/5-pfsense.py (new file, +918)
@@ -0,0 +1,918 @@
#!/usr/bin/env python3

# 5-pfsense.py - PVC Provisioner example script for pfSense install
# Part of the Parallel Virtual Cluster (PVC) system
#
# Copyright (C) 2018-2022 Joshua M. Boniface <joshua@boniface.me>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
###############################################################################

# This script provides an example of a PVC provisioner script. It will create a
# standard VM config, download and configure pfSense with Packer, and then copy
# the resulting raw disk image into the first RBD volume ready for first boot.
#
# This script has 4 custom arguments and will error if they are not properly configured:
#   pfsense_wan_iface: the (internal) interface name for the WAN, usually "vtnet0" or similar
#   pfsense_wan_dhcp: if set to any value (even empty), will use DHCP for the WAN interface
#                     and obsolete the following arguments
#   pfsense_wan_address: the static IPv4 address (including CIDR netmask) of the WAN interface
#   pfsense_wan_gateway: the default gateway IPv4 address of the WAN interface
#
# In addition, the following standard arguments can be utilized:
#   vm_fqdn: Sets an FQDN (hostname + domain); if unspecified, defaults to `vm_name` as the
#            hostname with no domain set.
#
# The resulting pfSense instance will use the default "root"/"pfsense" credentials and
# will support both serial and VNC interfaces; boot messages will only show on serial.
# SLAAC will be used for IPv6 on WAN in addition to the specified IPv4 configuration.
# A set of default-permit rules on the WAN interface are included to allow management on the
# WAN side, and these should be modified or removed once the system is configured.
# Finally, the Web Configurator is set to use HTTP only.
#
# Other than the above specified values, the new pfSense instance will be completely
# unconfigured and must then be adjusted as needed via the Web Configurator ASAP to ensure
# the system is not compromised.
#
# NOTE: Due to the nature of the Packer provisioning, this script will use approximately
# 2GB of RAM for tmpfs during the provisioning. Be careful on heavily-loaded nodes.

# This script can thus be used as an example or reference implementation of a
# PVC provisioner script and expanded upon as required.
# *** READ THIS SCRIPT THOROUGHLY BEFORE USING TO UNDERSTAND HOW IT WORKS. ***

# A script must implement the class "VMBuilderScript" which extends "VMBuilder",
# providing the 5 functions indicated. Detailed explanation of the role of each
# function is provided in context of the example; see the other examples for
# more potential uses.

# Within the VMBuilderScript class, several common variables are exposed through
# the parent VMBuilder class:
#   self.vm_name: The name of the VM from PVC's perspective
#   self.vm_id: The VM ID (numerical component of the vm_name) from PVC's perspective
#   self.vm_uuid: An automatically-generated UUID for the VM
#   self.vm_profile: The PVC provisioner profile name used for the VM
#   self.vm_data: A dictionary of VM data collected by the provisioner; as an example:
#   {
#     "ceph_monitor_list": [
#       "hv1.pvcstorage.tld",
#       "hv2.pvcstorage.tld",
#       "hv3.pvcstorage.tld"
#     ],
#     "ceph_monitor_port": "6789",
#     "ceph_monitor_secret": "96721723-8650-4a72-b8f6-a93cd1a20f0c",
#     "mac_template": null,
#     "networks": [
#       {
#         "eth_bridge": "vmbr1001",
#         "id": 72,
#         "network_template": 69,
#         "vni": "1001"
#       },
#       {
#         "eth_bridge": "vmbr101",
#         "id": 73,
#         "network_template": 69,
#         "vni": "101"
#       }
#     ],
#     "script": [contents of this file]
#     "script_arguments": {
#       "deb_mirror": "http://ftp.debian.org/debian",
#       "deb_release": "bullseye"
#     },
#     "system_architecture": "x86_64",
#     "system_details": {
#       "id": 78,
#       "migration_method": "live",
#       "name": "small",
#       "node_autostart": false,
#       "node_limit": null,
#       "node_selector": null,
#       "ova": null,
#       "serial": true,
#       "vcpu_count": 2,
#       "vnc": false,
#       "vnc_bind": null,
#       "vram_mb": 2048
#     },
#     "volumes": [
#       {
#         "disk_id": "sda",
#         "disk_size_gb": 4,
#         "filesystem": "ext4",
#         "filesystem_args": "-L=root",
#         "id": 9,
#         "mountpoint": "/",
#         "pool": "vms",
#         "source_volume": null,
#         "storage_template": 67
#       },
#       {
#         "disk_id": "sdb",
#         "disk_size_gb": 4,
#         "filesystem": "ext4",
#         "filesystem_args": "-L=var",
#         "id": 10,
#         "mountpoint": "/var",
#         "pool": "vms",
#         "source_volume": null,
#         "storage_template": 67
#       },
#       {
#         "disk_id": "sdc",
#         "disk_size_gb": 4,
#         "filesystem": "ext4",
#         "filesystem_args": "-L=log",
#         "id": 11,
#         "mountpoint": "/var/log",
#         "pool": "vms",
#         "source_volume": null,
#         "storage_template": 67
#       }
#     ]
#   }
#
# Any other information you may require must be obtained manually.

# WARNING:
#
# For safety reasons, the script runs in a modified chroot. It will have full access to
# the entire / (root partition) of the hypervisor, but read-only. In addition it has
# access to /dev, /sys, /run, and a fresh /tmp to write to; use /tmp/target (as
# convention) as the destination for any mounting of volumes and installation.
# Of course, in addition to this safety, it is VERY IMPORTANT to be aware that this
# script runs AS ROOT ON THE HYPERVISOR SYSTEM. You should never allow arbitrary,
# untrusted users the ability to add provisioning scripts even with this safeguard,
# since they could still do destructive things to /dev and the like!


# This import is always required here, as VMBuilder is used by the VMBuilderScript class
# and ProvisioningError is the primary exception that should be raised within the class.
from pvcapid.vmbuilder import VMBuilder, ProvisioningError


# Set up some variables for later; if you frequently use these tools, you might benefit from
# a local mirror, or store them on the hypervisor and adjust the prepare() tasks to use
# those local copies instead.
PACKER_VERSION = "1.8.2"
PACKER_URL = f"https://releases.hashicorp.com/packer/{PACKER_VERSION}/packer_{PACKER_VERSION}_linux_amd64.zip"
PFSENSE_VERSION = "2.5.2"
PFSENSE_ISO_URL = f"https://atxfiles.netgate.com/mirror/downloads/pfSense-CE-{PFSENSE_VERSION}-RELEASE-amd64.iso.gz"


# The VMBuilderScript class must be named as such, and extend VMBuilder.
class VMBuilderScript(VMBuilder):
    def setup(self):
        """
        setup(): Perform special setup steps or validation before proceeding

        Fetches Packer and the pfSense installer ISO, and prepares the Packer config.
        """

        # Run any imports first; as shown here, you can import anything from the PVC
        # namespace, as well as (of course) the main Python namespaces
        import daemon_lib.common as pvc_common
        import os

        # Ensure that our required runtime variables are defined

        if self.vm_data["script_arguments"].get("pfsense_wan_iface") is None:
            raise ProvisioningError(
                "Required script argument 'pfsense_wan_iface' not provided"
            )

        if self.vm_data["script_arguments"].get("pfsense_wan_dhcp") is None:
            for argument in [
                "pfsense_wan_address",
                "pfsense_wan_gateway",
            ]:
                if self.vm_data["script_arguments"].get(argument) is None:
                    raise ProvisioningError(
                        f"Required script argument '{argument}' not provided"
                    )

        # Ensure we have all dependencies installed on the provisioner system
        for dependency in "wget", "unzip", "gzip":
            retcode, stdout, stderr = pvc_common.run_os_command(f"which {dependency}")
            if retcode:
                # Raise a ProvisioningError for any exception; the provisioner will handle
                # this gracefully and properly, avoiding dangling mounts, RBD maps, etc.
                raise ProvisioningError(
                    f"Failed to find critical dependency: {dependency}"
                )

        # Create a temporary directory to use for Packer binaries/scripts
        packer_temp_dir = "/tmp/packer"

        if not os.path.isdir(packer_temp_dir):
            os.mkdir(f"{packer_temp_dir}")
            os.mkdir(f"{packer_temp_dir}/http")
            os.mkdir(f"{packer_temp_dir}/dl")

    def create(self):
        """
        create(): Create the VM libvirt schema definition

        This step *must* return a fully-formed Libvirt XML document as a string or the
        provisioning task will fail.

        This example leverages the built-in libvirt_schema objects provided by PVC; these
        can be used as-is, or replaced with your own schema(s) on a per-script basis.

        Even though we noop the rest of the script, we still create a fully-formed libvirt
        XML document here as a demonstration.
        """

        # Run any imports first
        import pvcapid.libvirt_schema as libvirt_schema
        import datetime
        import random

        # Create the empty schema document that we will append to and return at the end
        schema = ""

        # Prepare a description based on the VM profile
        description = (
            f"PVC provisioner @ {datetime.datetime.now()}, profile '{self.vm_profile}'"
        )

        # Format the header
        schema += libvirt_schema.libvirt_header.format(
            vm_name=self.vm_name,
            vm_uuid=self.vm_uuid,
            vm_description=description,
            vm_memory=self.vm_data["system_details"]["vram_mb"],
            vm_vcpus=self.vm_data["system_details"]["vcpu_count"],
            vm_architecture=self.vm_data["system_architecture"],
        )

        # Add the disk devices
        monitor_list = self.vm_data["ceph_monitor_list"]
        monitor_port = self.vm_data["ceph_monitor_port"]
        monitor_secret = self.vm_data["ceph_monitor_secret"]

        for volume in self.vm_data["volumes"]:
            schema += libvirt_schema.devices_disk_header.format(
                ceph_storage_secret=monitor_secret,
                disk_pool=volume["pool"],
                vm_name=self.vm_name,
                disk_id=volume["disk_id"],
            )
            for monitor in monitor_list:
                schema += libvirt_schema.devices_disk_coordinator.format(
                    coordinator_name=monitor,
                    coordinator_ceph_mon_port=monitor_port,
                )
            schema += libvirt_schema.devices_disk_footer

        # Add the special vhostmd device for hypervisor information inside the VM
        schema += libvirt_schema.devices_vhostmd

        # Add the network devices
        network_id = 0
        for network in self.vm_data["networks"]:
            vm_id_hex = "{:x}".format(int(self.vm_id % 16))
            net_id_hex = "{:x}".format(int(network_id % 16))

            if self.vm_data.get("mac_template") is not None:
                mac_prefix = "52:54:01"
                macgen_template = self.vm_data["mac_template"]
                eth_macaddr = macgen_template.format(
                    prefix=mac_prefix, vmid=vm_id_hex, netid=net_id_hex
                )
            else:
                mac_prefix = "52:54:00"
                random_octet_A = "{:x}".format(random.randint(16, 238))
                random_octet_B = "{:x}".format(random.randint(16, 238))
                random_octet_C = "{:x}".format(random.randint(16, 238))

                macgen_template = "{prefix}:{octetA}:{octetB}:{octetC}"
                eth_macaddr = macgen_template.format(
                    prefix=mac_prefix,
                    octetA=random_octet_A,
                    octetB=random_octet_B,
                    octetC=random_octet_C,
                )

            schema += libvirt_schema.devices_net_interface.format(
                eth_macaddr=eth_macaddr,
                eth_bridge=network["eth_bridge"],
            )

            network_id += 1

        # Add default devices
        schema += libvirt_schema.devices_default

        # Add serial device
        if self.vm_data["system_details"]["serial"]:
            schema += libvirt_schema.devices_serial.format(vm_name=self.vm_name)

        # Add VNC device
        if self.vm_data["system_details"]["vnc"]:
            if self.vm_data["system_details"]["vnc_bind"]:
                vm_vnc_bind = self.vm_data["system_details"]["vnc_bind"]
            else:
                vm_vnc_bind = "127.0.0.1"

            vm_vncport = 5900
            vm_vnc_autoport = "yes"

            schema += libvirt_schema.devices_vnc.format(
                vm_vncport=vm_vncport,
                vm_vnc_autoport=vm_vnc_autoport,
                vm_vnc_bind=vm_vnc_bind,
            )

        # Add SCSI controller
        schema += libvirt_schema.devices_scsi_controller

        # Add footer
        schema += libvirt_schema.libvirt_footer

        return schema

def prepare(self):
|
||||
"""
|
||||
prepare(): Prepare any disks/volumes for the install() step
|
||||
"""
|
||||
|
||||
# Run any imports first; as shown here, you can import anything from the PVC
|
||||
# namespace, as well as (of course) the main Python namespaces
|
||||
from pvcapid.vmbuilder import open_zk
|
||||
from pvcapid.Daemon import config
|
||||
import daemon_lib.common as pvc_common
|
||||
import daemon_lib.ceph as pvc_ceph
|
||||
import json
|
||||
import os
|
||||
|
||||
packer_temp_dir = "/tmp/packer"
|
||||
|
||||
# Download pfSense image file to temporary target directory
|
||||
print(f"Downloading pfSense ISO image from {PFSENSE_ISO_URL}")
|
||||
retcode, stdout, stderr = pvc_common.run_os_command(
|
||||
f"wget --output-document={packer_temp_dir}/dl/pfsense.iso.gz {PFSENSE_ISO_URL}"
|
||||
)
|
||||
if retcode:
|
||||
raise ProvisioningError(
|
||||
f"Failed to download pfSense image from {PFSENSE_ISO_URL}"
|
||||
)
|
||||
|
||||
# Extract pfSense image file under temporary target directory
|
||||
print(f"Extracting pfSense ISO image")
|
||||
retcode, stdout, stderr = pvc_common.run_os_command(
|
||||
f"gzip --decompress {packer_temp_dir}/dl/pfsense.iso.gz"
|
||||
)
|
||||
if retcode:
|
||||
raise ProvisioningError("Failed to extract pfSense ISO image")
|
||||
|
||||
# Download Packer to temporary target directory
|
||||
print(f"Downloading Packer from {PACKER_URL}")
|
||||
retcode, stdout, stderr = pvc_common.run_os_command(
|
||||
f"wget --output-document={packer_temp_dir}/packer.zip {PACKER_URL}"
|
||||
)
|
||||
if retcode:
|
||||
raise ProvisioningError(f"Failed to download Packer from {PACKER_URL}")
|
||||
|
||||
# Extract Packer under temporary target directory
|
||||
print(f"Extracting Packer binary")
|
||||
retcode, stdout, stderr = pvc_common.run_os_command(
|
||||
f"unzip {packer_temp_dir}/packer.zip -d {packer_temp_dir}"
|
||||
)
|
||||
if retcode:
|
||||
raise ProvisioningError("Failed to extract Packer binary")
|
||||
|
||||
# Output the Packer configuration
|
||||
print(f"Generating Packer configurations")
|
||||
first_volume = self.vm_data["volumes"][0]
|
||||
first_volume_size_mb = int(first_volume["disk_size_gb"]) * 1024
|
||||
|
||||
builder = {
    "builders": [
        {
            "type": "qemu",
            "vm_name": self.vm_name,
            "accelerator": "kvm",
            "memory": 1024,
            "headless": True,
            "disk_interface": "virtio",
            "disk_size": first_volume_size_mb,
            "format": "raw",
            "net_device": "virtio-net",
            "communicator": "none",
            "http_port_min": "8100",
            "http_directory": f"{packer_temp_dir}/http",
            "output_directory": f"{packer_temp_dir}/bin",
            "iso_urls": [f"{packer_temp_dir}/dl/pfsense.iso"],
            "iso_checksum": "none",
            "boot_wait": "3s",
            "boot_command": [
                "1",
                "<wait90>",
                # Run through the installer
                "<enter>",
                "<wait1>",
                "<enter>",
                "<wait1>",
                "<enter>",
                "<wait1>",
                "<enter>",
                "<wait1>",
                "<enter>",
                "<wait1>",
                "<enter>",
                "<wait1>",
                "<spacebar><enter>",
                "<wait1>",
                "<left><enter>",
                "<wait120>",
                "<enter>",
                "<wait1>",
                # Enter shell
                "<right><enter>",
                # Set up serial console
                "<wait1>",
                "echo '-S115200 -D' | tee /mnt/boot.config<enter>",
                "<wait1>",
                'sed -i.bak \'s/boot_serial="NO"/boot_serial="YES"/\' /mnt/boot/loader.conf<enter>',
                "<wait1>",
                "echo 'boot_multicons=\"YES\"' >> /mnt/boot/loader.conf<enter>",
                "<wait1>",
                "echo 'console=\"comconsole,vidconsole\"' >> /mnt/boot/loader.conf<enter>",
                "<wait1>",
                "echo 'comconsole_speed=\"115200\"' >> /mnt/boot/loader.conf<enter>",
                "<wait1>",
                "sed -i.bak '/^ttyu/s/off/on/' /mnt/etc/ttys<enter>",
                "<wait1>",
                # Grab the template configuration from the provisioner
                # We have to do DHCP first, then do the telnet fetch inside a chroot
                "dhclient vtnet0<enter>",
                "<wait5>",
                "chroot /mnt<enter>",
                "<wait1>",
                "telnet {{ .HTTPIP }} {{ .HTTPPort }} | sed '1,/^$/d' | tee /cf/conf/config.xml<enter>",
                "GET /config.xml HTTP/1.0<enter><enter>",
                "<wait1>",
                "passwd root<enter>",
                "opnsense<enter>",
                "opnsense<enter>",
                "<wait1>",
                "exit<enter>",
                "<wait1>",
                # Shut down to complete provisioning
                "poweroff<enter>",
            ],
        }
    ],
    "provisioners": [],
    "post-processors": [],
}

with open(f"{packer_temp_dir}/build.json", "w") as fh:
    json.dump(builder, fh)

# Set the hostname and domain if vm_fqdn is set
if self.vm_data["script_arguments"].get("vm_fqdn") is not None:
    pfsense_hostname = self.vm_data["script_arguments"]["vm_fqdn"].split(".")[0]
    pfsense_domain = ".".join(
        self.vm_data["script_arguments"]["vm_fqdn"].split(".")[1:]
    )
else:
    pfsense_hostname = self.vm_name
    pfsense_domain = ""

# Output the pfSense configuration
# This is a default configuration with the serial console enabled and with our WAN
# interface pre-configured via the provided script arguments.
pfsense_config = """<?xml version="1.0"?>
<pfsense>
    <version>21.7</version>
    <lastchange></lastchange>
    <system>
        <optimization>normal</optimization>
        <hostname>{pfsense_hostname}</hostname>
        <domain>{pfsense_domain}</domain>
        <dnsserver></dnsserver>
        <dnsallowoverride></dnsallowoverride>
        <group>
            <name>all</name>
            <description><![CDATA[All Users]]></description>
            <scope>system</scope>
            <gid>1998</gid>
            <member>0</member>
        </group>
        <group>
            <name>admins</name>
            <description><![CDATA[System Administrators]]></description>
            <scope>system</scope>
            <gid>1999</gid>
            <member>0</member>
            <priv>page-all</priv>
        </group>
        <user>
            <name>admin</name>
            <descr><![CDATA[System Administrator]]></descr>
            <scope>system</scope>
            <groupname>admins</groupname>
            <bcrypt-hash>$2b$10$13u6qwCOwODv34GyCMgdWub6oQF3RX0rG7c3d3X4JvzuEmAXLYDd2</bcrypt-hash>
            <uid>0</uid>
            <priv>user-shell-access</priv>
        </user>
        <nextuid>2000</nextuid>
        <nextgid>2000</nextgid>
        <timeservers>2.pfsense.pool.ntp.org</timeservers>
        <webgui>
            <protocol>http</protocol>
            <loginautocomplete></loginautocomplete>
            <port></port>
            <max_procs>2</max_procs>
        </webgui>
        <disablenatreflection>yes</disablenatreflection>
        <disablesegmentationoffloading></disablesegmentationoffloading>
        <disablelargereceiveoffloading></disablelargereceiveoffloading>
        <ipv6allow></ipv6allow>
        <maximumtableentries>400000</maximumtableentries>
        <powerd_ac_mode>hadp</powerd_ac_mode>
        <powerd_battery_mode>hadp</powerd_battery_mode>
        <powerd_normal_mode>hadp</powerd_normal_mode>
        <bogons>
            <interval>monthly</interval>
        </bogons>
        <hn_altq_enable></hn_altq_enable>
        <already_run_config_upgrade></already_run_config_upgrade>
        <ssh>
            <enable>enabled</enable>
        </ssh>
        <enableserial></enableserial>
        <serialspeed>115200</serialspeed>
        <primaryconsole>serial</primaryconsole>
        <sshguard_threshold></sshguard_threshold>
        <sshguard_blocktime></sshguard_blocktime>
        <sshguard_detection_time></sshguard_detection_time>
        <sshguard_whitelist></sshguard_whitelist>
    </system>
""".format(
    pfsense_hostname=pfsense_hostname,
    pfsense_domain=pfsense_domain,
)

if self.vm_data["script_arguments"].get("pfsense_wan_dhcp") is not None:
    pfsense_config += """
    <interfaces>
        <wan>
            <enable></enable>
            <if>{wan_iface}</if>
            <mtu></mtu>
            <ipaddr>dhcp</ipaddr>
            <ipaddrv6>slaac</ipaddrv6>
            <subnet></subnet>
            <gateway></gateway>
            <blockbogons></blockbogons>
            <dhcphostname></dhcphostname>
            <media></media>
            <mediaopt></mediaopt>
            <dhcp6-duid></dhcp6-duid>
            <dhcp6-ia-pd-len>0</dhcp6-ia-pd-len>
        </wan>
    </interfaces>
    <gateways>
    </gateways>
    """.format(
        wan_iface=self.vm_data["script_arguments"]["pfsense_wan_iface"],
    )
else:
    pfsense_config += """
    <interfaces>
        <wan>
            <enable></enable>
            <if>{wan_iface}</if>
            <mtu></mtu>
            <ipaddr>{wan_ipaddr}</ipaddr>
            <ipaddrv6>slaac</ipaddrv6>
            <subnet>{wan_netmask}</subnet>
            <gateway>WAN</gateway>
            <blockbogons></blockbogons>
            <dhcphostname></dhcphostname>
            <media></media>
            <mediaopt></mediaopt>
            <dhcp6-duid></dhcp6-duid>
            <dhcp6-ia-pd-len>0</dhcp6-ia-pd-len>
        </wan>
    </interfaces>
    <gateways>
        <gateway_item>
            <interface>wan</interface>
            <gateway>{wan_gateway}</gateway>
            <name>WAN</name>
            <weight>1</weight>
            <ipprotocol>inet</ipprotocol>
            <descr/>
        </gateway_item>
    </gateways>
    """.format(
        wan_iface=self.vm_data["script_arguments"]["pfsense_wan_iface"],
        wan_ipaddr=self.vm_data["script_arguments"][
            "pfsense_wan_address"
        ].split("/")[0],
        wan_netmask=self.vm_data["script_arguments"][
            "pfsense_wan_address"
        ].split("/")[1],
        wan_gateway=self.vm_data["script_arguments"]["pfsense_wan_gateway"],
    )

pfsense_config += """
    <staticroutes></staticroutes>
    <dhcpd></dhcpd>
    <dhcpdv6></dhcpdv6>
    <snmpd>
        <syslocation></syslocation>
        <syscontact></syscontact>
        <rocommunity>public</rocommunity>
    </snmpd>
    <diag>
        <ipv6nat>
            <ipaddr></ipaddr>
        </ipv6nat>
    </diag>
    <syslog>
        <filterdescriptions>1</filterdescriptions>
    </syslog>
    <filter>
        <rule>
            <type>pass</type>
            <ipprotocol>inet</ipprotocol>
            <descr><![CDATA[Default allow LAN to any rule]]></descr>
            <interface>lan</interface>
            <tracker>0100000101</tracker>
            <source>
                <network>lan</network>
            </source>
            <destination>
                <any></any>
            </destination>
        </rule>
        <rule>
            <type>pass</type>
            <ipprotocol>inet6</ipprotocol>
            <descr><![CDATA[Default allow LAN IPv6 to any rule]]></descr>
            <interface>lan</interface>
            <tracker>0100000102</tracker>
            <source>
                <network>lan</network>
            </source>
            <destination>
                <any></any>
            </destination>
        </rule>
        <rule>
            <type>pass</type>
            <ipprotocol>inet</ipprotocol>
            <descr><![CDATA[Default allow WAN to any rule - REMOVE ME AFTER CREATING LAN/OTHER WAN RULES]]></descr>
            <interface>wan</interface>
            <tracker>0100000103</tracker>
            <source>
                <network>wan</network>
            </source>
            <destination>
                <any></any>
            </destination>
        </rule>
        <rule>
            <type>pass</type>
            <ipprotocol>inet6</ipprotocol>
            <descr><![CDATA[Default allow WAN IPv6 to any rule - REMOVE ME AFTER CREATING LAN/OTHER WAN RULES]]></descr>
            <interface>wan</interface>
            <tracker>0100000104</tracker>
            <source>
                <network>wan</network>
            </source>
            <destination>
                <any></any>
            </destination>
        </rule>
    </filter>
    <ipsec>
        <vtimaps></vtimaps>
    </ipsec>
    <aliases></aliases>
    <proxyarp></proxyarp>
    <cron>
        <item>
            <minute>*/1</minute>
            <hour>*</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/usr/sbin/newsyslog</command>
        </item>
        <item>
            <minute>1</minute>
            <hour>3</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/etc/rc.periodic daily</command>
        </item>
        <item>
            <minute>15</minute>
            <hour>4</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>6</wday>
            <who>root</who>
            <command>/etc/rc.periodic weekly</command>
        </item>
        <item>
            <minute>30</minute>
            <hour>5</hour>
            <mday>1</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/etc/rc.periodic monthly</command>
        </item>
        <item>
            <minute>1,31</minute>
            <hour>0-5</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/usr/bin/nice -n20 adjkerntz -a</command>
        </item>
        <item>
            <minute>1</minute>
            <hour>3</hour>
            <mday>1</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/usr/bin/nice -n20 /etc/rc.update_bogons.sh</command>
        </item>
        <item>
            <minute>1</minute>
            <hour>1</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/usr/bin/nice -n20 /etc/rc.dyndns.update</command>
        </item>
        <item>
            <minute>*/60</minute>
            <hour>*</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/usr/bin/nice -n20 /usr/local/sbin/expiretable -v -t 3600 virusprot</command>
        </item>
        <item>
            <minute>30</minute>
            <hour>12</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/usr/bin/nice -n20 /etc/rc.update_urltables</command>
        </item>
        <item>
            <minute>1</minute>
            <hour>0</hour>
            <mday>*</mday>
            <month>*</month>
            <wday>*</wday>
            <who>root</who>
            <command>/usr/bin/nice -n20 /etc/rc.update_pkg_metadata</command>
        </item>
    </cron>
    <wol></wol>
    <rrd>
        <enable></enable>
    </rrd>
    <widgets>
        <sequence>system_information:col1:show,netgate_services_and_support:col2:show,interfaces:col2:show</sequence>
        <period>10</period>
    </widgets>
    <openvpn></openvpn>
    <dnshaper></dnshaper>
    <unbound>
        <enable></enable>
        <dnssec></dnssec>
        <active_interface></active_interface>
        <outgoing_interface></outgoing_interface>
        <custom_options></custom_options>
        <hideidentity></hideidentity>
        <hideversion></hideversion>
        <dnssecstripped></dnssecstripped>
    </unbound>
    <ppps></ppps>
    <shaper></shaper>
</pfsense>
"""

with open(f"{packer_temp_dir}/http/config.xml", "w") as fh:
    fh.write(pfsense_config)

# Create the disk(s)
print("Creating volumes")
for volume in self.vm_data["volumes"]:
    with open_zk(config) as zkhandler:
        success, message = pvc_ceph.add_volume(
            zkhandler,
            volume["pool"],
            f"{self.vm_name}_{volume['disk_id']}",
            f"{volume['disk_size_gb']}G",
        )
        print(message)
        if not success:
            raise ProvisioningError(
                f"Failed to create volume '{volume['disk_id']}'."
            )

# Map the target RBD volumes
print("Mapping volumes")
for volume in self.vm_data["volumes"]:
    dst_volume_name = f"{self.vm_name}_{volume['disk_id']}"
    dst_volume = f"{volume['pool']}/{dst_volume_name}"

    with open_zk(config) as zkhandler:
        success, message = pvc_ceph.map_volume(
            zkhandler,
            volume["pool"],
            dst_volume_name,
        )
        print(message)
        if not success:
            raise ProvisioningError(f"Failed to map volume '{dst_volume}'.")

def install(self):
    """
    install(): Perform the installation
    """

    # Run any imports first
    import os
    import time

    packer_temp_dir = "/tmp/packer"

    print(
        f"Running Packer: PACKER_LOG=1 PACKER_CONFIG_DIR={packer_temp_dir} PACKER_CACHE_DIR={packer_temp_dir} {packer_temp_dir}/packer build {packer_temp_dir}/build.json"
    )
    os.system(
        f"PACKER_LOG=1 PACKER_CONFIG_DIR={packer_temp_dir} PACKER_CACHE_DIR={packer_temp_dir} {packer_temp_dir}/packer build {packer_temp_dir}/build.json"
    )

    if not os.path.exists(f"{packer_temp_dir}/bin/{self.vm_name}"):
        raise ProvisioningError("Packer failed to build output image")

    print("Copying output image to first volume")
    first_volume = self.vm_data["volumes"][0]
    dst_volume_name = f"{self.vm_name}_{first_volume['disk_id']}"
    dst_volume = f"{first_volume['pool']}/{dst_volume_name}"
    os.system(
        f"dd if={packer_temp_dir}/bin/{self.vm_name} of=/dev/rbd/{dst_volume} bs=1M status=progress"
    )

def cleanup(self):
    """
    cleanup(): Perform any cleanup required due to prepare()/install()

    This function is also called if there is ANY exception raised in the prepare()
    or install() steps. While this doesn't mean you shouldn't or can't raise exceptions
    here, be warned that doing so might cause loops. Do this only if you really need to.
    """

    # Run any imports first
    from pvcapid.vmbuilder import open_zk
    from pvcapid.Daemon import config
    import daemon_lib.ceph as pvc_ceph

    # Use this construct to reverse the list, as the normal reverse() mutates the list in place
    for volume in list(reversed(self.vm_data["volumes"])):
        dst_volume_name = f"{self.vm_name}_{volume['disk_id']}"
        dst_volume = f"{volume['pool']}/{dst_volume_name}"
        mapped_dst_volume = f"/dev/rbd/{dst_volume}"

        # Unmap volume
        with open_zk(config) as zkhandler:
            success, message = pvc_ceph.unmap_volume(
                zkhandler,
                volume["pool"],
                dst_volume_name,
            )
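The reversal comment in cleanup() deserves a concrete illustration: `list.reverse()` mutates the list held in `self.vm_data` in place, destroying the original creation order, while `list(reversed(...))` builds an independent copy. A minimal sketch, using hypothetical volume names in place of `self.vm_data["volumes"]`:

```python
# Sketch of the reversal idiom used in cleanup() above; the names below are
# hypothetical stand-ins for the real volume list.
volumes = ["disk0", "disk1", "disk2"]

# list.reverse() would mutate the list itself, losing the creation order:
mutated = volumes.copy()
mutated.reverse()

# list(reversed(...)) builds a new list and leaves the source untouched, so
# teardown can walk reverse creation order without side effects:
teardown_order = list(reversed(volumes))

print(volumes)         # unchanged: ['disk0', 'disk1', 'disk2']
print(teardown_order)  # ['disk2', 'disk1', 'disk0']
```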
@@ -27,7 +27,7 @@ from ssl import SSLContext, TLSVersion
 from distutils.util import strtobool as dustrtobool
 
 # Daemon version
-version = "0.9.55"
+version = "0.9.56"
 
 # API version
 API_VERSION = 1.0
@@ -168,14 +168,16 @@ def delete_ova(zkhandler, name):
 
 @ZKConnection(config)
 def upload_ova(zkhandler, pool, name, ova_size):
-    # Check that we have a default_ova provisioning script
-    _, retcode = provisioner.list_script("default_ova", is_fuzzy=False)
-    if retcode != "200":
-        output = {
-            "message": "Did not find a 'default_ova' provisioning script. Please add one with that name, either the example from '/usr/share/pvc/provisioner/examples/script/2-ova.py' or a custom one, before uploading OVAs."
-        }
-        retcode = 400
-        return output, retcode
+    # Check that we have an ova or default_ova provisioning script
+    _, retcode = provisioner.list_script("ova", is_fuzzy=False)
+    if retcode != 200:
+        _, retcode = provisioner.list_script("default_ova", is_fuzzy=False)
+        if retcode != 200:
+            output = {
+                "message": "Did not find an 'ova' or 'default_ova' provisioning script. Please add one with one of those names, either the example from '/usr/share/pvc/provisioner/examples/script/2-ova.py' or a custom one, before uploading OVAs."
+            }
+            retcode = 400
+            return output, retcode
 
     ova_archive = None
 
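The hunk above replaces a single hard-coded script lookup with a try-the-preferred-name-then-fall-back pattern. A generic sketch of that pattern, using a plain dict as a hypothetical stand-in for `provisioner.list_script()`:

```python
# Sketch of the lookup-with-fallback pattern from the upload_ova() change
# above; `registry` and the helper name are illustrative assumptions, not
# part of the PVC API.
def find_script(registry, names=("ova", "default_ova")):
    """Return the first script found under the candidate names, else None."""
    for name in names:
        if name in registry:
            return registry[name]
    return None

# Only the legacy name exists, so the lookup falls back to it:
registry = {"default_ova": "example OVA script body"}
print(find_script(registry))  # example OVA script body

# Neither name exists, mirroring the error path in upload_ova():
print(find_script({}))  # None
```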
@@ -447,13 +447,18 @@ def create_vm(
     # Verify that every specified filesystem is valid
     used_filesystems = list()
     for volume in vm_data["volumes"]:
-        if volume["source_volume"] is not None:
+        if volume.get("source_volume") is not None:
             continue
-        if volume["filesystem"] and volume["filesystem"] not in used_filesystems:
+        if (
+            volume.get("filesystem") is not None
+            and volume["filesystem"] not in used_filesystems
+        ):
             used_filesystems.append(volume["filesystem"])
 
     for filesystem in used_filesystems:
-        if filesystem == "swap":
+        if filesystem is None or filesystem == "None":
+            continue
+        elif filesystem == "swap":
             retcode, stdout, stderr = pvc_common.run_os_command("which mkswap")
             if retcode:
                 raise ProvisioningError(
@@ -555,6 +560,15 @@ def create_vm(
                 f"Failed to mount sysfs onto {temp_dir}/sys for chroot: {stderr}"
             )
 
+        # Bind mount /proc to the chroot location /proc
+        retcode, stdout, stderr = pvc_common.run_os_command(
+            f"mount --bind --options rw /proc {temp_dir}/proc"
+        )
+        if retcode:
+            raise ProvisioningError(
+                f"Failed to mount procfs onto {temp_dir}/proc for chroot: {stderr}"
+            )
+
         print("Chroot environment prepared successfully")
 
     def general_cleanup():
@@ -573,6 +587,10 @@ def create_vm(
         retcode, stdout, stderr = pvc_common.run_os_command(
             f"umount {temp_dir}/sys"
         )
+        # Unmount bind-mounted procfs on the chroot
+        retcode, stdout, stderr = pvc_common.run_os_command(
+            f"umount {temp_dir}/proc"
+        )
         # Unmount bind-mounted tmpfs on the chroot
         retcode, stdout, stderr = pvc_common.run_os_command(
             f"umount {temp_dir}/tmp"
@@ -2,7 +2,7 @@ from setuptools import setup
 
 setup(
     name="pvc",
-    version="0.9.55",
+    version="0.9.56",
     packages=["pvc", "pvc.cli_lib"],
     install_requires=[
         "Click",
debian/changelog
@@ -1,3 +1,13 @@
+pvc (0.9.56-0) unstable; urgency=high
+
+  * [API/Provisioner] Fundamentally revamps the provisioner script framework to provide more extensibility (BREAKING CHANGE)
+  * [API/Provisioner] Adds example provisioner scripts for noop, ova, debootstrap, rinse, and pfsense (BREAKING CHANGE)
+  * [API/Provisioner] Enforces the use of the ova provisioner script during new OVA uploads; existing uploads will not work (BREAKING CHANGE)
+  * [Documentation] Updates the documentation around provisioner scripts and OVAs to reflect the above changes
+  * [Node] Adds a new pvcautoready.service oneshot unit to replicate the on-boot-ready functionality of the old pvc-flush.service unit
+
+ -- Joshua M. Boniface <joshua@boniface.me>  Thu, 27 Oct 2022 14:19:18 -0400
+
 pvc (0.9.55-0) unstable; urgency=high
 
   * Fixes a problem with the literal eval handler in the provisioner (again)
debian/pvc-daemon-node.install
@@ -3,4 +3,5 @@ node-daemon/pvcnoded.sample.yaml etc/pvc
 node-daemon/pvcnoded usr/share/pvc
 node-daemon/pvcnoded.service lib/systemd/system
 node-daemon/pvc.target lib/systemd/system
+node-daemon/pvcautoready.service lib/systemd/system
 node-daemon/monitoring usr/share/pvc
debian/pvc-daemon-node.postinst
@@ -5,6 +5,7 @@ systemctl daemon-reload
 
 # Enable the service and target
 systemctl enable /lib/systemd/system/pvcnoded.service
+systemctl enable /lib/systemd/system/pvcautoready.service
 systemctl enable /lib/systemd/system/pvc.target
 
 # Inform administrator of the service restart/startup not occurring automatically
@@ -173,9 +173,9 @@ basic-ssh 11 Content-Type: text/cloud-config; charset="us-ascii"
 
 ## Provisioning Scripts
 
-The PVC provisioner provides a scripting framework in order to automate VM installation. This is generally the most useful with UNIX-like systems which can be installed over the network via shell scripts. For instance, the script might install a Debian VM using `debootstrap` or a Red Hat VM using `rpmstrap`. The PVC Ansible system will automatically install `debootstrap` on coordinator nodes, to allow out-of-the-box deployment of Debian-based VMs with `debootstrap` and the example script shipped with PVC (see below); any other deployment tool must be installed separately onto all PVC coordinator nodes, or fetched by the script itself (with caveats as noted below).
+The PVC provisioner provides a scripting framework in order to automate VM installation. This is generally the most useful with UNIX-like systems which can be installed over the network via shell scripts. For instance, the script might install a Debian VM using `debootstrap`, which is automatically installed by default. However, all deployment profiles require some provisioning script, minimally to craft their Libvirt configuration.
 
-Several example scripts are provided in the `/usr/share/pvc/provisioner/examples/scripts` directory of all PVC hypervisors, and these are imported by the provisioner system by default on install to help get you started. You are of course free to modify or extend these as you wish, or write your own based on them to suit your needs.
+Several example scripts are provided in the `/usr/share/pvc/provisioner/examples/scripts` directory of all PVC hypervisors. These can be imported into the provisioner system as-is to help get you started, or you are of course free to modify or extend these as you wish, or write your own based on them to suit your needs.
 
 Provisioner scripts are written in Python 3 and are implemented as a class, `VMBuilderScript`, which extends the built-in `VMBuilder` class, for example:
 
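The class shape described above can be sketched roughly as follows. Note this is an illustrative skeleton, not the shipped example: the real `VMBuilder` base class is provided by `pvcapid.vmbuilder` at runtime, so a stand-in base is defined here, and the method bodies are assumed placeholders.

```python
# Stand-in for the real VMBuilder base class from pvcapid.vmbuilder,
# defined here only so the sketch is self-contained.
class VMBuilder:
    def __init__(self, vm_name, vm_data):
        self.vm_name = vm_name
        self.vm_data = vm_data


class VMBuilderScript(VMBuilder):
    def setup(self):
        # Verify any required tools are present before proceeding
        pass

    def create(self):
        # Return the Libvirt XML configuration for the new VM
        # (a trivial placeholder document here)
        return f"<domain><name>{self.vm_name}</name></domain>"

    def prepare(self):
        # Create and map storage volumes, fetch installer media, etc.
        pass

    def install(self):
        # Perform the actual OS installation into the prepared volumes
        pass

    def cleanup(self):
        # Unmap volumes and remove any temporary artifacts; also called
        # on failure of prepare() or install()
        pass
```

A profile's script is instantiated with the VM's name and data, and the provisioner worker calls these methods in order during a build.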
@@ -368,7 +368,7 @@ af1d0682-53e8-4141-982f-f672e2f23261 active celery@pvchv1 test1 std-lar
 The `--wait` option can be given to the create command. This will cause the command to block and provide a visual progress indicator while the provisioning occurs.
 
 ```
-$ pvc provisioner create test2 std-large
+$ pvc provisioner create --wait test2 std-large
 Using cluster "local" - Host: "10.0.0.1:7370" Scheme: "http" Prefix: "/api/v1"
 
 Task ID: 94abb7fe-41f5-42be-b984-de92854f4b3f

@@ -418,7 +418,7 @@ Once the OVA is uploaded to the cluster with the `pvc provisioner ova upload` co
 
 ## The OVA Provisioner Script
 
-OVA installs leverage a special provisioner script to handle the VM creation, identical to any other provisioner profile type. This (example) script is installed by default and used by OVAs by default, though the script that an individual OVA profile uses can be modified as required.
+OVA installs leverage a special provisioner script to handle the VM creation, identical to any other provisioner profile type. This (example) script, or a replacement, must be installed prior to uploading an OVA, and handles the actual VM configuration creation and cloning of the OVA volumes.
 
 ## OVA limitations
 
node-daemon/pvcautoready.service (new file)
@@ -0,0 +1,19 @@
+# Parallel Virtual Cluster autoready oneshot
+
+[Unit]
+Description = Parallel Virtual Cluster autoready oneshot
+After = pvcnoded.service pvcapid.service zookeeper.service libvirtd.service ssh.service ceph.target network-online.target
+Wants = pvcnoded.service pvcapid.service
+PartOf = pvc.target
+ConditionPathExists=/etc/pvc/autoready
+
+[Service]
+Type = oneshot
+RemainAfterExit = false
+WorkingDirectory = /usr/share/pvc
+TimeoutSec = 31min
+ExecStartPre = /bin/sleep 60
+ExecStart = /usr/bin/pvc -c local node ready --wait
+
+[Install]
+WantedBy = pvc.target
@@ -48,7 +48,7 @@ import re
 import json
 
 # Daemon version
-version = "0.9.55"
+version = "0.9.56"
 
 
 ##########################################################