Add additional info about OVA deployment

Joshua Boniface 2020-03-15 17:31:12 -04:00
parent 4fe3a73980
commit 616d7c43ed
4 changed files with 18 additions and 121 deletions


@ -32,7 +32,7 @@ Within each node, the PVC daemon is a single Python 3 program which handles all
The daemon uses an object-oriented approach, with most cluster objects being represented by class objects of a specific type. Each node has a full view of all cluster objects and can interact with them based on events from the cluster as needed.
Further information about the node daemon can be found at the [daemon manual page](/manuals/daemon).
## Client Architecture
@ -58,9 +58,7 @@ The CLI client is self-documenting using the `-h`/`--help` arguments, though a s
The overall management, deployment, bootstrapping, and configuring of nodes is accomplished via a set of Ansible roles, found in the [`pvc-ansible` repository](https://github.com/parallelvirtualcluster/pvc-ansible), and nodes are installed via a custom installer ISO generated by the [`pvc-installer` repository](https://github.com/parallelvirtualcluster/pvc-installer). Once the cluster is set up, nodes can be added, replaced, or updated using this Ansible framework.
The Ansible configuration and architecture manual can be found at the [Ansible manual page](/manuals/ansible).
## About the author


@ -11,11 +11,11 @@
PVC is a KVM+Ceph-based, Free Software, scalable, redundant, self-healing, and self-managing private cloud solution designed with administrator simplicity in mind. It is built from the ground-up to be redundant at the host layer, allowing the cluster to gracefully handle the loss of nodes or their components, whether due to hardware failure or maintenance. It is able to scale from a minimum of 3 nodes up to 12 or more nodes, while retaining performance and flexibility, allowing the administrator to build a small cluster today and grow it as needed.
The major goal of PVC is to be administrator-friendly, providing the power of enterprise-grade private clouds like OpenStack, Nutanix, and VMware to homelabbers, SMBs, and small ISPs, without the cost or complexity. It believes in picking the best tool for a job and abstracting it behind the cluster as a whole, freeing the administrator from the boring and time-consuming task of selecting the best component, and letting them get on with the things that really matter. Administration can be done from a simple CLI or via a RESTful API capable of building full-featured web frontends or additional applications, taking a self-documenting approach to keep the administrator learning curve as low as possible. Setup is easy and straightforward with an [ISO-based node installer](https://github.com/parallelvirtualcluster/pvc-installer) and [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) designed to get a cluster up and running as quickly as possible. Build your cloud in an hour, grow it as you need, and never worry about it: just add physical servers.
## Getting Started
To get started with PVC, read the [Cluster Architecture document](/architecture/cluster), then see [Installing](/installing) for details on setting up the initial PVC nodes, using [`pvc-ansible`](/manuals/ansible) to configure and bootstrap a cluster, and managing it with the [`pvc` CLI](/manuals/cli) or [HTTP API](/manuals/api). For details on the project, its motivation, and its architecture, see [the About page](/about).
## Changelog


@ -6,6 +6,8 @@ This guide will walk you through setting up a simple 3-node PVC cluster from scr
### Part One - Preparing for bootstrap
0. Read through the [Cluster Architecture documentation](/architecture/cluster). This documentation details the requirements and conventions of a PVC cluster, and is important to understand before proceeding.
0. Download the latest copy of the [`pvc-installer`](https://github.com/parallelvirtualcluster/pvc-installer) and [`pvc-ansible`](https://github.com/parallelvirtualcluster/pvc-ansible) repositories to your local machine.
0. In `pvc-ansible`, create an initial `hosts` inventory, using `hosts.default` as a template. You can manage multiple PVC clusters ("sites") from the Ansible repository easily; however, for simplicity you can use the name `cluster` for your initial site. Define the 3 hostnames you will use under the site group; usually the provided names of `pvchv1`, `pvchv2`, and `pvchv3` are sufficient, though you may use any hostname pattern you wish. It is *very important* that the names all contain a sequential number, however, as this is used by various components.
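For illustration, a minimal single-site inventory might look like the following sketch, which assumes the site group is named `cluster` and the default hostnames are used; treat `hosts.default` in your copy of `pvc-ansible` as the authoritative template:
```
# Hypothetical minimal inventory sketch; adjust the group name and hostnames to match your site
[cluster]
pvchv1
pvchv2
pvchv3
```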
@ -124,122 +126,11 @@ All steps in this and following sections can be performed using either the CLI c
0. Verify the client networks are reachable by pinging the managed gateway from outside the cluster.
### Part Six - Setting nodes ready and deploying a VM
This section walks through deploying a simple Debian VM to the cluster with Debootstrap. Note that as of PVC version `0.5`, this is still a manual process, though automated deployment of VMs based on configuration templates and image snapshots is planned for version `0.6`. This section can be used as a basis for a scripted installer, or a manual process as the administrator sees fit.
0. Set all 3 nodes to `ready` state, allowing them to run virtual machines. The general command is:
`$ pvc node ready <node>`
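For example, using the three example hostnames defined in Part One (substitute your own hostnames if they differ):
`$ pvc node ready pvchv1`
`$ pvc node ready pvchv2`
`$ pvc node ready pvchv3`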
0. Create an RBD image for the VM. The general command is:
`$ pvc storage volume add <pool> <name> <size>`
For example, to create a 20GB disk for a VM called `test1` in the previously-configured pool `vms`, run the command as follows:
`$ pvc storage volume add vms test1_disk0 20G`
0. Verify the RBD image was created:
`$ pvc storage volume list`
0. On one of the PVC nodes, for example `pvchv1`, map the RBD volume to the local system:
`$ rbd map vms/test1_disk0`
The resulting disk device will be available at `/dev/rbd/vms/test1_disk0` or `/dev/rbd0`.
0. Create a filesystem on the block device, for example `ext4`:
`$ mkfs -t ext4 /dev/rbd/vms/test1_disk0`
0. Create a temporary directory and mount the block device to it; since `mktemp` generates a random directory name, use `mount` afterwards to confirm the resulting mountpoint:
`$ mount /dev/rbd/vms/test1_disk0 $( mktemp -d )`
`$ mount | grep rbd`
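As an optional convenience, and purely as a sketch (the `TMPDIR` variable name is arbitrary), the mountpoint can instead be captured in a shell variable so it is easy to reuse as the `<temporary_mountpoint>` in the following steps:
`$ TMPDIR=$( mktemp -d )`
`$ mount /dev/rbd/vms/test1_disk0 ${TMPDIR}`
`$ echo ${TMPDIR}`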
0. Run a `debootstrap` installation to the volume:
`$ debootstrap buster <temporary_mountpoint> http://deb.debian.org/debian`
0. Bind mount the various required directories to the new system:
`$ mount --bind /dev <temporary_mountpoint>/dev`
`$ mount --bind /dev/pts <temporary_mountpoint>/dev/pts`
`$ mount --bind /proc <temporary_mountpoint>/proc`
`$ mount --bind /sys <temporary_mountpoint>/sys`
`$ mount --bind /run <temporary_mountpoint>/run`
0. Using `chroot`, configure the VM system as required, for instance installing packages or adding users:
`$ chroot <temporary_mountpoint>`
`[chroot]$ ...`
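For instance, a minimal illustrative configuration might set a root password, install an SSH server, and write a basic hostname and `/etc/fstab`; these commands are only an example of what a typical guest needs, and the `/dev/sda` root device assumes the disk is used unpartitioned as in the steps above:
`[chroot]$ passwd root`
`[chroot]$ apt update`
`[chroot]$ apt install openssh-server`
`[chroot]$ echo "test1" > /etc/hostname`
`[chroot]$ echo "/dev/sda / ext4 defaults 0 1" > /etc/fstab`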
0. Install the GRUB bootloader in the VM system, and install GRUB to the RBD device:
`[chroot]$ apt install grub-pc`
`[chroot]$ grub-install /dev/rbd/vms/test1_disk0`
0. Exit the `chroot` environment, unmount the temporary mountpoint, and unmap the RBD device:
`[chroot]$ exit`
`$ umount <temporary_mountpoint>`
`$ rbd unmap /dev/rbd0`
0. Prepare a Libvirt XML configuration, obtaining the required Ceph storage secret and a new random VM UUID first. This example provides a very simple VM with 1 vCPU, 1GB RAM, the previously-configured network `100`, and the previously-configured disk `vms/test1_disk0`:
`$ virsh secret-list`
`$ uuidgen`
`$ $EDITOR /tmp/test1.xml`
```
<domain type='kvm'>
<name>test1</name>
<uuid>[INSERT GENERATED UUID]</uuid>
<description>Testing VM</description>
<memory unit='MiB'>1024</memory>
<vcpu>1</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<controller type='usb' index='0'/>
<controller type='pci' index='0' model='pci-root'/>
<serial type='pty'/>
<console type='pty'/>
<disk type='network' device='disk'>
<driver name='qemu' discard='unmap'/>
<auth username='libvirt'>
<secret type='ceph' uuid='[INSERT CEPH STORAGE SECRET]'/>
</auth>
<source protocol='rbd' name='vms/test1_disk0'>
<host name='[INSERT FIRST COORDINATOR CLUSTER NETWORK FQDN]' port='6789'/>
<host name='[INSERT SECOND COORDINATOR CLUSTER NETWORK FQDN]' port='6789'/>
<host name='[INSERT THIRD COORDINATOR CLUSTER NETWORK FQDN]' port='6789'/>
</source>
<target dev='sda' bus='scsi'/>
</disk>
<interface type='bridge'>
<mac address='52:54:00:12:34:56'/>
<source bridge='vmbr100'/>
<model type='virtio'/>
</interface>
<controller type='scsi' index='0' model='virtio-scsi'/>
</devices>
</domain>
```
**NOTE:** This Libvirt XML is only a sample; it should be modified to fit the specifics of the VM. As an alternative to manual configuration, one can use a tool like `virt-manager` to generate valid Libvirt XML configurations for PVC to use.
0. Define the VM in the PVC cluster:
`$ pvc vm define /tmp/test1.xml`
0. Verify the VM is present in the cluster:
`$ pvc vm info test1`
0. Start the VM and watch the console log:
`$ pvc vm start test1`
`$ pvc vm log -f test1`
If all has gone well until this point, you should now be able to watch your new VM boot on the cluster, grab DHCP from the managed network, and run away doing its thing. You could now, for instance, move it permanently to another node with the `pvc vm move -t <node> test1` command, or temporarily with the `pvc vm migrate -t <node> test1` command and back again with the `pvc vm unmigrate test1` command.
For more details on what to do next, see the [CLI manual](/manuals/cli) for a full list of management functions, SSH into your new VM, and start provisioning more. Your new private cloud is now here!
### You're Done!
Congratulations, you now have a basic PVC storage cluster, ready to run your VMs.
For next steps, see the [Provisioner manual](/manuals/provisioner) for details on how to use the PVC provisioner to create new Virtual Machines, as well as the [CLI manual](/manuals/cli) and [API manual](/manuals/api) for details on day-to-day usage of PVC.


@ -10,10 +10,18 @@ The purpose of the Provisioner API is to provide a convenient way for administra
The Provisioner allows the administrator to construct descriptions of VMs, called profiles, which include system resource specifications, network interfaces, disks, cloud-init userdata, and installation scripts. These profiles are highly modular, allowing the administrator to specify arbitrary combinations of the mentioned VM features with which to build new VMs.
The provisioner supports creating VMs from installation scripts, by cloning existing volumes, and by uploading OVA image templates to the cluster.
Examples in the following sections use the CLI exclusively for demonstration purposes. For details of the underlying API calls, please see the [API interface reference](/manuals/api-reference.html).
# Deploying VMs from OVA images
PVC supports deploying virtual machines from industry-standard OVA images. OVA images can be uploaded to the cluster with the `pvc provisioner ova` commands, and deployed via the created profile(s) using the `pvc provisioner create` command. Additionally, the profile(s) can be modified to suit your specific needs via the provisioner template system detailed below.
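As a rough sketch of the workflow (the image name, file path, pool, and exact arguments below are assumptions for illustration; consult the `pvc provisioner ova` help output for the authoritative syntax):
```
# Hypothetical example only: names, paths, and arguments are assumptions, not authoritative syntax
$ pvc provisioner ova upload debian10 /tmp/debian10.ova --pool vms
$ pvc provisioner ova list
$ pvc provisioner create test2 debian10
```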
# Deploying VMs from provisioner scripts
PVC supports deploying virtual machines using administrator-provided scripts, with templates, profiles, and Cloud-init userdata controlling the deployment process as desired. This deployment method permits the administrator to deploy POSIX-like systems such as Linux or BSD directly from a companion tool such as `debootstrap`, on demand and with maximum flexibility.
## Templates
The PVC Provisioner features three categories of templates to specify the resources allocated to the virtual machine. They are: System Templates, Network Templates, and Disk Templates.