Correct spelling in all documentation
parent 427ef9454a
commit 48764f2e70

@@ -6,9 +6,9 @@ Server management and system administration have changed significantly in the la

As part of this trend, the rise of IaaS (Infrastructure as a Service) has created an entirely new way for administrators and, increasingly, developers to interact with servers. They need to be able to provision virtual machines easily and quickly, to ensure those virtual machines are reliable and consistent, and to avoid downtime wherever possible.

However, the state of the Free Software virtual management ecosystem in 2019 is quite disappointing. On the one hand are the giant IaaS products like OpenStack and CloudStack. These are massive pieces of software, featuring dozens of interlocking parts, designed for massive clusters and public cloud deployments. They're great for a "hyperscale" provider, a large-scale SaaS/IaaS provider, or an enterprise. But they're not designed for small teams or small clusters. On the other hand, tools like Proxmox, oVirt, and even good old-fashioned shell scripts are barely scalable, are showing their age, and have become increasingly unwieldy for advanced use-cases - great for one server, not so great for 9 in a highly-available cluster. Not to mention the constant attempts to monetize by throwing features behind Enterprise subscriptions. In short, there is a massive gap between the old-style, pet-based virtualization and the modern, large-scale, IaaS-type virtualization.

PVC aims to bridge this gap. As a Python 3-based, fully-Free Software, scalable, and redundant private "cloud" that isn't afraid to say it's for small clusters, PVC is able to provide the simple, easy-to-use, small cluster you need today, with minimal administrator work, while being able to scale as your system grows, supporting hundreds or thousands of VMs across dozens of nodes. High availability is baked right into the core software, giving you peace of mind about your cluster, and ensuring that your systems keep running no matter what happens. And the interface couldn't be easier - a straightforward Click-based CLI and a Flask-based HTTP API provide access to the cluster for you to manage, either directly or through scripts or WebUIs. And since everything is Free Software, you can always inspect it, customize it to your use-case, add features, and contribute back to the community if you so choose.

PVC provides all the features you'd expect of a "cloud" system - easy management of VMs, including live migration between nodes for maximum uptime; virtual networking support using either vLANs or EVPN-based VXLAN; shared, redundant, object-based storage using Ceph; and a convenient API interface for building your own interfaces. It is able to do this without being excessively complex, and without making sacrifices for legacy ideas.

@@ -16,7 +16,7 @@ If you need to run virtual machines, and don't have the time to learn the Stacks

## Cluster Architecture

A PVC cluster is based around "nodes", which are physical servers on which the various daemons, storage, networks, and virtual machines run. Each node is self-contained; it is able to perform any and all cluster functions if needed, and there is no segmentation of function between different types of physical hosts.

A limited number of nodes, called "coordinators", are statically configured to provide additional services for the cluster. All databases, for instance, run on the coordinators but not on other nodes. This prevents any issues with scaling database clusters across dozens of hosts, while still retaining maximum redundancy. In a standard configuration, 3 or 5 nodes are designated as coordinators, and additional nodes connect to the coordinators for database access where required. For quorum purposes, there should always be an odd number of coordinators, and exceeding 5 is likely not required even for large clusters.

@@ -46,7 +46,7 @@ The CLI client manual can be found at the [CLI manual page](/manuals/cli).

### API client

The HTTP API client is a more advanced interface to the PVC cluster, suitable for creating custom interfaces for PVC and providing better access control than the CLI. It is a Python 3 Flask application which also interfaces directly with the Zookeeper cluster, and provides services on port 7370 by default. The API features a basic, key-based authentication mechanism to prevent unauthorized access, though this is optional, and can also provide HTTPS support if required for maximum security over public networks. With the exception of cluster initialization, the API can perform all functions that the CLI client can using a RESTful layout. Requests return JSON, and POST requests expect HTTP form bodies.

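
As a brief illustration of this layout, a resource can be fetched with any standard HTTP client against the default port. The node address and the exact `/api/v1/vm` endpoint path below are assumptions, and `<api_token>` is a placeholder key:

`$ curl -H "X-Api-Key: <api_token>" http://pvc-node1.example.tld:7370/api/v1/vm`
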
Further information about the API client architecture can be found at the [API client architecture page](/architecture/api).

@@ -8,7 +8,7 @@ The Base role configures a node to a specific, standard base Debian system, with

* Installing the custom PVC repository at Boniface Labs.

* Removing several unnecessary packages and installing numerous additional packages.

* Automatically configuring network interfaces based on the `group_vars` configuration.

@@ -16,7 +16,7 @@ The Base role configures a node to a specific, standard base Debian system, with

* Installing and configuring rsyslog, postfix, ntpd, ssh, and fail2ban.

* Creating the users specified in the `group_vars` configuration.

* Installing custom MOTDs, bashrc files, vimrc files, and other useful configurations for each user.

@@ -30,7 +30,7 @@ The PVC role configures all the dependencies of PVC, including storage, networki

* Install, configure, and if `bootstrap=yes` is set, bootstrap a Zookeeper cluster (coordinators only).

* Install, configure, and if `bootstrap=yes` is set, bootstrap a Patroni PostgreSQL cluster for the PowerDNS aggregator (coordinators only).

* Install and configure Libvirt.

@@ -24,11 +24,11 @@ Of this, some amount of CPU and RAM will be used by the storage subsystem and th

## Storage Layout: Ceph and OSDs

The Ceph subsystem of PVC, if enabled, creates a "hyperconverged" setup whereby storage and VM hypervisor functions are co-located onto the same physical servers. The performance of the storage must be taken into account when sizing the nodes as mentioned above.

The Ceph system is laid out similarly to the other daemons. The Ceph Monitor and Manager functions are delegated to the Coordinators over the cluster network, with all nodes connecting to these hosts to obtain the CRUSH maps and select OSD disks. OSDs are then distributed on all hosts, including non-coordinator hypervisors, and communicate with clients over the cluster network and with each other (for replication, failover, etc.) over the storage network.

Without exception for proper redundancy, Ceph pools on the cluster use the `copies=3` `mincopies=2` replication scheme. That is to say, for each 4MB "object" the cluster stores, it will store 3 copies on 3 different nodes; if one copy becomes unavailable, due to node maintenance or failure, the other 2 copies continue to enable read/write access to the cluster; if two copies become unavailable, writes to the cluster will block; however, reads will still proceed from the single remaining copy, allowing recovery. More than 3 nodes running OSD disks increases the resiliency of the cluster; however, object placement is decided at write time and is evenly distributed across the cluster, so even in very large clusters, only 1 node can be down at a time for writes to be guaranteed to succeed.

In this configuration, therefore, each 1MB of storage at the VM layer consumes 3MB (3 copies) of storage at the raw disk layer. Size OSD disks accordingly to ensure sufficient storage space and performance. Future versions of PVC may support more complex Ceph storage layouts, such as `copies=4` `mincopies=2` or multiple-parity Erasure Coding pools.
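
As a rough way to confirm this ratio on a running cluster, the standard Ceph tooling can be queried from a coordinator node (assuming the Ceph admin keyring is available there); the per-pool `MAX AVAIL` figure it reports already accounts for the replication factor:

`$ ceph df`
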

@@ -90,9 +90,9 @@ Generally the cluster network should be completely separate from the upstream ne

The storage network is an unrouted private network used by the PVC node storage OSDs to communicate with each other, without using the main cluster network and introducing potentially large amounts of traffic there.

Nodes in this network are generally assigned IPs automatically based on their node number. The network should be large enough to include all nodes sequentially.

The administrator may choose to co-locate the storage network on the same physical interface as the cluster network, or on a separate physical interface. This should be decided based on the size of the cluster and the perceived ratios of client network versus storage traffic. In large (>3 node) or storage-intensive clusters, this network should generally be a separate set of fast physical interfaces, separate from both the upstream and cluster networks, in order to maximize and isolate the storage bandwidth.

### Bridged (unmanaged) Client Networks

@@ -102,7 +102,7 @@ With this client network type, PVC does no management of the network. This is le

### VXLAN (managed) Client Networks

The second type of client network is the managed VXLAN network. These networks make use of BGP EVPN, managed by route reflection on the coordinators, to create virtual layer 2 Ethernet tunnels between all nodes in the cluster. VXLANs are then run on top of these virtual layer 2 tunnels, with the primary PVC node providing routing, DHCP, and DNS functionality to the network via a single IP address.

With this client network type, PVC is in full control of the network. No vLAN configuration is required on the switchports of each node's cluster network as the virtual layer 2 tunnel travels over the cluster layer 3 network. All client network traffic destined for outside the network will exit via the upstream network of the primary coordinator node; note that this may introduce a bottleneck and tromboning if there is a large amount of external and/or inter-network traffic on the cluster. The administrator should consider this carefully when sizing the cluster network.

@@ -150,11 +150,11 @@ This section provides diagrams of 3 possible node configurations, providing an i

*Above: A diagram of a simple 3-node cluster; all nodes are coordinators, single 1Gbps network interface per node, collapsed cluster and storage networks*

#### Mid-sized 8-node cluster with 3 coordinators

![8-node cluster](/images/8-node-cluster.png)

*Above: A diagram of a mid-sized 8-node cluster with 3 coordinators, dual bonded 10Gbps network interfaces per node*

#### Large 17-node cluster with 5 coordinators

@@ -14,7 +14,7 @@ During startup, the system scans the Zookeeper database and sets up the required

## Startup sequence

The daemon startup sequence is documented below. The main daemon entry-point is `Daemon.py` inside the `pvcd` folder, which is called from the `pvcd.py` stub file.

0. The configuration is read from `/etc/pvc/pvcd.yaml` and the configuration object set up.

@@ -36,7 +36,7 @@ The daemon startup sequence is documented below. The main daemon entrypoint is `

0. The node checks if Libvirt is accessible.

0. The node starts up the NFT firewall if applicable and configures the base rule-set.

0. The node ensures that `dnsmasq` is stopped (legacy check, might be safe to remove eventually).

@@ -4,7 +4,7 @@ PVC aims to be easy to deploy, letting you get on with managing your cluster in

This guide will walk you through setting up a simple 3-node PVC cluster from scratch, ending with a fully-usable cluster ready to provision virtual machines. Note that all domains, IP addresses, etc. used are examples - when following this guide, be sure to modify the commands and configurations to suit your needs.

### Part One - Preparing for bootstrap

0. Download the latest copy of the [`pvc-installer`](https://github.com/parallelvirtualcluster/pvc-installer) and [`pvc-ansible`](https://github.com/parallelvirtualcluster/pvc-ansible) repositories to your local machine.
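
For example, both repositories can be cloned to your local machine with `git`:

`$ git clone https://github.com/parallelvirtualcluster/pvc-installer.git`

`$ git clone https://github.com/parallelvirtualcluster/pvc-ansible.git`
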

@@ -38,7 +38,7 @@ This guide will walk you through setting up a simple 3-node PVC cluster from scr

0. Follow the prompts from the installer ISO. It will ask for a hostname, the system disk device to use, the initial network interface to configure as well as either DHCP or static IP information, and finally either an HTTP URL containing an SSH `authorized_keys` to use for the `deploy` user, or a password for this user if key auth is unavailable.
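
One simple way to provide such a URL, assuming your administrative host is reachable from the nodes during installation, is to temporarily serve your `authorized_keys` file with Python's built-in HTTP server; the file is then available at `http://<administrative_host_ip>:8080/authorized_keys`:

`$ cd ~/.ssh && python3 -m http.server 8080`
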

0. Wait for the installer to complete. It will provide some next steps at the end, and wait for the administrator to acknowledge via an "Enter" key-press. The node will now reboot into the base PVC system.

0. Repeat the above steps for all 3 initial nodes. On boot, they will display their configured IP address to be used in the next steps.

@@ -48,7 +48,7 @@ This guide will walk you through setting up a simple 3-node PVC cluster from scr

0. Verify connectivity from your administrative host to the 3 initial nodes, including SSH access. Accept their host keys as required before proceeding as Ansible does not like those prompts.
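
A quick way to do this, assuming the example addresses `10.0.0.1` through `10.0.0.3` and the `deploy` user created by the installer, is a simple loop from the administrative host; adjust the addresses to match your nodes:

`$ for node in 10.0.0.1 10.0.0.2 10.0.0.3; do ssh deploy@${node} hostname; done`
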

0. Verify your `group_vars` setup from part one, as errors here may require a re-installation and restart of the bootstrap process.

0. Perform the initial bootstrap. From the `pvc-ansible` repository directory, execute the following `ansible-playbook` command, replacing `<cluster_name>` with the Ansible group name from the `hosts` file. Make special note of the additional `bootstrap=yes` variable, which tells the playbook that this is an initial bootstrap run.
`$ ansible-playbook -v -i hosts pvc.yml -l <cluster_name> -e bootstrap=yes`

@@ -78,7 +78,7 @@ All steps in this and following sections can be performed using either the CLI c

`$ pvc storage ceph osd add --weight 1.0 pvchv3 /dev/sdb`
`$ pvc storage ceph osd add --weight 1.0 pvchv3 /dev/sdc`

**NOTE:** On the CLI, the `--weight` argument is optional, and defaults to `1.0`. In the API, it must be specified explicitly. OSD weights determine the relative amount of data which can fit onto each OSD. Under normal circumstances, you would want all OSDs to be of identical size, and hence all should have the same weight. If your OSDs are instead different sizes, the weight should be proportional to the size, e.g. `1.0` for a 100GB disk, `2.0` for a 200GB disk, etc. For more details, see the Ceph documentation.

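
For instance, if a hypothetical node `pvchv1` had one 100GB disk and one 200GB disk, the weights might be set proportionally like so:

`$ pvc storage ceph osd add --weight 1.0 pvchv1 /dev/sdb`

`$ pvc storage ceph osd add --weight 2.0 pvchv1 /dev/sdc`
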
**NOTE:** OSD commands wait for the action to complete on the node, and can take some time (up to 30s normally). Be cautious of HTTP timeouts when using the API to perform these steps.

@@ -103,7 +103,7 @@ All steps in this and following sections can be performed using either the CLI c

0. Determine a domain name, IPv4, and/or IPv6 network for your first client network, and any other client networks you may wish to create. For this guide we will create a single "managed" virtual client network with DHCP.
0. Create the virtual network. The general command for an IPv4-only network with DHCP is:

`$ pvc network add <vni_id> --type <type> --description <space-less_description> --domain <domain> --ipnet <ipv4_network_in_CIDR> --gateway <ipv4_gateway_address> --dhcp --dhcp-start <first_address> --dhcp-end <last_address>`

For example, to create the managed (EVPN VXLAN) network `100` with subnet `10.100.0.0/24`, gateway `.1` and DHCP from `.100` to `.199`, run the command as follows:
`$ pvc network add 100 --type managed --description my-managed-network --domain myhosts.local --ipnet 10.100.0.0/24 --gateway 10.100.0.1 --dhcp --dhcp-start 10.100.0.100 --dhcp-end 10.100.0.199`

@@ -158,7 +158,7 @@ The email address of the root user, at the `local_domain`. Usually `root`, but c

* *optional*

A list of additional entries for the `/etc/hosts` files on the nodes. Each list element contains the following sub-elements:

##### `name`

@@ -172,7 +172,7 @@ The IP address of the entry.

* *required*

A list of non-root users, their UIDs, and SSH public keys that are able to access the server. At least one non-root user should be specified to administer the nodes. These users will not have a password set; only key-based login is supported. Each list element contains the following sub-elements:

##### `name`

@@ -248,7 +248,7 @@ The domain name for the network. For the "upstream" network, should usually be `

* *required*

The CIDR-formatted subnet of the network. Individual nodes will be configured with specific IPs in this network in a later setting.

##### `floating_ip`

@@ -453,7 +453,7 @@ Generate using `uuidgen` or `pwgen -s 32` and adjusting length as required.

* *required*

A list of API tokens that are allowed to access the PVC API. At least one should be specified. Each list element contains the following sub-elements:

##### `description`

@@ -491,7 +491,7 @@ The SSL private key, in text form, for the PVC API to use.

* *required*

The UUID for Libvirt to communicate with the Ceph storage cluster. This UUID will be used in all VM configurations for the block device.

Generate using `uuidgen`.

@@ -521,7 +521,7 @@ Generate using `pwgen -s 16` and adjusting length as required.

The username of the PVC DNS aggregator database replication user.

#### `pvc_replication_database_password`

* *required*

@@ -557,7 +557,7 @@ A list of upstream routers to communicate BGP routes to.

* *required*

A list of all nodes in the PVC cluster and their node-specific configurations. Each node must be present in this list. Each list element contains the following sub-elements:

##### `hostname`

@@ -639,5 +639,5 @@ The IPMI password for the node management controller. Unless a per-host override

#### `pvc_<network>_*`

The next set of entries is hard-coded to use the values from the global `networks` list. It should not need to be changed under most circumstances. Refer to the previous sections for specific notes about each entry.


@@ -14,7 +14,7 @@ The API accepts SSL certificate and key files via the `pvc-api.yaml` configurati

Authentication for the API is available using a static list of tokens. These tokens can be any long string, but UUIDs are typical and simple to use. Within `pvc-ansible`, the list of tokens can be specified in the `pvc.yaml` `group_vars` file. Usually, you'd want one token for each user of the API, such as a WebUI, a 3rd-party client, or an administrative user. Within the configuration, each token can have a description; this is mostly for administrative clarity and is not actually used within the API itself.
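
For example, a suitable token can be generated on the administrative host with:

`$ uuidgen`
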

The API provides session-based login using the `/api/v1/auth/login` and `/api/v1/auth/logout` options. If authentication is not enabled, these endpoints return a JSON `message` of `Authentication is disabled` and HTTP code 200.

For one-time authentication, the `token` value can be specified to any API endpoint via the `X-Api-Key` header value. This is only checked if there is no valid session already established. If authentication is enabled, there is no valid session, and no `token` value is specified, the API will return a JSON `message` of `Authentication required` and HTTP code 401.
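
A sketch of both mechanisms with `curl` follows; the node address is a placeholder, and the `token` form field name for the login request is an assumption based on the description above:

`$ curl -c cookies.txt -X POST -d "token=<api_token>" http://pvc-node1.example.tld:7370/api/v1/auth/login`

`$ curl -b cookies.txt http://pvc-node1.example.tld:7370/api/v1/auth/logout`
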

@@ -260,7 +260,7 @@ Return a JSON document containing information about all cluster VMs. If `limit`

Define a new VM with Libvirt XML configuration `xml` (either single-line or human-readable multi-line).

If `node` is specified and is valid, the VM will be assigned to `node` instead of automatically determining the target node. If `node` is specified and not valid, auto-selection occurs instead.

If `selector` is specified and no specific and valid `node` is specified, the automatic node determination will use `selector` to determine the optimal node instead of the default for the cluster.
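
A hypothetical invocation of this endpoint using `curl` might look like the following; the endpoint path and node address are assumptions, and `vm.xml` is a local file containing the Libvirt XML document:

`$ curl -X POST -H "X-Api-Key: <api_token>" --data-urlencode "xml@vm.xml" --data-urlencode "node=pvchv1" http://pvc-node1.example.tld:7370/api/v1/vm`
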

@@ -333,7 +333,7 @@ Return the current host node, and last host node if applicable, for `<vm>`.

Change the current host node for `<vm>` by `action`, using live migration if possible, and using `shutdown` then `start` if not. `action` must be either `migrate` or `unmigrate`.

If `node` is specified and is valid, the VM will be assigned to `node` instead of automatically determining the target node. If `node` is specified and not valid, auto-selection occurs instead.

If `selector` is specified and no specific and valid `node` is specified, the automatic node determination will use `selector` to determine the optimal node instead of the default for the cluster.

@@ -383,7 +383,7 @@ Add a new virtual network to the cluster. `vni` must be a valid VNI, either a vL

* `managed` for PVC-managed, VXLAN-based networks.

`domain` specifies a DNS domain for hosts in the network. DNS is aggregated and provided for all networks on the primary coordinator node.

`ip4_network` specifies a CIDR-formatted IPv4 netblock, usually RFC1918, for the network.

@@ -469,7 +469,7 @@ Return a JSON document containing information about all active NFTables ACLs in

If `limit` is specified, return a JSON document containing information about all active NFTables ACLs with descriptions matching `limit` as fuzzy regex.

If `direction` is specified and is one of `in` or `out`, return a JSON document listing all active NFTables ACLs in the specified direction only. If `direction` is invalid, return a failure.

###### `POST`
* Mandatory values: `description`, `direction`, `rule`

@@ -107,7 +107,7 @@ The (short) hostname of the node; host-specific.

* *required*

Whether to enable the hypervisor functionality of the PVC Daemon or not. This should usually be enabled except in advanced deployment scenarios (such as a dedicated quorum-keeping micro-node or dedicated network routing node).

#### `functions` → `enable_networking`

@@ -162,7 +162,7 @@ The IPv4 address for the gateway of the network. Usually applicable only to the

* *optional*
* *requires* `functions` → `enable_networking`

Configuration for coordinator functions on the node. Optional only if `enable_networking` is `False`. Not optional on non-coordinator hosts, though unused. Contains the following sub-entries.

##### `dns` → `database` → `host`

@@ -210,7 +210,7 @@ The number of keepalive messages that can be missed before a node is considered

* *required*

The number of keepalive messages that can be missed before a node considers itself dead and forcibly resets itself. Note that, due to the large number of reasons a node could become unresponsive, the suicide interval alone should not be relied upon. The default is 0, which disables this functionality. If set, should usually be equal to or less than `fence_intervals` for maximum safety.

#### `system` → `fencing` → `actions` → `successful_fence`

@@ -224,7 +224,7 @@ The action to take regarding VMs once a node is *successfully* fenced, i.e. the

The action to take regarding VMs once fencing of a node *fails*, i.e. the IPMI command to restart the node reports a failure. Can be one of `None`, to perform no action (the default), or `migrate`, to migrate and start all failed VMs on other nodes.

**WARNING:** This functionality is potentially **dangerous** and can result in data loss or corruption in the VM disks; the post-fence migration process *explicitly clears RBD locks on the disk volumes*. It is designed only for specific and advanced use-cases, such as servers that do not reliably report IPMI responses or servers without IPMI (not recommended; see the [cluster architecture documentation](/architecture/cluster)). If this is set to `migrate`, the `suicide_intervals` **must** be set to provide at least some guarantee that the VMs on the node will actually be terminated before this condition triggers. The administrator should think very carefully about their setup and potential failure modes before enabling this option.

#### `system` → `fencing` → `ipmi` → `host`

@@ -26,11 +26,11 @@ As PVC does not currently feature any sort of automated tests, this is the prima

0. Verify console logs are operating (`pvc vm log -f`).

0. Migrate VM to another node via auto-selection and back again (`pvc vm migrate` and `pvc vm unmigrate`).

0. Manually shuffle VM between nodes and verify reachability on each node (`pvc vm move`).

0. Kill the VM and ensure restart occurs (`virsh destroy`).

0. Restart the VM (`pvc vm restart`).