The Base role is optional: if an administrator so chooses, they can bypass this role and configure things manually. That said, for the proper functioning of the PVC role, the Base role should always be applied first.
The PVC role configures all the dependencies of PVC, including storage, networking, and databases, then installs the PVC daemon itself. Specifically, it will, in order:
* Install Ceph, configure and bootstrap a new cluster if `bootstrap=yes` is set, configure the monitor and manager daemons, and start up the cluster ready for the addition of OSDs via the client interface (coordinators only).
* Install, configure, and if `bootstrap=yes` is set, bootstrap a Zookeeper cluster (coordinators only).
Once the entire playbook has run for the first time against a given host, the host will be rebooted to apply all the configured services. On startup, the system should immediately launch the PVC daemon, check in to the Zookeeper cluster, and become ready. The node will be in `flushed` state on its first boot; the administrator will need to run `pvc node unflush <node>` to set the node into active state ready to handle virtual machines. On the first bootstrap run, the administrator will also have to configure storage block devices (OSDs), networks, etc. For full details, see [the main getting started page](/getting-started).
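For example, on a hypothetical node named `hv1` (a sketch only; substitute your own node name):

```
$ pvc node unflush hv1
```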
After cloning the `pvc-ansible` repo, set up a set of configurations for your cluster. One copy of the `pvc-ansible` repository can manage an unlimited number of clusters with differing configurations.
All files created during initial setup should be stored outside the `pvc-ansible` repository, as they will be ignored by the main Git repository by default. It is recommended to set up a separate folder, either standalone or as its own Git repository, to contain your files, then symlink them back into the main repository at the appropriate places outlined below.
Create a `hosts` file containing the clusters as groups, then the list of hosts within each cluster group. The `hosts.default` file can be used as a template.
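For instance, a `hosts` file for a single hypothetical cluster named `cluster1` with three nodes might look like the following (all names are illustrative only):

```
[cluster1]
hv1.cluster1.example.tld
hv2.cluster1.example.tld
hv3.cluster1.example.tld
```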
Create a `files/<cluster>` folder to hold the cluster-created static configuration files. Until the first bootstrap run, this directory will be empty.
Create a `group_vars/<cluster>` folder to hold the cluster configuration variables. The `group_vars/default` directory can be used as an example.
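Taken together, a hypothetical first-time setup for `cluster1`, keeping the files in a separate folder and symlinking them back into the repository as recommended above, might resemble this sketch (all paths and names are illustrative):

```
# Run from within the cloned pvc-ansible repository
$ mkdir -p ~/pvc-cluster-configs/files/cluster1 ~/pvc-cluster-configs/group_vars/cluster1
$ cp hosts.default ~/pvc-cluster-configs/hosts
$ ln -s ~/pvc-cluster-configs/hosts hosts
$ ln -s ~/pvc-cluster-configs/files/cluster1 files/cluster1
$ ln -s ~/pvc-cluster-configs/group_vars/cluster1 group_vars/cluster1
```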
Before bootstrapping a cluster, see the section on [PVC Ansible configuration variables](/manuals/ansible/#pvc-ansible-configuration-variables) to configure the cluster.
Bootstrapping a cluster can be done using the main `pvc.yml` playbook. Generally, a bootstrap run should be limited to the coordinators of the cluster to avoid potential race conditions or strange bootstrap behaviour. The special variable `bootstrap=yes` must be set to indicate that a cluster bootstrap is to be requested.
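A hypothetical bootstrap invocation, limited to the three coordinators of `cluster1` (host names are illustrative; adjust to your inventory), might look like:

```
$ ansible-playbook -i hosts pvc.yml \
    -l hv1.cluster1.example.tld,hv2.cluster1.example.tld,hv3.cluster1.example.tld \
    -e bootstrap=yes
```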
**WARNING:** Do not run the playbook with `bootstrap=yes` *except during the very first run against a freshly-installed set of coordinator nodes*. Running it against an existing cluster will result in the complete failure of the cluster, the destruction of all data, or worse.
### Adding new nodes
Adding new nodes to an existing cluster can be done using the main `pvc.yml` playbook. The new node(s) should be added to the `group_vars` configuration `node_list`, then the playbook run against all hosts in the cluster with no special flags or limits. This will ensure the entire cluster is updated with the new information, while simultaneously configuring the new node.
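For example, after adding the new node to `node_list`, a run against the whole cluster group (again using the illustrative `cluster1` names) might be:

```
$ ansible-playbook -i hosts pvc.yml -l cluster1
```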
For general, day-to-day software updates such as base system updates or upgrading to newer PVC versions, a special playbook, `oneshot/update-pvc-cluster.yml`, is provided. This playbook will gracefully update and upgrade all PVC nodes in the cluster, flush them, reboot them, and then unflush them. This operation should be completely transparent to VMs on the cluster.
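A sketch of such an update run, assuming the same illustrative `cluster1` group:

```
$ ansible-playbook -i hosts oneshot/update-pvc-cluster.yml -l cluster1
```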
For more advanced updates, such as changing configurations in the `group_vars`, the main `pvc.yml` playbook can be used to deploy the changes across all hosts. Note that this may cause downtime due to node reboots if certain configurations change, and it is not recommended to use this process frequently.
# PVC Ansible configuration manual
This manual documents the various `group_vars` configuration options for the `pvc-ansible` framework. We assume that the administrator is generally familiar with Ansible and its operation.
The `group_vars` folder contains configuration variables for all clusters managed by your local copy of `pvc-ansible`. Each cluster has a distinct set of `group_vars` to allow different configurations for each cluster.
This section outlines the various configuration options available in the `group_vars` configuration; the `group_vars/default` directory contains an example set of variables, split into two files (`base.yml` and `pvc.yml`), that set every listed configuration option.
* Settings may be `required`, `optional`, or `ignored`. Ignored settings are used for human-readability in the configuration but are ignored by the actual role.
* Settings may be marked `depends` on another setting. This indicates that, if the other setting is enabled, this setting is very likely `required` as well.
* If a particular `<setting>` is marked `optional`, and a subsequent setting is marked `depends on <setting>`, the subsequent setting is ignored unless `<setting>` is specified.
The domain name of the PVC cluster nodes. This is the domain portion of the FQDN of each node, and should usually be the domain of the `upstream` network.
The IPMI username used by PVC to communicate with the node management controllers. This user should be created on each node's IPMI before deploying the cluster, and should have, at minimum, permission to read and alter the node's power state.
A list of non-root users, their UIDs, and SSH public keys, that are able to access the server. At least one non-root user should be specified to administer the nodes. These users will not have a password set; only key-based login is supported. Each list element contains the following sub-elements:
A dictionary of networks to configure on the nodes.
The dictionary key is used to name the interface file under `/etc/network/interfaces.d`; the `device` value is the real name of the underlying network device (e.g. `iface [device] inet ...`).
The three required networks are: `upstream`, `cluster`, `storage`. If `storage` is configured identically to `cluster`, the two networks will be collapsed into one; for details on this, please see the [documentation about the storage network](/cluster-architecture/#storage-connecting-ceph-daemons-with-each-other-and-with-osds).
Additional networks can also be specified here to automate their configuration. In the above example, a "bondU" interface is configured, which the remaining required networks use as their `raw_device`.
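As a rough illustration only (the full set of supported keys is defined in `group_vars/default` and is not reproduced here; the vLAN device name is hypothetical):

```yaml
networks:
  "bondU":
    device: "bondU"
    # bond-specific options omitted; see group_vars/default for the full schema
  "cluster":
    device: "vlan1002"     # illustrative vLAN device name
    raw_device: "bondU"    # rides on top of the bondU interface above
  # "upstream" and "storage" follow the same pattern and are omitted here
```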
The number of output console lines to log for each VM, to be used by the console log endpoints (`pvc vm log`).
If unset, a default value of "1000" is set in the role defaults.
#### `pvc_vm_shutdown_timeout`
* *optional*
The number of seconds to wait for a VM to `shutdown` before it is forced off.
A value of "0" disables this functionality.
If unset, a default value of "180" is set in the role defaults.
#### `pvc_keepalive_interval`
* *optional*
The number of seconds between node keepalives.
If unset, a default value of "5" is set in the role defaults.
**WARNING**: Changing this value is not recommended except in exceptional circumstances.
#### `pvc_fence_intervals`
* *optional*
The number of keepalive intervals to be missed before other nodes consider a node `dead` and trigger the fencing process. The total time elapsed will be `pvc_keepalive_interval * pvc_fence_intervals`.
If unset, a default value of "6" is set in the role defaults.
**NOTE**: This is not the total time until a node is fenced. A node has a further 6 (hardcoded) `pvc_keepalive_interval`s ("saving throw" attempts) to try to send a keepalive before it is actually fenced. Thus, with the default values, this works out to a total of 60 +/- 5 seconds between a node crashing and it being fenced. An administrator of a very important cluster may want to set this lower, perhaps to 2 or even 1, leaving only the "saving throws", though this is not recommended for most clusters due to the timing overhead of various other subsystems.
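In other words, with the role defaults, the worst-case time from a node crashing to it being fenced is roughly:

```
(pvc_fence_intervals + 6 saving throws) × pvc_keepalive_interval
  = (6 + 6) × 5 s
  = 60 s (± one 5 s keepalive interval)
```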
#### `pvc_suicide_intervals`
* *optional*
The number of keepalive intervals without the ability to send a keepalive before a node considers *itself* to be dead and reboots itself.
A value of "0" disables this functionality.
If unset, a default value of "0" is set in the role defaults.
**WARNING**: This option is provided to allow additional flexibility in fencing behaviour. Normally, it is not safe to set a `pvc_fence_failed_action` of `migrate`, since if the other nodes cannot fence a node, its VMs cannot be safely started on other nodes. The same applies to nodes without IPMI-over-LAN, which cannot be fenced normally. This option provides an alternative way to guarantee this safety, at least in situations where the node can still reliably shut itself down (i.e. it is not hard-locked). The administrator should, however, take special care and thoroughly test their system before using these alternative fencing options in production, as the results could be disastrous.
#### `pvc_fence_successful_action`
* *optional*
The action the cluster should take upon a successful node fence with respect to running VMs. Must be one of, unquoted: `migrate`, `None`.
If unset, a default value of "migrate" is set in the role defaults.
An administrator can set the value "None" to disable automatic VM recovery migrations after a node fence.
#### `pvc_fence_failed_action`
* *optional*
The action the cluster should take upon a failed node fence with respect to running VMs. Must be one of, unquoted: `migrate`, `None`.
If unset, a default value of "None" is set in the role defaults.
**WARNING**: See the warning in the above `pvc_suicide_intervals` section for details on the purpose of this option. Do not set this option to "migrate" unless you have also set `pvc_suicide_intervals` to a non-"0" value and understand the caveats and risks.
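A minimal sketch of the combination described above (the interval value is purely illustrative; see the warning under `pvc_suicide_intervals` before enabling this in production):

```yaml
pvc_suicide_intervals: 10        # illustrative non-zero value; nodes must be able to reboot themselves reliably
pvc_fence_failed_action: migrate
```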
#### `pvc_fence_migrate_target_selector`
* *optional*
The migration selector to use when running a `migrate` command after a node fence. Must be one of, unquoted: `mem`, `load`, `vcpu`, `vms`.
If unset, a default value of "mem" is set in the role defaults.
**NOTE**: These values map to the standard VM meta `selector` options, and determine how nodes select where to run the migrated VMs.
#### `pvc_osd_memory_limit`
* *optional*
The memory limit, in bytes, to pass to the Ceph OSD processes. This is only set once, during cluster bootstrap; subsequent changes to this value must be made manually in the `files/*/ceph.conf` static configuration for the cluster in question.
If unset, a default value of "4294967296" (i.e. 4GB) is set in the role defaults.
As per Ceph documentation, the minimum value possible is "939524096" (i.e. ~1GB), and the default matches the Ceph system default. Setting a lower value is only recommended for systems with relatively low memory availability, where the default of 4GB per OSD is too large; it is recommended to increase the total system memory first before tweaking this setting to ensure optimal storage performance across all workloads.
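For example, to pin each OSD to roughly 2 GB on a memory-constrained node (an illustrative value above the documented minimum of 939524096 bytes):

```yaml
pvc_osd_memory_limit: 2147483648   # 2 GB, in bytes
```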
#### `pvc_zookeeper_heap_limit`
* *optional*
The memory limit to pass to the Zookeeper Java process for its heap.
If unset, a default value of "256M" is set in the role defaults.
The administrator may set this to a lower value on memory-constrained systems or if the memory usage of the Zookeeper process becomes excessive.
#### `pvc_zookeeper_stack_limit`
* *optional*
The memory limit to pass to the Zookeeper Java process for its stack.
If unset, a default value of "512M" is set in the role defaults.
The administrator may set this to a lower value on memory-constrained systems or if the memory usage of the Zookeeper process becomes excessive.
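A sketch for a memory-constrained system (both values are illustrative only, chosen below the role defaults of "256M" heap and "512M" stack):

```yaml
pvc_zookeeper_heap_limit: "128M"
pvc_zookeeper_stack_limit: "256M"
```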
A list of all nodes in the PVC cluster and their node-specific configurations. Each node must be present in this list. Each list element contains the following sub-elements:
The IPMI username for the node management controller. Unless a per-host override is required, should usually use the previously-configured global `username_ipmi_host`. All notes from that entry apply.
The IPMI password for the node management controller. Unless a per-host override is required, should usually use the previously-configured global `passwd_ipmi_host`. All notes from that entry apply.
The device name of the underlying network interface to be used for "bridged"-type client networks. For each "bridged"-type network, an IEEE 802.1Q vLAN and bridge will be created on top of this device to pass these networks. In most cases, using the reflexive `networks['cluster']['raw_device']` or `networks['upstream']['raw_device']` from the Base role is sufficient.
The next set of entries is hard-coded to use the values from the global `networks` list. It should not need to be changed under most circumstances. Refer to the previous sections for specific notes about each entry.