# PVC Ansible architecture
The PVC Ansible setup and management framework is written in Ansible. It consists of two roles: `base` and `pvc`.
## Base role
The Base role configures a node to a specific, standard base Debian system, with a number of PVC-specific tweaks. Some examples include:
* Installing the custom PVC repository at Boniface Labs.
* Removing several unnecessary packages and installing numerous additional packages.
* Automatically configuring network interfaces based on the `group_vars` configuration.
* Configuring several general `sysctl` settings for optimal performance.
* Installing and configuring rsyslog, postfix, ntpd, ssh, and fail2ban.
* Creating the users specified in the `group_vars` configuration.
* Installing custom MOTDs, bashrc files, vimrc files, and other useful configurations for each user.
The end result is a standardized "PVC node" system ready to have the daemons installed by the PVC role.
## PVC role
The PVC role configures all the dependencies of PVC, including storage, networking, and databases, then installs the PVC daemon itself. Specifically, it will, in order:
* Install Ceph, configure and bootstrap a new cluster if `bootstrap=yes` is set, configure the monitor and manager daemons, and start up the cluster ready for the addition of OSDs via the client interface (coordinators only).
* Install, configure, and if `bootstrap=yes` is set, bootstrap a Zookeeper cluster (coordinators only).
* Install, configure, and if `bootstrap=yes` is set, bootstrap a Patroni PostgreSQL cluster for the PowerDNS aggregator (coordinators only).
* Install and configure Libvirt.
* Install and configure FRRouting.
* Install and configure the main PVC daemon and API client, including initializing the PVC cluster (`pvc task init`).
## Completion
Once the entire playbook has run for the first time against a given host, the host will be rebooted to apply all the configured services. On startup, the system should immediately launch the PVC daemon, check in to the Zookeeper cluster, and become ready. The node will be in `flushed` state on its first boot; the administrator will need to run `pvc node unflush <node>` to set the node into active state ready to handle virtual machines.
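For example, to activate the first node after a bootstrap (the node name here is taken from the example configuration later in this manual):
```
pvc node unflush pvchv1
```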
# PVC Ansible configuration manual
This manual documents the various `group_vars` configuration options for the `pvc-ansible` framework. We assume that the administrator is generally familiar with Ansible and its operation.
## General usage
### Initial setup
After cloning the `pvc-ansible` repo, set up a set of configurations for your cluster. One copy of the `pvc-ansible` repository can manage an unlimited number of clusters with differing configurations.
All files created during initial setup should be stored outside the `pvc-ansible` repository, as they will be ignored by the main Git repository by default. It is recommended to set up a separate folder, either standalone or as its own Git repository, to contain your files, then symlink them back into the main repository at the appropriate places outlined below.
Create a `hosts` file containing the clusters as groups, then the list of hosts within each cluster group. The `hosts.default` file can be used as a template.
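A minimal sketch of such a `hosts` file, assuming a single cluster group named `mycluster` containing the three example nodes used later in this manual:
```
[mycluster]
pvchv1
pvchv2
pvchv3
```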
Create a `files/<cluster>` folder to hold the cluster-created static configuration files. Until the first bootstrap run, this directory will be empty.
Create a `group_vars/<cluster>` folder to hold the cluster configuration variables. The `group_vars/default` directory can be used as an example.
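Putting these together, one possible layout for a standalone configuration folder (the `pvc-configs` name is purely illustrative), with each entry symlinked back into the corresponding place in the `pvc-ansible` repository:
```
pvc-configs/
├── hosts
├── files/
│   └── mycluster/        <- empty until the first bootstrap run
└── group_vars/
    └── mycluster/
        ├── base.yml
        └── pvc.yml
```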
### Bootstrapping a cluster
Before bootstrapping a cluster, see the section on [PVC Ansible configuration variables](/manuals/ansible#pvc-ansible-configuration-variables) to configure the cluster.
Bootstrapping a cluster can be done using the main `pvc.yml` playbook. Generally, a bootstrap run should be limited to the coordinators of the cluster to avoid potential race conditions or strange bootstrap behaviour. The special variable `bootstrap=yes` must be set to indicate that a cluster bootstrap is being requested.
**WARNING:** Do not run the playbook with `bootstrap=yes` *except during the very first run against a freshly-installed set of coordinator nodes*. Running it against an existing cluster will result in the complete failure of the cluster, the destruction of all data, or worse.
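As a sketch, a first-ever bootstrap run limited to the example coordinator nodes (the inventory and host names are assumptions carried over from the examples in this manual) might look like:
```
ansible-playbook -i hosts pvc.yml -l pvchv1,pvchv2,pvchv3 -e bootstrap=yes
```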
### Adding new nodes
Adding new nodes to an existing cluster can be done using the main `pvc.yml` playbook. The new node(s) should be added to the `group_vars` configuration `node_list`, then the playbook run against all hosts in the cluster with no special flags or limits. This will ensure the entire cluster is updated with the new information, while simultaneously configuring the new node.
### Reconfiguration and software updates
After modifying configuration settings in the `group_vars`, or to update PVC to the latest version on a new release, deployment of the updated configuration can be done using the main `pvc.yml` playbook. The configuration should be updated if required, then the playbook run against all hosts in the cluster with no special flags or limits.
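A sketch of such an update run against the whole example cluster, with no bootstrap variable set:
```
ansible-playbook -i hosts pvc.yml -l mycluster
```
The same invocation applies when adding new nodes as described above.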
## PVC Ansible configuration variables
The `group_vars` folder contains configuration variables for all clusters managed by your local copy of `pvc-ansible`. Each cluster has a distinct set of `group_vars` to allow different configurations for each cluster.
This section outlines the various configuration options available in the `group_vars` configuration; the `group_vars/default` directory contains an example set of variables, split into two files (`base.yml` and `pvc.yml`), that set every listed configuration option.
### Conventions
* Settings may be `required`, `optional`, or `ignored`. Ignored settings are used for human-readability in the configuration but are ignored by the actual role.
* Settings may `depends` on other settings. This indicates that, if one setting is enabled, the other setting is very likely `required` by that setting.
* If a particular `<setting>` is marked `optional`, and a later setting is marked `depends on <setting>`, the latter is ignored unless `<setting>` is specified.
### `base.yml`
Example configuration:
```
---
local_domain: upstream.local

username_ipmi_host: "pvc"
passwd_ipmi_host: "MyPassword2019"

passwdhash_root: "$6$shadowencryptedpassword"

logrotate_keepcount: 7
logrotate_interval: daily

username_email_root: root

hosts:
  - name: testhost
    ip: 127.0.0.1

admin_users:
  - name: "myuser"
    uid: 500
    keys:
      - "ssh-ed25519 MyKey 2019-06"

networks:
  "upstream":
    device: "bondU"
    type: "bond"
    bond_mode: "802.3ad"
    bond_devices:
      - "enp1s0f0"
      - "enp1s0f1"
    mtu: 1500
    domain: "{{ local_domain }}"
    subnet: "192.168.100.0/24"
    floating_ip: "192.168.100.10/24"
    gateway_ip: "192.168.100.1"
  "cluster":
    device: "vlan1001"
    type: "vlan"
    raw_device: "bondU"
    mtu: 1500
    domain: "pvc-cluster.local"
    subnet: "10.0.0.0/24"
    floating_ip: "10.0.0.254/24"
  "storage":
    device: "vlan1002"
    type: "vlan"
    raw_device: "bondU"
    mtu: 1500
    domain: "pvc-storage.local"
    subnet: "10.0.1.0/24"
    floating_ip: "10.0.1.254/24"
```
#### `local_domain`
* *required*
The domain name of the PVC cluster nodes. This is the domain portion of the FQDN of each node, and should usually be the domain of the `upstream` network.
#### `username_ipmi_host`
* *optional*
* *requires* `passwd_ipmi_host`
The IPMI username used by PVC to communicate with the node management controllers. This user should be created on each node's IPMI before deploying the cluster, and should have, at minimum, permission to read and alter the node's power state.
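As one possible way to prepare such a user on a node's BMC before deployment, a sketch using `ipmitool` run locally on each node (the user ID `2`, channel `1`, and `Operator` privilege level `3` are assumptions that vary by IPMI vendor; the credentials shown are from the example configuration above):
```
ipmitool user set name 2 pvc
ipmitool user set password 2 MyPassword2019
ipmitool user priv 2 3 1
ipmitool user enable 2
```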
#### `passwd_ipmi_host`
* *optional*
* *requires* `username_ipmi_host`
The IPMI password, in plain text, used by PVC to communicate with the node management controllers.
Generate using `pwgen -s 16`, adjusting the length as required.
#### `passwdhash_root`
* *required*
The `/etc/shadow`-encoded root password for all nodes.
Generate using `pwgen -s 16`, adjusting length as required, and encrypt using `mkpasswd -m sha-512 <password> $( pwgen -s 8 )`.
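For example, a sketch of generating both values (the outputs shown are shortened and purely illustrative; the second `pwgen` call supplies a random salt):
```
$ pwgen -s 16
oXT2u3pteMrGINmh
$ mkpasswd -m sha-512 oXT2u3pteMrGINmh $( pwgen -s 8 )
$6$G5nw2Qyh$...
```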
#### `logrotate_keepcount`
* *required*
The number of `logrotate_interval` periods for which to keep system logs.
#### `logrotate_interval`
* *required*
The interval for rotating system logs. Must be one of: `hourly`, `daily`, `weekly`, `monthly`.
#### `username_email_root`
* *required*
The email address of the root user, at the `local_domain`. Usually `root`, but can be something like `admin` if needed.
#### `hosts`
* *optional*
A list of additional entries for the `/etc/hosts` files on the nodes. Each list element contains the following sub-elements:
##### `name`
The hostname of the entry.
##### `ip`
The IP address of the entry.
#### `admin_users`
* *required*
A list of non-root users, their UIDs, and SSH public keys, that are able to access the server. At least one non-root user should be specified to administer the nodes. These users will not have a password set; only key-based login is supported. Each list element contains the following sub-elements:
##### `name`
* *required*
The name of the user.
##### `uid`
* *required*
The Linux UID of the user. Should usually start at 500 and increment for each user.
##### `keys`
* *required*
A list of SSH public key strings, in `authorized_keys` line format, for the user.
#### `networks`
* *required*
A dictionary of networks to configure on the nodes. Three networks are required by all PVC clusters, though additional networks may be configured here as well.
The three required networks are: `upstream`, `cluster`, `storage`.
Within each `network` element, the following options may be specified:
##### `device`
* *required*
The network device name.
##### `type`
* *required*
The type of network device. Must be one of: `nic`, `bond`, `vlan`.
##### `bond_mode`
* *required* if `type` is `bond`
The Linux bonding (`ifenslave`) mode for the interface. Must be a valid Linux bonding mode.
##### `bond_devices`
* *required* if `type` is `bond`
The list of physical (`nic`) interfaces to bond.
##### `raw_device`
* *required* if `type` is `vlan`
The underlying interface for the vLAN.
##### `mtu`
* *required*
The MTU of the interface. Ensure that the underlying network infrastructure can support the configured MTU.
##### `domain`
* *required*
The domain name for the network. For the `upstream` network, should usually be `local_domain`.
##### `subnet`
* *required*
The CIDR-formatted subnet of the network. Individual nodes will be configured with specific IPs in this network in a later setting.
##### `floating_ip`
* *required*
A CIDR-formatted IP address in the network to act as the cluster floating IP address. This IP address will follow the primary coordinator.
##### `gateway_ip`
* *optional*
A non-CIDR gateway IP address for the network.
### `pvc.yml`
Example configuration:
```
---
pvc_log_to_file: False
pvc_log_to_stdout: True
pvc_log_colours: False
pvc_log_dates: False
pvc_log_keepalives: True
pvc_log_keepalive_cluster_details: True
pvc_log_keepalive_storage_details: True
pvc_log_console_lines: 1000

pvc_api_listen_address: "0.0.0.0"
pvc_api_listen_port: "7370"
pvc_api_enable_authentication: False
pvc_api_secret_key: ""
pvc_api_tokens:
  - description: "myuser"
    token: ""
pvc_api_enable_ssl: False
pvc_api_ssl_cert: >
  -----BEGIN CERTIFICATE-----
  MIIxxx
  -----END CERTIFICATE-----
pvc_api_ssl_key: >
  -----BEGIN PRIVATE KEY-----
  MIIxxx
  -----END PRIVATE KEY-----

pvc_ceph_storage_secret_uuid: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

pvc_dns_database_name: "pvcdns"
pvc_dns_database_user: "pvcdns"
pvc_dns_database_password: "xxxxxxxx"
pvc_replication_database_user: "replicator"
pvc_replication_database_password: "xxxxxxxx"
pvc_superuser_database_user: "postgres"
pvc_superuser_database_password: "xxxxxxxx"

pvc_asn: "65500"
pvc_routers:
  - "192.168.100.1"

pvc_nodes:
  - hostname: "pvchv1"
    is_coordinator: yes
    node_id: 1
    router_id: "192.168.100.11"
    upstream_ip: "192.168.100.11"
    upstream_cidr: 24
    cluster_ip: "10.0.0.1"
    cluster_cidr: 24
    storage_ip: "10.0.1.1"
    storage_cidr: 24
    ipmi_host: "pvchv1-lom.{{ local_domain }}"
    ipmi_user: "{{ username_ipmi_host }}"
    ipmi_password: "{{ passwd_ipmi_host }}"
  - hostname: "pvchv2"
    is_coordinator: yes
    node_id: 2
    router_id: "192.168.100.12"
    upstream_ip: "192.168.100.12"
    upstream_cidr: 24
    cluster_ip: "10.0.0.2"
    cluster_cidr: 24
    storage_ip: "10.0.1.2"
    storage_cidr: 24
    ipmi_host: "pvchv2-lom.{{ local_domain }}"
    ipmi_user: "{{ username_ipmi_host }}"
    ipmi_password: "{{ passwd_ipmi_host }}"
  - hostname: "pvchv3"
    is_coordinator: yes
    node_id: 3
    router_id: "192.168.100.13"
    upstream_ip: "192.168.100.13"
    upstream_cidr: 24
    cluster_ip: "10.0.0.3"
    cluster_cidr: 24
    storage_ip: "10.0.1.3"
    storage_cidr: 24
    ipmi_host: "pvchv3-lom.{{ local_domain }}"
    ipmi_user: "{{ username_ipmi_host }}"
    ipmi_password: "{{ passwd_ipmi_host }}"

pvc_upstream_device: "{{ networks['upstream']['device'] }}"
pvc_upstream_mtu: "{{ networks['upstream']['mtu'] }}"
pvc_upstream_domain: "{{ networks['upstream']['domain'] }}"
pvc_upstream_subnet: "{{ networks['upstream']['subnet'] }}"
pvc_upstream_floatingip: "{{ networks['upstream']['floating_ip'] }}"
pvc_upstream_gatewayip: "{{ networks['upstream']['gateway_ip'] }}"
pvc_cluster_device: "{{ networks['cluster']['device'] }}"
pvc_cluster_mtu: "{{ networks['cluster']['mtu'] }}"
pvc_cluster_domain: "{{ networks['cluster']['domain'] }}"
pvc_cluster_subnet: "{{ networks['cluster']['subnet'] }}"
pvc_cluster_floatingip: "{{ networks['cluster']['floating_ip'] }}"
pvc_storage_device: "{{ networks['storage']['device'] }}"
pvc_storage_mtu: "{{ networks['storage']['mtu'] }}"
pvc_storage_domain: "{{ networks['storage']['domain'] }}"
pvc_storage_subnet: "{{ networks['storage']['subnet'] }}"
pvc_storage_floatingip: "{{ networks['storage']['floating_ip'] }}"
```
#### `pvc_log_to_file`
* *required*
Whether to log PVC output to the file `/var/log/pvc/pvc.log`. Must be one of, unquoted: `True`, `False`.
#### `pvc_log_to_stdout`
* *required*
Whether to log PVC output to stdout, i.e. `journald`. Must be one of, unquoted: `True`, `False`.
#### `pvc_log_colours`
* *required*
Whether to include ANSI coloured prompts (`>>>`) for status in the log output. Must be one of, unquoted: `True`, `False`.
Requires `journalctl -o cat` or file logging in order to be visible and useful.
If set to False, the prompts will instead be text values.
#### `pvc_log_dates`
* *required*
Whether to include dates in the log output. Must be one of, unquoted: `True`, `False`.
Requires `journalctl -o cat` or file logging in order to be visible and useful (and not clutter the logs with duplicate dates).
#### `pvc_log_keepalives`
* *required*
Whether to log keepalive messages. Must be one of, unquoted: `True`, `False`.
#### `pvc_log_keepalive_cluster_details`
* *required*
* *ignored* if `pvc_log_keepalives` is `False`
Whether to log cluster and node details during keepalive messages. Must be one of, unquoted: `True`, `False`.
#### `pvc_log_keepalive_storage_details`
* *required*
* *ignored* if `pvc_log_keepalives` is `False`
Whether to log storage cluster details during keepalive messages. Must be one of, unquoted: `True`, `False`.
#### `pvc_log_console_lines`
* *required*
The number of output console lines to log for each VM.
#### `pvc_api_listen_address`
* *required*
Address for the API to listen on; `0.0.0.0` indicates all interfaces.
#### `pvc_api_listen_port`
* *required*
Port for the API to listen on.
#### `pvc_api_enable_authentication`
* *required*
Whether to enable authentication on the API. Must be one of, unquoted: `True`, `False`.
#### `pvc_api_secret_key`
* *required*
A secret key used to sign and encrypt API Flask cookies.
Generate using `uuidgen` or `pwgen -s 32`, adjusting the length as required.
#### `pvc_api_tokens`
* *required*
2019-08-08 20:36:25 -04:00
A list of API tokens that are allowed to access the PVC API. At least one should be specified. Each list element contains the following sub-elements:
##### `description`
* *required*
A human-readable description of the token. Not parsed anywhere, but used to make this list human-readable and identify individual tokens by their use.
##### `token`
* *required*
The API token.
Generate using `uuidgen` or `pwgen -s 32`, adjusting the length as required.
#### `pvc_api_enable_ssl`
* *required*
Whether to enable SSL for the PVC API. Must be one of, unquoted: `True`, `False`.
#### `pvc_api_ssl_cert`
* *required* if `pvc_api_enable_ssl` is `True`
The SSL certificate, in text form, for the PVC API to use.
#### `pvc_api_ssl_key`
* *required* if `pvc_api_enable_ssl` is `True`
The SSL private key, in text form, for the PVC API to use.
#### `pvc_ceph_storage_secret_uuid`
* *required*
The UUID for Libvirt to communicate with the Ceph storage cluster. This UUID will be used in all VM configurations for the block device.
Generate using `uuidgen`.
#### `pvc_dns_database_name`
* *required*
The name of the PVC DNS aggregator database.
#### `pvc_dns_database_user`
* *required*
The username of the PVC DNS aggregator database user.
#### `pvc_dns_database_password`
* *required*
The password of the PVC DNS aggregator database user.
Generate using `pwgen -s 16`, adjusting the length as required.
#### `pvc_replication_database_user`
* *required*
The username of the PVC DNS aggregator database replication user.
#### `pvc_replication_database_password`
* *required*
The password of the PVC DNS aggregator database replication user.
Generate using `pwgen -s 16`, adjusting the length as required.
#### `pvc_superuser_database_user`
* *required*
The username of the PVC DNS aggregator database superuser.
#### `pvc_superuser_database_password`
* *required*
The password of the PVC DNS aggregator database superuser.
Generate using `pwgen -s 16`, adjusting the length as required.
#### `pvc_asn`
* *required*
The private autonomous system number used for BGP updates to upstream routers.
#### `pvc_routers`
A list of upstream routers to communicate BGP routes to.
#### `pvc_nodes`
* *required*
A list of all nodes in the PVC cluster and their node-specific configurations. Each node must be present in this list. Each list element contains the following sub-elements:
##### `hostname`
* *required*
The (short) hostname of the node.
##### `is_coordinator`
* *required*
Whether the node is a coordinator. Must be one of, unquoted: `yes`, `no`.
##### `node_id`
* *required*
The ID number of the node. Should normally match the number suffix of the `hostname`.
##### `router_id`
* *required*
The BGP router-id value for upstream route exchange. Should normally match the `upstream_ip`.
##### `upstream_ip`
* *required*
The non-CIDR IP address of the node in the `upstream` network.
##### `upstream_cidr`
* *required*
The CIDR bit mask of the node `upstream_ip` address. Must match the `upstream` network.
##### `cluster_ip`
* *required*
The non-CIDR IP address of the node in the `cluster` network.
##### `cluster_cidr`
* *required*
The CIDR bit mask of the node `cluster_ip` address. Must match the `cluster` network.
##### `storage_ip`
* *required*
The non-CIDR IP address of the node in the `storage` network.
##### `storage_cidr`
* *required*
The CIDR bit mask of the node `storage_ip` address. Must match the `storage` network.
##### `ipmi_host`
* *required*
The IPMI hostname or non-CIDR IP address of the node management controller. Must be reachable by all nodes.
##### `ipmi_user`
* *required*
The IPMI username for the node management controller. Unless a per-host override is required, should usually use the previously-configured global `username_ipmi_host`. All notes from that entry apply.
##### `ipmi_password`
* *required*
The IPMI password for the node management controller. Unless a per-host override is required, should usually use the previously-configured global `passwd_ipmi_host`. All notes from that entry apply.
#### `pvc_<network>_*`
The next set of entries is hard-coded to use the values from the global `networks` list. It should not need to be changed under most circumstances. Refer to the previous sections for specific notes about each entry.