Update README to match other repositories
This commit is contained in:
parent 463c1985d3
commit 9fe3e438ec

README.md (58)

@@ -1,16 +1,42 @@
# PVC Ansible

<p align="center">
<img alt="Logo banner" src="https://docs.parallelvirtualcluster.org/en/latest/images/pvc_logo_black.png"/>
<br/><br/>
<a href="https://www.parallelvirtualcluster.org"><img alt="Website" src="https://img.shields.io/badge/Website-www.parallelvirtualcluster.org-blue"/></a>
<a href="https://github.com/parallelvirtualcluster/pvc"><img alt="License" src="https://img.shields.io/github/license/parallelvirtualcluster/pvc"/></a>
<a href="https://github.com/psf/black"><img alt="Code style: Black" src="https://img.shields.io/badge/code%20style-black-000000.svg"/></a>
<a href="https://github.com/parallelvirtualcluster/pvc/releases"><img alt="Latest Release" src="https://img.shields.io/github/release-pre/parallelvirtualcluster/pvc"/></a>
<a href="https://docs.parallelvirtualcluster.org/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/parallelvirtualcluster/badge/?version=latest"/></a>
</p>

**NOTICE FOR GITHUB**: This repository is a read-only mirror of the PVC repositories from my personal GitLab instance. Pull requests submitted here will not be merged. Issues submitted here will, however, be treated as authoritative.

## What is PVC?

A set of Ansible roles to set up PVC nodes. Part of the [Parallel Virtual Cluster system](https://github.com/parallelvirtualcluster/pvc).

PVC is a Linux KVM-based hyperconverged infrastructure (HCI) virtualization cluster solution that is fully Free Software, scalable, redundant, self-healing, self-managing, and designed for administrator simplicity. It is an alternative to other HCI solutions such as Ganeti, Harvester, Nutanix, and VMWare, as well as to other common virtualization stacks such as ProxMox and OpenStack.

PVC is a complete HCI solution, built from well-known and well-trusted Free Software tools, to assist an administrator in creating and managing a cluster of servers to run virtual machines, as well as self-managing several important aspects including storage failover, node failure and recovery, virtual machine failure and recovery, and network plumbing. It is designed to act consistently, reliably, and unobtrusively, letting the administrator concentrate on more important things.

PVC is highly scalable. From a minimum (production) node count of 3, up to 12 or more, and supporting many dozens of VMs, PVC scales along with your workload and requirements. Deploy a cluster once and grow it as your needs expand.

As a consequence of its features, PVC makes administrating very high-uptime VMs extremely easy, featuring VM live migration, built-in always-enabled shared storage with transparent multi-node replication, and consistent network plumbing throughout the cluster. Nodes can also be seamlessly removed from or added to service, with zero VM downtime, to facilitate maintenance, upgrades, or other work.

PVC also features an optional, fully customizable VM provisioning framework, designed to automate and simplify VM deployments using custom provisioning profiles, scripts, and CloudInit userdata API support.

Installation of PVC is accomplished by two main components: a [Node installer ISO](https://github.com/parallelvirtualcluster/pvc-installer) which creates on-demand installer ISOs, and an [Ansible role framework](https://github.com/parallelvirtualcluster/pvc-ansible) to configure, bootstrap, and administrate the nodes. Installation can also be fully automated with a companion [cluster bootstrapping system](https://github.com/parallelvirtualcluster/pvc-bootstrap). Once up, the cluster is managed via an HTTP REST API, accessible via a Python Click CLI client ~~or WebUI~~ (eventually).

Just give it physical servers, and it will run your VMs without you having to think about it, all in just an hour or two of setup time.

More information about PVC, its motivations, the hardware requirements, and setting up and managing a cluster [can be found over at our docs page](https://docs.parallelvirtualcluster.org).

# PVC Ansible Management Framework

This repository contains a set of Ansible roles for setting up and managing PVC nodes.

Tested on Ansible 2.2 through 2.10; it is not guaranteed to work properly on older or newer versions.

# Roles

This repository contains two roles:

### base

This role provides a standardized and configured base system for PVC. This role expects that
the system was installed via the PVC installer ISO, which results in a Debian Buster system.

@@ -18,18 +44,18 @@ the system was installed via the PVC installer ISO, which results in a Debian Bu
This role is optional; the administrator may configure the base system however they please so
long as the `pvc` role can be installed thereafter.

### pvc

This role configures the various subsystems required by PVC, including Ceph, Libvirt, Zookeeper,
FRR, and Patroni, as well as the main PVC components themselves.
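
A minimal playbook applying both roles might look like the following sketch (the play target and `become` usage are illustrative assumptions; this repository ships its own `pvc.yml` playbook, referenced below):

```yaml
# Hypothetical minimal playbook: apply the base and pvc roles to all PVC nodes
- hosts: all
  become: yes
  roles:
    - base
    - pvc
```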

# Variables

A default example set of configuration variables can be found in `group_vars/default/`.

A full explanation of all variables can be found in [the manual](https://parallelvirtualcluster.readthedocs.io/en/latest/manuals/ansible/).
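
For example, a new per-cluster configuration could be started by copying the example set (the cluster name `mycluster` is a placeholder for illustration):

```
cp -a group_vars/default group_vars/mycluster
```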

# Using

*NOTE:* These roles expect a Debian 12.X (Bookworm) system specifically (as of PVC 0.9.100).
This is currently the only operating environment supported for PVC. This role MAY work

@@ -52,19 +78,3 @@ For full details, please see the general [PVC install documentation](https://par
0. Run the `pvc.yml` playbook against the servers. If this is the very first run for a given
   cluster, use the `-e do_bootstrap=yes` variable to ensure the Ceph, Patroni, and PVC
   clusters are initialized.
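
For example, a first bootstrap run might look like this (the inventory path and limit group are assumptions; adjust to your layout):

```
ansible-playbook -i hosts -l mycluster pvc.yml -e do_bootstrap=yes
```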

## License

Copyright (C) 2018-2021 Joshua M. Boniface <joshua@boniface.me>

This repository, and all contained files, is free software: you can
redistribute it and/or modify it under the terms of the GNU General
Public License as published by the Free Software Foundation, version 3.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

@@ -60,72 +60,80 @@ ipmi:
password: "{{ root_password }}"
|
||||
pvc:
|
||||
username: "host"
|
||||
password: ""
|
||||
password: "" # Set a random password here
|
||||
# > use pwgen to generate
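                    # >   e.g. "pwgen -s 24 1" will generate one suitable random 24-character password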
  hosts:
    "hv1":  # This name MUST match the Ansible inventory_hostname's first portion, i.e. "inventory_hostname.split('.')[0]"
      hostname: hv1-lom  # A valid short name (e.g. from /etc/hosts) or an FQDN must be used here and it must resolve to address.
                         # PVC connects to this *hostname* for fencing.
      address: 10.100.0.101  # The IPMI address should usually be in the "upstream" network, but can be routed if required
      netmask: 255.255.255.0
      gateway: 10.100.0.254
      channel: 1  # Optional: defaults to "1" if not set; defines the IPMI LAN channel which is usually 1
    "hv2":  # This name MUST match the Ansible inventory_hostname's first portion, i.e. "inventory_hostname.split('.')[0]"
      hostname: hv2-lom  # A valid short name (e.g. from /etc/hosts) or an FQDN must be used here and it must resolve to address.
                         # PVC connects to this *hostname* for fencing.
      address: 10.100.0.102
      netmask: 255.255.255.0
      gateway: 10.100.0.254
      channel: 1  # Optional: defaults to "1" if not set; defines the IPMI LAN channel which is usually 1
    "hv3":  # This name MUST match the Ansible inventory_hostname's first portion, i.e. "inventory_hostname.split('.')[0]"
      hostname: hv3-lom  # A valid short name (e.g. from /etc/hosts) or an FQDN must be used here and it must resolve to address.
                         # PVC connects to this *hostname* for fencing.
      address: 10.100.0.103
      netmask: 255.255.255.0
      gateway: 10.100.0.254
      channel: 1  # Optional: defaults to "1" if not set; defines the IPMI LAN channel which is usually 1

# IPMI user configuration
# > Adjust this based on the specific hardware you are using; the cluster_hardware variable is
#   used as the key in this dictionary.
# > If you run multiple clusters with different hardware, it may be prudent to move this to an
#   'all' group_vars file instead.
ipmi_user_configuration:
  "default":
    channel: 1  # The IPMI user channel, usually 1
    admin:  # Configuration for the Admin user
      id: 1  # The user ID, usually 1 for the Admin user
      role: 0x4  # ADMINISTRATOR privileges
      username: "{{ ipmi['users']['admin']['username'] }}"  # Loaded from the above section
      password: "{{ ipmi['users']['admin']['password'] }}"  # Loaded from the above section
    pvc:  # Configuration for the PVC user
      id: 2  # The user ID, usually 2 for the PVC user
      role: 0x4  # ADMINISTRATOR privileges
      username: "{{ ipmi['users']['pvc']['username'] }}"
      password: "{{ ipmi['users']['pvc']['password'] }}"

# Log rotation configuration
# > The defaults here are usually sufficient and should not need to be changed without good reason
logrotate_keepcount: 7
logrotate_interval: daily

# Root email name (usually "root")
# > Can be used to send email destined for the root user (e.g. cron reports) to a real email address if desired
username_email_root: root

# Hosts entries
# > Define any static `/etc/hosts` entries here; the provided example shows the format but should be removed
hosts:
  - name: test
    ip: 1.2.3.4

# Administrative shell users for the cluster
# > These users will be permitted SSH access to the cluster, with the user created automatically and its
#   SSH public keys set based on the provided lists. In addition, all keys will be allowed access to the
#   Ansible deploy user for managing the cluster
admin_users:
  - name: "myuser"  # Set the username
    uid: 500  # Set the UID; the first admin user should be 500, then 501, 502, etc.
    keys:
      # These SSH public keys will be added if missing
      - "ssh-ed25519 MyKey 2019-06"
    removed:
      # These SSH public keys will be removed if present
      - "ssh-ed25519 ObsoleteKey 2017-01"

# Backup user SSH user keys, for remote backups separate from administrative users (e.g. rsync)
# > Uncomment to activate this functionality.
# > Useful for tools like BackupPC (the author's preferred backup tool) or remote rsync backups.
#backup_keys:
#  - "ssh-ed25519 MyKey 2019-06"

@@ -135,11 +143,9 @@ admin_users:
# > Three names are reserved for the PVC-specific interfaces: upstream, cluster, and storage; others
#   may be used at will to describe the other devices. These devices have IP info which is then written
#   into `pvc.conf`.
# > Usually, the Upstream network provides Internet connectivity for nodes in the cluster, and all
#   nodes are part of it regardless of function for this reason; an optional, advanced configuration
#   will have only coordinators in the upstream network, however this configuration is out of the scope
#   of this role.
# > All devices should be using the predictable device name format (i.e. enp1s0f0 instead of eth0). If
#   you do not know these names, consult the manual of your selected node hardware, or boot a Linux
#   LiveCD to see the generated interface configuration.
# > This example configuration is one the author uses frequently, to demonstrate all possible options.
#   First, two base NIC devices are set with some custom ethtool options; these are optional of course.
#   The "timing" value for a "custom_options" entry must be "pre" or "post". The command can include $IFACE

@@ -151,6 +157,7 @@ networks:
  enp1s0f0:
    device: enp1s0f0
    type: nic
    mtu: 9000  # Forms a post-up ip link set $IFACE mtu statement; a high MTU is recommended for optimal backend network performance
    custom_options:
      - timing: pre  # Forms a pre-up statement
        command: ethtool -K $IFACE rx-gro-hw off

@@ -159,6 +166,7 @@ networks:
  enp1s0f1:
    device: enp1s0f1
    type: nic
    mtu: 9000  # Forms a post-up ip link set $IFACE mtu statement; a high MTU is recommended for optimal backend network performance
    custom_options:
      - timing: pre  # Forms a pre-up statement
        command: ethtool -K $IFACE rx-gro-hw off
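    # > For reference, options like the above should render an ifupdown stanza roughly like
    # > this (an illustrative approximation, not the literal template output):
    # >   iface enp1s0f1 inet manual
    # >       pre-up ethtool -K $IFACE rx-gro-hw off
    # >       post-up ip link set $IFACE mtu 9000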

@@ -167,36 +175,36 @@ networks:
  bond0:
    device: bond0
    type: bond
    bond_mode: 802.3ad  # Can also be active-backup for active-passive failover, but LACP is advised
    bond_devices:
      - enp1s0f0
      - enp1s0f1
    mtu: 9000  # Forms a post-up ip link set $IFACE mtu statement; a high MTU is recommended for optimal backend network performance
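    # > Once a node is up, LACP negotiation state for this bond can be inspected with:
    # >   cat /proc/net/bonding/bond0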
  upstream:
    device: vlan1000
    type: vlan
    raw_device: bond0
    mtu: 1500  # Use a lower MTU on upstream for compatibility with upstream networks to avoid fragmentation
    domain: "{{ local_domain }}"  # This should be the local_domain for the upstream network
    subnet: 10.100.0.0  # The CIDR subnet address without the netmask
    netmask: 24  # The CIDR netmask
    floating_ip: 10.100.0.250  # The floating IP used by the cluster primary coordinator; should be a high IP that won't conflict with any node IDs
    gateway_ip: 10.100.0.254  # The default gateway IP
  cluster:
    device: vlan1001
    type: vlan
    raw_device: bond0
    mtu: 9000  # Use a higher MTU on cluster for performance
    domain: pvc-cluster.local  # This domain is arbitrary; using this default example is a good practice
    subnet: 10.0.0.0  # The CIDR subnet address without the netmask; this should be an UNROUTED network (no gateway)
    netmask: 24  # The CIDR netmask
    floating_ip: 10.0.0.254  # The floating IP used by the cluster primary coordinator; should be a high IP that won't conflict with any node IDs
  storage:
    device: vlan1002
    type: vlan
    raw_device: bond0
    mtu: 9000  # Use a higher MTU on storage for performance
    domain: pvc-storage.local  # This domain is arbitrary; using this default example is a good practice
    subnet: 10.0.1.0  # The CIDR subnet address without the netmask; this should be an UNROUTED network (no gateway)
    netmask: 24  # The CIDR netmask
    floating_ip: 10.0.1.254  # The floating IP used by the cluster primary coordinator; should be a high IP that won't conflict with any node IDs
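    # > After deployment, the resulting per-interface addressing on a node can be spot-checked with:
    # >   ip -br addr show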