Improve formatting

Joshua Boniface 2024-12-27 21:08:17 -05:00
parent a782113d44
commit c26034381b
1 changed file with 30 additions and 30 deletions


@@ -30,7 +30,7 @@ You will also need a switch to connect the nodes, capable of vLAN trunks passing
### Node Procurement
-0. Select your physical nodes. Some examples are outlined in the Cluster Architecture documentation linked above. For purposes of this guide, we will be using a set of 3 Dell PowerEdge R430 servers.
+Select your physical nodes. Some examples are outlined in the Cluster Architecture documentation linked above. For purposes of this guide, we will be using a set of 3 Dell PowerEdge R430 servers.
📝 **NOTE** This example selection sets some definitions below. For instance, we will refer to the "iDRAC" rather than using any other term for the integrated lights-out management/IPMI system, for clarity and consistency going forward. Adjust this to your cluster accordingly.
@@ -42,7 +42,7 @@ You will also need a switch to connect the nodes, capable of vLAN trunks passing
0. If applicable to your systems, create any hardware RAID arrays for system disks now. As outlined in the Cluster Architecture documentation, the system disk of a PVC system should be a hardware RAID-1 of two relatively low-capacity SSDs (120-240GB). In our example we only use a single system SSD disk, and even in production this may be fine, but will cause the loss of a node should the disk ever fail.
-0. Ensure that all data OSD disks are set to "non-RAID" mode, i.e. direct host pass-through. These disks should be exposed directly to the operating system unmolested.
+0. Ensure that all data OSD disks are set to "non-RAID" mode, also known as "IT mode" or direct host pass-through. These disks should be exposed directly to the operating system unmolested.
📝 **NOTE** Some RAID controllers, for instance HP "Smart" Array controllers, do not permit direct pass-through. While we do not recommend using such systems for PVC, you can work around this by creating single-disk RAID-0 volumes, though be aware that doing so will result in missing SMART data for the disks and potential instability. As outlined in the architecture documentation, avoid such systems if at all possible!
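As a rough check (not part of the guide itself) that OSD disks really are exposed as individual pass-through devices rather than hidden behind a controller's virtual disk, something like the following can be run on a node once the operating system is installed; `/dev/sdb` is only a placeholder for one of your OSD disks:

```
# List block devices with their transport and model; each OSD disk should
# appear as its own device, not as one large virtual disk presented by a
# RAID controller.
lsblk -o NAME,SIZE,TYPE,TRAN,MODEL

# SMART data should be readable directly from a passed-through disk.
# Behind a single-disk RAID-0 workaround this usually requires a vendor
# passthrough flag (e.g. "smartctl -d megaraid,N ...") or fails outright,
# which is the missing-SMART-data caveat mentioned in the note above.
smartctl -a /dev/sdb
```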
@@ -933,9 +933,9 @@ Of special note is the `pvc_nodes` section. This must contain a listing of all n
0. If you are using managed networks, on the upstream router, configure one of:
-a) A BGP neighbour relationship with the cluster upstream floating address to automatically learn routes.
+a. A BGP neighbour relationship with the cluster upstream floating address to automatically learn routes.
-b) Static routes for the configured client IP networks towards the cluster upstream floating address.
+a. Static routes for the configured client IP networks towards the cluster upstream floating address.
0. On the upstream router, if required, configure NAT for the managed client networks.
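As a minimal sketch of the static-route option and the NAT step on a Linux-based upstream router, assuming a managed client network of `10.100.0.0/24`, a cluster upstream floating address of `10.0.0.250`, and `eth0` as the router's external interface (all placeholder values, not taken from this guide):

```
# Static route: send traffic for the managed client network towards the
# cluster upstream floating address.
ip route add 10.100.0.0/24 via 10.0.0.250

# Optional NAT: masquerade the managed client network out of the router's
# external interface so client VMs can reach outside networks.
iptables -t nat -A POSTROUTING -s 10.100.0.0/24 -o eth0 -j MASQUERADE
```

Neither command persists across reboots on its own; translate them into your router platform's native configuration (or, for the BGP option, into its routing daemon configuration) as appropriate.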