Formatting and typos in README

parent f133f85a70
commit 78bde283bd

README.md (18 changed lines)

@@ -21,11 +21,11 @@ Your cloud, the best way; just add physical servers.

## Philosophical Overview

-The current state of the private cloud as of the end of 2018 is very weak. On the one hand are the traditional tools, which let you manage a KVM cluster using scripts but requiring mounts of administrator work and manual configuration based off very rough best practices. On the other hand are the "cloud infrastructure" tools, which are either massive and unweildy, complex, and in some cases costly, or simply don't fit the traditional niche of virtualized servers.
+The current state of the private cloud as of the end of 2018 is very weak. On the one hand are the traditional tools, which let you manage a KVM cluster using scripts but require large amounts of administrator work and manual configuration based on very rough best practices. On the other hand are the "cloud infrastructure" tools, which are either massive and unwieldy, complex, and in some cases costly, or simply don't fit the traditional niche of virtualized servers.

-PVC aims to be a middle option - all the features of a modern cloud, such as software-defined storage and networking, full high-availability at every layer, and simple manageability, combined with a very shallow learning curve and minimal complexity for the administrator, all while being completely Free Software-based. It adheres to four main principles, which we will outline in some detail below.
+PVC aims to be a middle option - all the features of a modern cloud, such as software-defined storage and networking, full high-availability at every layer, and software-based management via APIs, combined with a very shallow learning curve and minimal complexity for the administrator on the CLI or WebUI, all while being completely Free Software-based. It adheres to these four main principles, which we will outline in some detail below.

-If these goals sound like something you want out of your cloud solution, give PVC a try!
+If these principles sound like something you want out of your cloud solution, give PVC a try!

#### Be Free Software Forever (or Bust)

@@ -47,21 +47,21 @@ Administrator time is valuable, and every minute you spend babysitting pets is t

PVC is based on a semi-decentralized design with a dynamic number of fully-functional nodes. Each node in the cluster is capable, based on the configuration, of handling any cluster tasks if needed. However in a normal deployment, the first 3 or 5 servers act as cluster "coordinators", taking on a number of management roles, while other nodes connect to the coordinators for state and information. One coordinator provides additional "primary" functionality, such as DHCP services, DNS aggregation, and client network gateways/routing, and this role can pass dynamically between coordinators based on administrator intervention or automated cluster events.
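
As a rough sketch of how the floating "primary" role above could pass between coordinators, a leader election on the shared Zookeeper ensemble is one way to model it. The example below uses the kazoo library's Election recipe; the hostnames, the /primary-coordinator znode path, and the node identifier are illustrative assumptions, not PVC's actual schema.

```python
from kazoo.client import KazooClient

# Connect to the coordinators' Zookeeper ensemble (hostnames are illustrative).
zk = KazooClient(hosts="coord1:2181,coord2:2181,coord3:2181")
zk.start()

# Whichever coordinator wins the election takes on the "primary" role; if it
# fails or steps down, another coordinator wins the next round automatically.
election = zk.Election("/primary-coordinator", identifier="node1")

def act_as_primary():
    # A real daemon would bring up DHCP, DNS aggregation, and client network
    # gateways here, then block until the node should relinquish the role.
    print("this node is now the primary coordinator")

election.run(act_as_primary)
```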

-The coordinator nodes host a number of services, configured at bootstrap time, that are not infinitely scalable across all nodes. These include a Zookeeper cluster for state mangement, MariaDB+Galera SQL cluster for DNS, and various processes supporting the primary node. Coordinators can be replaced, added, or removed by the administrator, though by default any additional nodes are configured as non-coordinators, allowing the cluster to scale out to 100 or more hypervisors while still keeping the databases manageable. Noriceably from other clusters, these functions do not require ever more additional servers to support, and are all built in to the main PVC daemon functionality.
+The coordinator nodes host a number of services, configured at bootstrap time, that are not infinitely scalable across all nodes. These include a Zookeeper cluster for state management, a MariaDB+Galera SQL cluster for DNS, and various processes supporting the primary node. Coordinators can be replaced, added, or removed by the administrator, though by default any additional nodes are configured as non-coordinators, allowing the cluster to scale out to 100 or more hypervisors while still keeping the databases manageable. Notably, compared to other cloud cluster products, these functions do not require ever more additional servers to support, and are all built into the main PVC daemon functionality or a small set of "cluster" VMs which are installed by default at bootstrap.

-The primary database is Zookeeper, which is used to provide the distributed and coordinated state used by the PVC cluster to determine what resources exist, where they live, and where they should run. The Zookeeper cluster is created on the initial coordinators at bootstrap time, and can scale out onto more coordinators later as required.
+The primary database is Zookeeper, which is used to provide the distributed and coordinated state used by the PVC cluster to determine what resources exist, where they live, and when they should run. The Zookeeper cluster is created on the initial coordinators at bootstrap time, and can scale out onto more coordinators later as required.
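
To make the idea of distributed, coordinated state a bit more concrete, the sketch below reads and watches per-node keys in Zookeeper using the kazoo client library; the /nodes/... key layout and hostnames are hypothetical examples, not the schema PVC itself uses.

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="coord1:2181,coord2:2181,coord3:2181")
zk.start()

# Hypothetical layout: each node publishes its state under /nodes/<name>/state.
for node in zk.get_children("/nodes"):
    data, stat = zk.get(f"/nodes/{node}/state")
    print(node, data.decode())

# Watches let every daemon react as soon as another node changes a key, which
# is what keeps the cluster view coordinated without polling; a long-running
# daemon would simply keep the client connected.
@zk.DataWatch("/nodes/node1/state")
def on_state_change(data, stat):
    if data is not None:
        print("node1 state is now", data.decode())
```
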
The secondary database is MariaDB with the Galera multi-master functionality. This database primarily supports DNS aggregation services, providing a unified view of the cluster and its clients in DNS without additional administrator intervention. Some additional information about the provisioning state is kept in the database as an intermediate to being stored in Zookeeper.

-PVC handles both storage and networking as software configurations defined dynamically based on data in the Zookeeper database. It makes use of BGP EVPN to provide limitless, virtual layer 2 networks for clients in the cluster, and networks are isolated by NFT firewalls, with optional DHCP and IPv6 support in client networks. Storage is provided by Ceph for reundant, replicated block devices which scales along with the cluster.
+PVC handles both storage and networking as software configurations defined dynamically based on data in the Zookeeper database. It makes use of BGP EVPN to provide limitless, virtual layer 2 networks for clients in the cluster, and networks are isolated by NFT firewalls, with optional DHCP and IPv6 support in client networks. Storage is provided by Ceph for redundant, replicated block devices, which scale along with the cluster in both performance and size.
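
As a small illustration of the Ceph-backed block storage mentioned above, the sketch below creates a replicated RBD image using Ceph's Python bindings; the pool name "vms", the image name, and the config path are assumptions for the example only.

```python
import rados
import rbd

# Connect to the Ceph cluster using its standard configuration file
# (the path and pool name here are illustrative).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Open an I/O context on a hypothetical VM disk pool and create a 20 GiB
# image; replication across OSDs is handled by the pool's CRUSH rules.
ioctx = cluster.open_ioctx("vms")
rbd.RBD().create(ioctx, "myvm_disk0", 20 * 1024 ** 3)

ioctx.close()
cluster.shutdown()
```
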
### Physical Infrastructure

-PVC requires a very simple physical infrastructure, 1, 3, or more physical servers connected via Ethernet on two flat L2 networks. More complicated topogies are supported during the bootstrapping phase but the simplest configuration should be sufficient for most simple, basic clusters or for learning.
+PVC requires only a very simple physical infrastructure: 1, 3, or more physical servers connected via Ethernet on two flat L2 networks. More complicated topologies are supported during the bootstrapping phase, but the simplest configuration should be sufficient for most simple, basic clusters or for learning.

-Each node requires a single L2 network which provides the client and storage interconnections for the cluster. These roles can be separated into different L2 networks as well. These networks live entirely within the cluster and must not be shared outside the cluster or with other systems. The standard cluster and storage configuration is an RFC1918 /24 network to provide plenty of room for nodes and the supporting cluster VMs, while being scalable up to ~100 hypervisors. A special floating IP is designated in the cluster network to provide a single point of interface to the primary coordinator.
+Each node requires a single L2 network which provides the client and storage interconnections for the cluster. These roles are separated into two distinct L3 networks, allowing them to be split onto different L2 networks if desired. These networks live entirely within the cluster and must not be shared outside the cluster or with other systems. The standard configuration is an RFC1918 /24 network for each role to provide plenty of room for nodes and the supporting cluster VMs, while being scalable up to ~100 hypervisors. A special floating IP is designated in the cluster network to provide a single point of interface to the primary coordinator.
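
To make the addressing scheme above more concrete, here is a minimal sketch of what an RFC1918 /24 layout for the two internal roles might look like; the specific subnets and the convention of using the first host address as the floating IP are assumptions for illustration, not fixed by PVC.

```python
import ipaddress

# Illustrative RFC1918 /24 subnets for the two internal roles; the actual
# subnets are whatever the administrator chooses at bootstrap time.
cluster_net = ipaddress.ip_network("10.0.0.0/24")
storage_net = ipaddress.ip_network("10.0.1.0/24")

hosts = list(cluster_net.hosts())
floating_ip = hosts[0]      # assumed to follow the primary coordinator
node_ips = hosts[1:6]       # e.g. addresses for the first five nodes

print("cluster network:", cluster_net, "/ storage network:", storage_net)
print("floating IP:", floating_ip)
print("node IPs:", [str(ip) for ip in node_ips])
print("usable addresses per role:", len(hosts))
```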

-Each coordinator node, but optionally all nodes, require a second L2 network which provides upstream routing into the cluster. In the simplest configuration, the coordinators are present in this network and share routes to client networks via this network, and receive outside traffic to the client networks through it. PVC provides no NAT support and no explcit firewalling to this network, so any external gateway interfaces should connect into the PVC cluster via this intermediate network for security purposes. A specifal floating IP is designated in the upstream network to provide a single point of interface to the primary coordinator, most importantly for static routing.
+Each coordinator node, but optionally all nodes, requires a second L2 network which provides upstream routing into the cluster. In the simplest configuration, only the coordinators are present in this network, sharing routes to client networks and receiving outside traffic to the client networks through it. PVC provides no NAT support and no explicit firewalling from this network, so any external gateway interfaces should connect into the PVC cluster via this intermediate network for security purposes. A special floating IP is designated in the upstream network to provide a single point of interface to the primary coordinator, most importantly for static routing.
The physical hardware of the nodes depends on the target workload. Generally, at least 32GB RAM and 8 CPU cores (excluding SMT threads) is the minimum for a single node, but extremely small configurations are possible, if very limited. Note that the Ceph storage disks, PVC daemons, and, on coordinator nodes, databases and Ceph monitors, all require additional RAM and CPU power on top of the requirements of virtualized guests, so ensure that each node is tall enough for your workload and then scale out for redundancy.