From 6e9081f8c31aadf5eca4ecd258108fdb410a3c18 Mon Sep 17 00:00:00 2001
From: "Joshua M. Boniface"
Date: Fri, 13 Nov 2020 01:30:02 -0500
Subject: [PATCH] Correct spelling mistakes

---
 docs/about.md                | 14 +++++++-------
 docs/cluster-architecture.md |  4 ++--
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/about.md b/docs/about.md
index 3efb9b44..e5dad2a0 100644
--- a/docs/about.md
+++ b/docs/about.md
@@ -27,11 +27,11 @@ However, the current state of this ecosystem is lacking. At present there are 3
 
 At the high end of the open-source ecosystem, are the "Stacks": OpenStack, CloudStack, and their numerous "vendorware" derivatives. These are large, unwieldy projects with dozens or hundreds of pieces of software to deploy in production, and can often require a large team just to understand and manage them. They're great if you're a large enterprise, building a public cloud, or have a team to get you going. But if you just want to run a small- to medium-sized virtual cluster for your SMB or ISP, they're definitely overkill and will cause you more headaches than they will solve long-term.
 
-At the low end of the open source ecosystem, are what I call the "traditional tools". The biggest name in this space is ProxMox, though other, mostly defunct projects like Ganeti, tangental projects like Corosync/Pacemaker, and even traditional "I just use scripts" methods fit as well. These projecs are great if you want to run a small server or homelab, but they quickly get unwieldy, though for the opposite reason from the Stacks: they're too simplistic, designed around single-host models, and when they provide redundancy at all it is often haphazard and nowhere near production-grade.
+At the low end of the open source ecosystem, are what I call the "traditional tools". The biggest name in this space is ProxMox, though other, mostly defunct projects like Ganeti, tangential projects like Corosync/Pacemaker, and even traditional "I just use scripts" methods fit as well. These projects are great if you want to run a small server or homelab, but they quickly get unwieldy, though for the opposite reason from the Stacks: they're too simplistic, designed around single-host models, and when they provide redundancy at all it is often haphazard and nowhere near production-grade.
 
-Finally, the proprietary solutions like VMWare and Nutanix have entrenched themselves in the industry. They're excellent pieces of software providing just about anything you would need, but this comes at a signficant cost, both in terms of money and also in software freedom and vendor lock-in. The licensing costs of Nutanix for instance can often make even enterprise-grade customers' accountants' heads spin.
+Finally, the proprietary solutions like VMWare and Nutanix have entrenched themselves in the industry. They're excellent pieces of software providing just about anything you would need, but this comes at a significant cost, both in terms of money and also in software freedom and vendor lock-in. The licensing costs of Nutanix for instance can often make even enterprise-grade customers' accountants' heads spin.
 
-PVC seeks to bridge the gaps between these 3 categories. It is fully Free Software like the first two categories, and even more so - PVC is committed to never be "open-core" software and to never hide a single feature behind a paywall; it is able to scale from very small (1 or 3 node) clusters up to a dozen or more nodes, bridging the first two categories as effortlessly as the third does; it makes use of a hyperconverged architecture like ProxMox or Nuntanix to avoid wasting hardware resources on dedicated controller, hypervisor, and storage nodes; it is redundant at every layer from the ground-up, something that is not designed into any other free solution, and is able to tolerate the loss any single disk or entire node with barely a blip, all without administrator intervention; and finally, it is designed to be as simple to use as possible, with an Ansible-based node management framework, a RESTful API client interface, and a consistent, self-documenting CLI administraton tool, allowing an administrator to create and manage their cluster quickly and simply, and then get on with more interesting things.
+PVC seeks to bridge the gaps between these 3 categories. It is fully Free Software like the first two categories, and even more so - PVC is committed to never be "open-core" software and to never hide a single feature behind a paywall; it is able to scale from very small (1 or 3 node) clusters up to a dozen or more nodes, bridging the first two categories as effortlessly as the third does; it makes use of a hyperconverged architecture like ProxMox or Nuntanix to avoid wasting hardware resources on dedicated controller, hypervisor, and storage nodes; it is redundant at every layer from the ground-up, something that is not designed into any other free solution, and is able to tolerate the loss any single disk or entire node with barely a blip, all without administrator intervention; and finally, it is designed to be as simple to use as possible, with an Ansible-based node management framework, a RESTful API client interface, and a consistent, self-documenting CLI administration tool, allowing an administrator to create and manage their cluster quickly and simply, and then get on with more interesting things.
 
 In short, it is a Free Software, scalable, redundant, self-healing, and self-managing private cloud solution designed with administrator simplicity in mind.
 
@@ -81,7 +81,7 @@ The API client uses a dedicated set of Python libraries, packaged as the `pvc-da
 
 The CLI client is a Python Click application, which provides a convenient CLI interface to the API client. It supports connecting to multiple clusters from a single instance, with or without authentication and over both HTTP or HTTPS, including a special "local" cluster if the client determines that an API configuration exists on the local host. Information about the configured clusters is stored in a local JSON document, and a default cluster can be set with an environment variable.
 
-The CLI client is self-documenting using the `-h`/`--help` arguments thoughout, easing the administrator learning curve and providing easy access to command details. A short manual can also be found at the [CLI manual page](/manuals/cli).
+The CLI client is self-documenting using the `-h`/`--help` arguments throughout, easing the administrator learning curve and providing easy access to command details. A short manual can also be found at the [CLI manual page](/manuals/cli).
 
 ## Deployment
 
@@ -109,11 +109,11 @@ PVC might be right for you if:
 2. You want management of storage and networking (a.k.a. "batteries-included") in the same tool.
 3. You want hypervisor-level redundancy, able to tolerate hypervisor downtime seamlessly, for all elements of the stack.
 
-I built PVC for my homelab first, found a perfect usecase with my employer, and think it might be useful to you too.
+I built PVC for my homelab first, found a perfect use-case with my employer, and think it might be useful to you too.
 
 #### Is 3 hypervisors really the minimum?
 
-For a redundant cluster, yes. PVC requires a majority quorum for proper operation at various levels, and the smallest possible majority quorum is 2-of-3; thus 3 nodes is the safe minimum. That said, you can run PVC on a single node for testing/lab purposes without host-level reundancy, should you wish to do so, and it might also be possible to run 2 "main" systems with a 3rd "quorum observer" hosting only the management tools but no VMs, however this is not officially supported.
+For a redundant cluster, yes. PVC requires a majority quorum for proper operation at various levels, and the smallest possible majority quorum is 2-of-3; thus 3 nodes is the safe minimum. That said, you can run PVC on a single node for testing/lab purposes without host-level redundancy, should you wish to do so, and it might also be possible to run 2 "main" systems with a 3rd "quorum observer" hosting only the management tools but no VMs, however this is not officially supported.
 
 ### Feature Questions
 
@@ -137,7 +137,7 @@ You can, but you won't like the results. SSDs, and specifically datacentre-grade
 
 #### What network speed does PVC require?
 
-For optimal performance, nodes should use at least 10-Gigabit Ethernet network interfaces wherever possible, and on large clusters a dedicated 10-Gigabit "storage" network, separate from the "upstream"/"cluster" networks, is strongly recommended. The storage system performance, especially for writes, is more heavily bottlenecked by the network speed than the actual storage device speed when speaking of high-performance disks. 1-Gigabit Ethernet will be sufficient for some usecases and is sufficient for the non-storage networks (VM traffic notwithstanding), but storage performance will become severly limited as the cluster grows. Even slower network speeds (e.g. 100-Megabit) are not sufficient for PVC to operate properly except in very limited testing scenarios.
+For optimal performance, nodes should use at least 10-Gigabit Ethernet network interfaces wherever possible, and on large clusters a dedicated 10-Gigabit "storage" network, separate from the "upstream"/"cluster" networks, is strongly recommended. The storage system performance, especially for writes, is more heavily bottlenecked by the network speed than the actual storage device speed when speaking of high-performance disks. 1-Gigabit Ethernet will be sufficient for some use-cases and is sufficient for the non-storage networks (VM traffic notwithstanding), but storage performance will become severely limited as the cluster grows. Even slower network speeds (e.g. 100-Megabit) are not sufficient for PVC to operate properly except in very limited testing scenarios.
 
 #### What Ceph version does PVC use?
 
diff --git a/docs/cluster-architecture.md b/docs/cluster-architecture.md
index 449b64da..2ba71623 100644
--- a/docs/cluster-architecture.md
+++ b/docs/cluster-architecture.md
@@ -152,7 +152,7 @@ The floating IP address in the storage network can be used as a single point of
 
 Nodes in this network are generally assigned IPs automatically based on their node number (e.g. node1 at `.1`, node2 at `.2`, etc.). The network should be large enough to include all nodes sequentially.
 
-The administrator may choose to collocate the storage network on the same physical interface as the cluster network, or on a separate physical interface. This should be decided based on the size of the cluster and the perceived ratios of client network versus storage traffic. In large (>3 node) or storage-intensive clusters, this network should generally be a separate set of fast physical interfaces, separate from both the upstream and cluster networks, in order to maximize and isolate the storage bandwidth. If the administrator does choose to colocate these networks, they may also share the same IP address, thus eliminating any distinction between the Cluster and Storage networks. The PVC software handles this natively when the Cluster and Storage IPs of a node are identical.
+The administrator may choose to collocate the storage network on the same physical interface as the cluster network, or on a separate physical interface. This should be decided based on the size of the cluster and the perceived ratios of client network versus storage traffic. In large (>3 node) or storage-intensive clusters, this network should generally be a separate set of fast physical interfaces, separate from both the upstream and cluster networks, in order to maximize and isolate the storage bandwidth. If the administrator does choose to collocate these networks, they may also share the same IP address, thus eliminating any distinction between the Cluster and Storage networks. The PVC software handles this natively when the Cluster and Storage IPs of a node are identical.
 
 ### PVC client networks
 
@@ -162,7 +162,7 @@ The first type of client network is the unmanaged bridged network. These network
 
 With this client network type, PVC does no management of the network. This is left entirely to the administrator. It requires switch support and the configuration of the vLANs on the switchports of each node's physical interfaces before enabling the network.
 
-Generally, the same physical network interface will underly both the cluster networks as well as bridged client networks. PVC does however support specifying a separate physical device for bridged client networks, for instance to separate these networks onto a different physical interface from the main cluster networks.
+Generally, the same physical network interface will underlay both the cluster networks as well as bridged client networks. PVC does however support specifying a separate physical device for bridged client networks, for instance to separate these networks onto a different physical interface from the main cluster networks.
 
 #### VXLAN (managed) Client Networks