Improve wording in explanation

Joshua Boniface 2024-09-23 14:09:43 -04:00
parent 7c518abd82
commit 2cc5e727e4
1 changed file with 2 additions and 2 deletions

@@ -21,7 +21,7 @@ Like parts 2 and 3, I'll jump right into the cluster specifications, changes to
## The Cluster Specs (even better)
-Parts 1 and 2 used my own home server setup, based on Dell R430 servers using Broadwell-era Intel Xeon CPUs, for analysis; then part 3 used a more modern AMD Epyc based Dell system through my employer. In this part, we have significantly more powerful machines, featuring a 64-core high speed Epyc processor, 1TB of RAM, and 2x 100GbE ports per node. Like all previous test clusters, there are 3 nodes:
+Parts 1 and 2 used my own home server setup with older components for analysis; part 3 then used more modern AMD Epyc- and NVMe-based Dell systems through my employer. In this part, we have significantly more powerful machines, featuring a 64-core high-speed Epyc processor, 1TB of RAM, and 2x 100GbE ports per node. Like all previous test clusters, there are 3 nodes:
| **Part**           | **node1 + node2 + node3** |
| :-------------------------------------------------------------- | :------------------------ |
@@ -36,7 +36,7 @@ Parts 1 and 2 used my own home server setup, based on Dell R430 servers using Br
The primary hypothesis of this set of benchmarks is that performance scales linearly as more OSD processes are added to the Ceph subsystem. In addition, a secondary hypothesis is that adding additional OSD processes per NVMe disk (i.e. splitting a single NVMe disk into several smaller "virtual" NVMe disks) will increase performance.
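As a minimal sketch of what such a split looks like in practice, Ceph's `ceph-volume` tool can carve multiple OSDs out of a single device via its batch mode and the `--osds-per-device` option; the device paths below are placeholders, not the exact commands used for these tests:

```bash
# Create two OSDs on each listed NVMe device instead of the default one,
# letting ceph-volume handle the LVM carving automatically.
# Device paths are placeholders for the actual NVMe disks in each node.
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1
```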
-Based on the results of the last post, I've focused this test suite mostly on determining the levels of performance scaling and exactly how many OSDs will optimize performance on such a powerful system. CPU sets provided some very contradictory results for NVMe drives in part 3, so I have excluded them from any of the testing here, since I do not believe them to be significantly useful in most workloads. In addition, these tests were conducted on a completely empty cluster, with no VMs active, so these tests are truly of the theoretical maximum performance of the Ceph subsystem on the given hardware and nothing else.
+Based on the results of the last post, I've focused this test suite on two key areas: (1) determining how performance scales across multiple OSD processes; and (2) determining exactly how many OSDs will optimize performance on such a powerful system. CPU sets produced some very contradictory results for NVMe drives in part 3, so I have excluded them from the testing here, since I do not believe them to be significantly useful in most workloads. In addition, these tests were conducted on a completely empty cluster, with no VMs active, so they measure the theoretical maximum performance of the Ceph subsystem on the given hardware and nothing else.
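As a rough sketch of how raw benchmarks like these can be driven with no VMs involved, `fio`'s rbd engine can test an RBD image directly; the pool, image, and client names below are placeholders, and this only illustrates the general approach rather than the exact test suite used here:

```bash
# 4k random-write run against a bare RBD image via fio's rbd engine.
# Pool, image, and client names are placeholders.
fio --name=randwrite-4k --ioengine=rbd --clientname=admin \
    --pool=testpool --rbdname=testimage \
    --rw=randwrite --bs=4k --iodepth=64 --numjobs=1 \
    --runtime=60 --time_based --group_reporting
```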
There are 3 distinct OSD configurations being tested: