Switch to new theme

Joshua Boniface 2024-05-24 13:25:55 -04:00
parent 6739be79d9
commit b583caa8b6
82 changed files with 413 additions and 221 deletions

.gitmodules (6 changed lines)

@@ -1,3 +1,3 @@
[submodule "themes/hugo-theme-m10c"]
path = themes/hugo-theme-m10c
url = https://github.com/joshuaboniface/hugo-theme-m10c.git
[submodule "themes/hugo-blog-awesome"]
path = themes/hugo-blog-awesome
url = https://github.com/hugo-sid/hugo-blog-awesome
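The theme swap itself is a straightforward submodule replacement; a sketch of the usual sequence (commands are assumed, not taken from this commit, with paths from the .gitmodules diff above):

```shell
git submodule deinit -f themes/hugo-theme-m10c
git rm -f themes/hugo-theme-m10c
rm -rf .git/modules/themes/hugo-theme-m10c            # clear the old submodule's cached clone
git submodule add https://github.com/hugo-sid/hugo-blog-awesome themes/hugo-blog-awesome
```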

.hugo_build.lock (new, empty file)

@@ -2,7 +2,7 @@ baseurl = "https://www.boniface.me"
languageCode = "en-us"
title = "Joshua Boniface, sysadmin"
author = "Joshua Boniface"
theme = "hugo-theme-m10c"
theme = "hugo-blog-awesome"
contentdir = "content"
publishdir = "public"

@@ -1,13 +1,12 @@
+++
class = "post"
date = "2017-02-10T01:35:38-05:00"
tags = []
title = "Build A Raspberry Pi BMC"
type = "post"
weight = 1
draft = false
+++
---
title: "Build a Raspberry Pi BMC"
description: ""
date: 2017-02-10
tags:
- DIY
- Development
- Systems Administration
---
**NOTICE:** This project is long since obsolete. I never did complete it, and ended up just buying some IPMI-capable motherboards. I would recommend the various Pi-KVM solutions now available as much better, more robust replacements for this project.
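The same front-matter conversion repeats across the posts below: the old theme's TOML `+++` block becomes a trimmed-down YAML `---` block. A rough sketch of how the bulk of it could be scripted (an assumption; it may just as well have been done by hand, and the paths follow the new `content/en/posts/` page-bundle layout), with tags and dates then re-curated manually:

```shell
for f in content/en/posts/*/index.md; do
  awk '
    /^\+\+\+$/ && fences < 2 { print "---"; fences++; next }   # swap +++ fences for ---
    fences == 1 {                                              # inside the front matter
      if ($1 ~ /^(class|type|weight|draft|Categories)$/) next  # drop keys the new theme does not use
      sub(/^[A-Za-z]+ = /, tolower($1) ": ")                   # key = "value" -> key: "value"
    }
    { print }
  ' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```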
@@ -19,7 +18,7 @@ If you don't know what it is, the [Raspberry Pi](https://www.raspberrypi.org) is
*(Pictured: A Raspberry Pi 1 model B)*
![Raspberry Pi](/images/rpibmc/rpi-1b.jpg)
![Raspberry Pi](rpi-1b.jpg)
## The hardware
@@ -45,7 +44,7 @@ The power and reset switches are a little more complex. While you can direct the
*(Pictured: the GPIO layout for a first-generation model-B Raspberry Pi)*
![GPIO pinout](/images/rpibmc/rpi-1b-gpio.png)
![GPIO pinout](rpi-1b-gpio.png)
### Serial - USB or TTL?
@@ -59,7 +58,7 @@ The one downside of this method is the lack of proper VGA graphics support. Your
*(Pictured: the MAX3232 signal converter board)*
![MAX3232 Serial boards](/images/rpibmc/max3232-boards.jpg)
![MAX3232 Serial boards](max3232-boards.jpg)
### Cabling it up
@@ -106,11 +105,11 @@ The finished product is a small board that keeps all the cabling neat and tidy i
*(Pictured: The finished breadboard layout)*
![The finished breadboard](/images/rpibmc/breadboard-layout.jpg)
![The finished breadboard](breadboard-layout.jpg)
*(Pictured: the cabling of the Raspberry Pi BMC)*
![The finished product](/images/rpibmc/finished-product.jpg)
![The finished product](finished-product.jpg)
## The software
@@ -132,12 +131,7 @@ Finally we're able to set the host system's name (for display when logging in) v
*(Pictured: an example session with `bmc.sh`)*
![Shell example](/images/rpibmc/bmcshell-sample.png)
*(Pictured: Debian Live via the serial console)*
![Console example](/images/console.png)
![Shell example](bmcshell-sample.png)
## Conclusion
@@ -146,6 +140,6 @@ I hope you've found this post interesting and useful - if you have some IPMI-les
*(Pictured: what you might have to do on a cruise ship without a BMC!)*
![No BMC fail](/images/nobmcfail.png)
![No BMC fail](txt-from-a-ship.png)
If you have any questions or comments, shoot me an e-mail, or find me on various social media!
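Likewise, the image links in this and the following posts lose their `/images/<post>/` prefix because each post now lives in a Hugo page bundle, with its images beside `index.md`. A sketch of the corresponding move for this post (the old `static/images/` location and the bundle name are assumptions):

```shell
post=rpibmc
bundle=content/en/posts/build-a-raspberry-pi-bmc      # assumed bundle directory
mkdir -p "$bundle"
git mv static/images/"$post"/* "$bundle"/             # assumed old location under static/
sed -i "s|(/images/$post/|(|g" "$bundle/index.md"     # ![alt](/images/rpibmc/x.jpg) -> ![alt](x.jpg)
```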

@@ -1,14 +1,11 @@
+++
class = "post"
date = "2018-09-28T00:35:22-04:00"
tags = ["automation","gardening"]
title = "Automating your garden hose for fun and profit"
description = "Building a custom self-hosted MQTT water shutoff and controlling it with HomeAssistant"
type = "post"
weight = 1
+++
---
title: "Automating your garden hose for fun and profit"
description: "Building a custom self-hosted MQTT water shutoff and controlling it with HomeAssistant"
date: 2018-09-28
tags:
- DIY
- Home Automation
---
I love gardening - over the last couple of years it's become a great summer pastime for me. And after a backyard revamp, I'm planning a massive flower garden to create my own little oasis.

@@ -1,14 +1,11 @@
+++
Categories = ["Development","Systems Administration"]
Tags = ["Development","Systems Administration"]
Description = "How to build LibreofficeOnline against stock LibreOffice on Debian Stretch"
date = "2017-07-07T12:44:53-04:00"
title = "Building LibreOffice Online for Debian"
type = "post"
weight = 1
draft = false
+++
---
title: "Building LibreOffice Online for Debian"
description: "How to build LibreofficeOnline against stock LibreOffice on Debian Stretch"
date: 2017-07-07
tags:
- Debian
- Development
---
DISCLAIMER: I never did proceed with this project beyond building the packages. I can offer no helpful support regarding getting it running.

@@ -1,15 +1,11 @@
+++
class = "post"
date = "2022-12-02T00:00:00-05:00"
tags = ["support", "floss", "debian", "packaging"]
title = "Building a Debian Package 101"
description = "It's not as confusing or complicated as you think"
type = "post"
weight = 1
draft = false
+++
---
title: "Building a Debian Package 101"
description: "It's not as confusing or complicated as you think"
date: 2022-12-02
tags:
- Debian
- Development
---
One of the most oft-repeated reasons I've heard for not packaging software for Debian and its derivatives is that Debian packaging is complicated. Now, the thing is, it can be. If you look at [the manual](https://www.debian.org/doc/manuals/maint-guide/index.en.html) or a reasonably complicated program from the Debian repositories, it sure seems like it is. But I'm here today to show you that it can be easy with the right guide!

@@ -1,15 +1,11 @@
+++
class = "post"
date = "2023-01-27T00:00:00-05:00"
tags = ["thinkpad", "trackpoint", "laptop"]
title = "Fixing a Pesky Trackpoint"
description = "Stop your mouse moving randomly on a Thinkpad while preserving the buttons"
type = "post"
weight = 1
draft = false
+++
---
title: "Fixing a Pesky Trackpoint"
description: "Stop your mouse moving randomly on a Thinkpad while preserving the buttons"
date: 2023-01-27
tags:
- DIY
- Technology
---
Today's post is a fairly short one. I've used Thinkpads for quite a while, first a T450s, then a T495s. I'm a huge fan of them, even the current generations. One thing I especially like is the button layout: because of the trackpoint (a.k.a. the "nub" mouse pointer), I get an extra set of physical buttons above my trackpad, including a middle mouse button. I find these buttons absolutely invaluable to my minute-to-minute usage of my laptop.
@@ -23,14 +19,14 @@ Next, I did some searching on ways to disable just the mouse functionality while
Luckily, though, I was able to stumble upon [a random Arch Linux forums thread](https://bbs.archlinux.org/viewtopic.php?id=252636) where someone posted a hacky (but elegant) solution to this. Specifically, post #6 from the user "k395" mentions a solution he came up with that leverages the `evtest` command (Debian package `evtest`) to capture the mouse events from the device, and then uses a Perl wrapper around the `xdotool` command (Debian package `xdotool`) to manually generate the appropriate mouse button events. What a solution!
```
```shell
sudo evtest --grab /dev/input/event21 | perl -ne 'system("xdotool mouse".($2?"down ":"up ").($1-271)) if /Event:.*code (.*) \(BTN.* value (.)/'
```
*"k395"'s one-liner solution*
I had to do a bit of modification here though. First of all, I needed to determine exactly what `/dev/input/event` node was the one for my trackpoint. Luckily, running `evtest` with no arguments enters an interactive mode that lets you see what each event node maps to. Unfortunately I haven't found a way to get this programmatically, but these seem to be stable across reboots, so simply grabbing the correct value is sufficient for me. In my case, the node is `/dev/input/event6` for the `Elantech TrackPoint`.
```
```shell
$ sudo evtest
No device specified, trying to scan all of /dev/input/event*
Available devices:
@@ -60,7 +56,7 @@ But that wasn't the only issue. Unfortunately this basic implementation lacks su
This prompted me to rewrite the Perl-based one-liner into a slightly easier-to-read Python version, implementing middle button support as well. I also added a bit of debouncing to avoid rapid presses resulting in two `xdotool` events in quick succession. I then put everything together into a script which I called `disable-trackpoint`.
```
```bash
#!/bin/bash
set -o xtrace
@@ -113,7 +109,7 @@ There is one major downside here: this script does not function properly under W
Finally, I set this script to run automatically in a systemd unit file which will start it on boot and ensure it keeps trying to start until the display is initialized.
```
```systemd
[Unit]
Description = Fix trackpoint problems by disabling it
Wants = multi-user.target
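For completeness, installing and enabling a unit like the one above typically looks something like this (file and unit names are assumptions based on the script name, not taken from the post):

```shell
sudo cp disable-trackpoint /usr/local/bin/disable-trackpoint
sudo cp disable-trackpoint.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now disable-trackpoint.service
```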

@@ -1,13 +1,12 @@
+++
class = "post"
title = "Gamifying My Drumming, or: Rock Band 3 with an Alesis Strike Pro"
description = "How I connected my electronic drums to a PS3 to play Rock Band 3, with full hi-hat support"
tags = ["diy","music"]
date = "2023-05-09T01:08:12-04:00"
type = "post"
weight = 1
+++
---
title: "Gamifying My Drumming, or: Rock Band 3 with an Alesis Strike Pro"
description: "How I connected my electronic drums to a PS3 to play Rock Band 3, with full hi-hat support"
date: 2023-05-09
tags:
- DIY
- Technology
- Music
---
## The Backstory
@@ -104,20 +103,20 @@ The debugging part came in real handy as I worked to calibrate exactly what the
What would this post be without some pictures?
![Wiring of the Blackpill and MIDI Shield](/images/gamifying-my-drumming/blackpill-hat-wiring.jpg)
![Wiring of the Blackpill and MIDI Shield](blackpill-hat-wiring.jpg)
Here is a quick WIP shot of the wiring for the Blackpill and the MIDI Shield. You can see the power along the left and the various signal lines to the shield across the center. A2 and A3 are the second serial UART on the Blackpill; A5 is the button for mode control; and A6 and A7 are the LEDs for status indication. Not shown is the aforementioned heavy wire mount, which was soldered to the mechanical anchor points at the top of the board in this image. The boards are attached together with relatively thick double-sided tape to keep them solidly together while insulating them from each other.
![MIDI Rewriter module in situ, front](/images/gamifying-my-drumming/midi-rewriter-front.jpg)
![MIDI Rewriter module in situ, back](/images/gamifying-my-drumming/midi-rewriter-back.jpg)
![MIDI Rewriter module in situ, front](midi-rewriter-front.jpg)
![MIDI Rewriter module in situ, back](midi-rewriter-back.jpg)
Here are two images, front and back, of the MIDI Rewriter module in its final position with all connections. From the front, the MIDI-IN from the drum head is on the right, while the MIDI-OUT to the Pro Adapter is on the left. USB power is visible on the back, and all the cables are neatly organized using small cable ties. The two USB cables (USB-C power for the Blackpill and USB signal for the Pro Adapter) are routed over to a USB hub by the PS3 along the drum frame.
![Pro Adapter/Controller](/images/gamifying-my-drumming/pro-controller.jpg)
![Pro Adapter/Controller](pro-controller.jpg)
Here is a shot of me holding the Pro Controller. The cables are neatly routed to provide me plenty of slack to hold the controller if needed, and the MIDI cable acts as a loop to hook onto the golden-coloured 3D-printed hook attached to the side of the drum module. Also (slightly) visible underneath the drum module are my headphones that I use during "quiet hours", on another golden 3D-printed hook. This keeps everything together and nicely out of the way while I'm playing while still being accessible instantly.
![Whole Setup](/images/gamifying-my-drumming/whole-setup.jpg)
![Whole Setup](whole-setup.jpg)
And here's the entire kit setup, with the TV, speakers and PS3 (just behind the uncovered speaker) visible. The USB hub is attached to the desk just behind the Hi-Hat cymbals. The speakers are in Stereo 2x mode, with both the pair on the desk as well as a pair on the floor on either side of me (right one visible). I used coloured electrical tape to add little colour accents for the cymbals to help establish my muscle memory for the game, which took a solid week to get used to (versus the original Rock Band drums), but now I just like how it looks. The fact that the Strike Pro has 3 crashes worked out wonderfully here as I'm able to have both the normal Green Cymbal crash, along with separate "crash" versions of the Yellow and Blue cymbals for when I feel that playing authentically requires them. For toms, the rack toms are mapped as you would expect (smallest is yellow, next is blue), and the "floor" toms both are technically mapped to green but I only use the first, with the second acting as a convenient table for the remote and vocal controller. Bonus: my best result yet for Time and Motion by Rush ([a custom chart by ejthedj on C3](https://db.c3universe.com/song/time-and-motion-16247))!

@@ -0,0 +1,83 @@
+++
class = "post"
date = "2024-02-17T00:00:00-05:00"
tags = ["philosophy", "floss"]
title = "My Opinions on Free and Libre Open Source Software"
description = "Because trying to write them as replies never works"
type = "post"
weight = 1
draft = true
+++
## Why Write This?
Over the years, I've been engaged in many arguments and debates about the nature of open source, especially *vis-a-vis* funding open source. Invariably, my position is apparently unclear to others in the debate, forcing me to expend literally thousands of words clarifying minutiae and defeating strawmen.
As a leader of two projects that are inherently governed by my philosophy on Free and Libre Open Source Software (hereafter, "FLOSS"), I feel it's important to get my actual opinions out in the open and in as direct and clear a manner as I possibly can. Hence, this blog post.
## Part One: What I Believe FLOSS "means" a.k.a. "The Philosophy of FLOSS"
"FLOSS" is a term I use very specifically, because it is a term that Richard Stallman, founder of the Free Software Foundation (FSF) and writer of the GNU General-Purpose License (GPL) suggests we use.
In terms of general philosophy, I agree with Mr. Stallman on a great number of points, though I do disagree on some.
To me, "FLOSS" is about two key things, which together make up and ethos and philosophy on software development.
### FLOSS is about ensuring users have rights
This part is pretty self-explanatory, because it's what's covered explicitly in every conception of FLOSS, from the FSF's definition, to the Open Source Initiative (OSI) definition, to the Debian Free Software Guidelines (DFSG).
Personally, I adhere to the FSF and GPL's 4 freedoms, and thus I reject - for myself - non-copyleft licenses.
> “Free software” means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, “free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” We sometimes call it “libre software,” borrowing the French or Spanish word for “free” as in freedom, to show we do not mean the software is gratis.
> You may have paid money to get copies of a free program, or you may have obtained copies at no charge. But regardless of how you got your copies, you always have the freedom to copy and change the software, even to sell copies.
Now, as I'll discuss below, I have some disagreements with this definition when we begin to talk about "price". But those first two sentences are what's important here.
### FLOSS is a statement of altruism
This is the part that, I think, if it doesn't make me unique, at least makes me different from most people who write and release "open source" or other FLOSS software.
I believe that FLOSS software is a statement of altruism. It is about giving something to the world, to humanity, and to the computing community.
On its face, this doesn't seem radical, but it is, and it almost completely informs my opinions on monetization and distribution that I'll discuss below. So it's a very important point to take in: to adhere to "FLOSS philosophy" means, to me, to have altruistic motives and actions.
## Part Two: Monetizing FLOSS done Wrong with "Open-core"
With my definition of "FLOSS Philosophy" out of the way, let's discuss monetization, starting with things I see as counter to said philosophy and thus intellectually dishonest or worse.
This blog post originally started as a treatise on Open-Core software, but without the philosophical underpinning, I found it very hard to explain why I felt the way I did about it.
For anyone unaware of what this term means, "open-core" software is software that is *nominally* FLOSS, but which hides some subset of actual code features behind a proprietary license and other restrictions. For a hypothetical example, consider a grocery list software program. If the program itself is free and open source, but the ability to, say, create lists longer than 50 entries or to create lists of electronics instead of groceries, is a proprietary, paid extension, this is "open-core" software.
Open-core is one of the most pervasive FLOSS monetization options. Countless pieces of software, from GitLab to CheckMK to MongoDB, are "open-core".
And I think this model is scummy, because it fundamentally violates the second part of the philosophy. How?
1. "Open-core" software is not about altruism. Sure, it may *seem* that way because *part* of it is FLOSS. But that other part is not, and thus, the *complete* software solution is not FLOSS
2. "Open-core" software is, almost *invariably*, marketed as FLOSS, becausee the social clout of FLOSS brings in contributors and users, building an "ecosystem" that is then monitized when...
3. The lines of all pieces of "open-core" software is arbitrary. Why 50 grocery items, and not 100? Why just groceries but not electronics? Why is the line drawn there, and not somewhere else? The very existence of such a line is arbitrary, as is its positioning. Thus, the software *as a whole* is not FLOSS because of arbitrary limits on its usage.
Now, some may argue that feature X is "only for enterprises and they should pay" or something similar. This is nonsense. It is not up to the *author* to decide that, it's up to the *user*. And by presenting an arbitrary line, the philosophical idea of altruism goes out the widow. There is nothing altruistic about proprietary software, and "open-core" software is just proprietary software with FLOSS marketing.
There is one last part of "open-core" software that I find particular egregious. By its nature, "open-core" software is contrary to a volunteer ethos and community-driven development. Consinder the grocery example above and a new contributor called Jon. Jon wants to add support in for listing clothing in addition to grocery items. He wants to exend this "FLOSS" software. Will his contribution even be accepted? After all, the "FLOSS" part is just for *groceries*, and electronics are hidden behind the paywall. Will Jon's merge request languish forever, be ignored, or be outright deleted? And if it's merged to the "FLOSS" portion of the software, the line becomes even more arbitrary.
## Part Three: Monetizing FLOSS done Wrong with "CLAs"
Contributor License Agreements or CLAs are incredibly common in "corporate" FLOSS software. They're usually marketed as "protecting" the community, when in fact they do anything but. The software license protects the community; the CLA allows the company to steal contributions at an arbitrary future date by changing the license at will.
I think it should be pretty obvious to anyone who adheres to the philosophy above why this too is scummy. Contributors make contributions under a particular license, and if that license is changed in the future, particularly to a proprietary license, those contributions are stolen from the world at large and divorced from the altruistic intentions of the contributor.
Now, not every project with a CLA will necessarily switch licenses in the future. The issue with CLAs is that they give the controlling interests the *option* to do so. And for how long can those interests be trusted, especially from a profit-driven corporate entity?
## Part Four: Monetizing FLOSS done Right with Employer-sponsored FLOSS
## Part Five: My Thoughts on the Future of FLOSS

@@ -1,14 +1,11 @@
+++
class = "post"
date = "2018-09-17T1:43:25-04:00"
tags = ["devops","postgresql","patroni","haproxy"]
title = "Patroni and HAProxy Agent Checks"
description = "Using HAProxy agent checks to clean up your Patroni load balancing"
type = "post"
weight = 1
+++
---
title: "Patroni and HAProxy Agent Checks"
description: "Using HAProxy agent checks to clean up your Patroni load balancing"
date: 2018-09-17
tags:
- Development
- Systems Administration
---
[Patroni](https://github.com/zalando/patroni) is a wonderful piece of technology. In short, it [allows an administrator to configure a self-healing and self-managing replicated PostgreSQL cluster](https://patroni.readthedocs.io/en/latest/), and [quite simply at that](https://www.opsdash.com/blog/postgres-getting-started-patroni.html). With Patroni, gone are the days of having to manage your PostgreSQL replication manually, worrying about failover and failback during an outage or maintenance. Having a tool like this was paramount to supporting PostgreSQL in my own cluster, and after a lot of headaches with [repmgr](https://repmgr.org/) finding Patroni was a dream come true. If you haven't heard of it before, definitely check it out!
@@ -136,7 +133,7 @@ backend mast-pgX_psql_readwrite
And here it is in action:
![HATop output](/images/patroni-haproxy/haproxy-psql-backend.png)
![HATop output](haproxy-psql-backend.png)
### Conclusion

@@ -1,15 +1,11 @@
+++
class = "post"
date = "2020-04-18T00:00:00-04:00"
tags = ["support", "floss"]
title = "Problems in FLOSS Projects #1 - Feature: Burden or Boon?"
description = "Why it's hard to prioritize and balance advanced features with ease of use"
type = "post"
weight = 1
draft = false
+++
---
title: "Problems in FLOSS Projects #1 - Feature: Burden or Boon?"
description: "Why it's hard to prioritize and balance advanced features with ease of use"
date: 2020-04-18
tags:
- FLOSS
- Development
---
## Welcome

@@ -1,15 +1,11 @@
+++
class = "post"
date = "2020-05-31T00:00:00-04:00"
tags = ["support", "floss"]
title = "Problems in FLOSS Projects #2 - Support Vampires"
description = "How to spot and deal with people draining your community's life-force"
type = "post"
weight = 1
draft = false
+++
---
title: "Problems in FLOSS Projects #2 - Support Vampires"
description: "How to spot and deal with people draining your community's life-force"
date: 2020-05-31
tags:
- FLOSS
- Development
---
## Welcome

@@ -1,15 +1,11 @@
+++
class = "post"
date = "2022-12-07T00:00:00-04:00"
tags = ["support", "floss"]
title = "Problems in FLOSS Projects #3 - The Development Bystander Problem"
description = "The paradoxical link between user count and new developers"
type = "post"
weight = 1
draft = false
+++
---
title: "Problems in FLOSS Projects #3 - The Development Bystander Problem"
description: "The paradoxical link between user count and new developers"
date: 2022-12-07
tags:
- FLOSS
- Development
---
## Welcome

@@ -1,16 +1,16 @@
+++
date = "2022-11-12T00:00:00-05:00"
tags = ["systems administration", "pvc","ceph"]
title = "Adventures in Ceph tuning, part 2"
description = "An follow-up to my analysis of Ceph system tuning for Hyperconverged Infrastructure"
type = "post"
weight = 1
draft = false
+++
---
title: "Adventures in Ceph tuning, part 2"
description: "A follow-up to my analysis of Ceph system tuning for Hyperconverged Infrastructure"
date: 2022-11-12
tags:
- PVC
- Development
- Systems Administration
---
Last year, [I made a post](https://www.boniface.me/pvc-ceph-tuning-adventures/) about Ceph storage tuning with [my Hyperconverged Infrastructure (HCI) project PVC](https://github.com/parallelvirtualcluster/pvc), with some interesting results. At the time, I outlined how two of the nodes were a newer, more robust server configuration, but I was still stuck with one old node which was potentially throwing off my performance results and analysis. Now, I have finally acquired a 3rd server matching the spec of the other 2, bringing all 3 of my hypervisor nodes into perfect balance. Also, earlier in the year, I upgraded the CPUs of the nodes to the Intel E5-2683 V4, which provides double the cores, threads, and L3 cache of the previous 8-core E5-2620 V4's, helping further boost performance.
![Perfect Balance](/images/pvc-ceph-tuning-adventures-part-2/perfect-balance.png)
![Perfect Balance](perfect-balance.png)
With my configuration now standardized across all the nodes, I can finally revisit the performance analysis from that post and make some more useful conclusions, without mismatched CPUs getting in the way.
@@ -79,7 +79,7 @@ I would expect, with Ceph's CPU-bound nature, that each increase in the number o
Sequential bandwidth tests tend to be "ideal situation" tests, not necessarily applicable to VM workloads except in very particular circumstances. However they can be useful for seeing the absolute maximum raw throughput performance that can be attained by the storage subsystem.
![Sequential Read Bandwidth (MB/s, 4M block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-2/seq-read.png)
![Sequential Read Bandwidth (MB/s, 4M block size, 64 queue depth)](seq-read.png)
Sequential read shows a significant spike with the all-cores configuration, then a much more consistent performance curve in the limited configurations. There is a significant difference in performance between the configurations, with a margin of just over 450 MB/s between the best (all-cores) and worst (2+2+12) configurations.
@@ -89,7 +89,7 @@ System load also follows an interesting trend. The highest load on nodes 1 and 2
This is overall an interesting result and, as will be shown below, the outlier in terms of all-core configuration performance. It does not adhere to the hypothesis, and provides a "yes" answer for the first question (thus negating the second).
![Sequential Write Bandwidth (MB/s, 4M block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-2/seq-write.png)
![Sequential Write Bandwidth (MB/s, 4M block size, 64 queue depth)](seq-write.png)
Sequential write shows a much more consistent result in line with the hypothesis above, and providing a clear "no" answer for the first question and a fairly clear point of diminishing returns for the second. The overall margin between the configurations is minimal, with just 17 MB/s of performance difference between the best (2+6+8) and worst (2+2+12) configurations.
@@ -101,7 +101,7 @@ System load also follows a general upwards trend, indicating better overall CPU
Random IO tests tend to better reflect the realities of VM clusters, and thus are likely the most applicable to PVC.
![Random Read IOs (IOPS, 4k block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-2/rand-read.png)
![Random Read IOs (IOPS, 4k block size, 64 queue depth)](rand-read.png)
Random read, like sequential write above, shows a fairly consistent upward trend in line with the original hypothesis, as well as clear answers to the two questions ("no", and "any limit"). The spread here is quite significant, with the difference between the best (2+6+8) and worst (all-cores) configurations being over 4100 IOs per second; this can be quite significant when speaking of many dozens of VMs doing random data operations in parallel.
@@ -111,7 +111,7 @@ This test definitely points towards a trade-off between VM CPU allocations and m
System load follows a similar result to the sequential read tests, with more significant load on the testing node for the all-core and 2+2+12 configurations, before balancing out more in the 2+6+8 configuration.
![Random Write IOs (IOPS, 4k block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-2/rand-write.png)
![Random Write IOs (IOPS, 4k block size, 64 queue depth)](rand-write.png)
Random write again continues a general trend in line with the hypothesis and providing nearly the same answers as the sequential write tests, with a similar precipitous drop for the 2+2+12 configuration versus the all-core configuration, before rebounding and increasing with the 2+4+10 and 2+6+8 configurations. The overall margin is a very significant 7832 IOs per second between the worst (2+2+12) and best (2+6+8) tests, more than double the performance.
@@ -125,13 +125,13 @@ Latency tests show the "best case" scenarios for the time individual writes can
These tests are based on the 95th percentile latency numbers; thus, these are the times in which 95% of operations will have completed, ignoring the outlying 5%. Though not shown here, the actual FIO test results show a fairly consistent spread up until the 99.9th percentile, so this number was chosen as a "good average" for everyday performance.
![Read Latency (μs, 4k block size, 1 queue depth)](/images/pvc-ceph-tuning-adventures-part-2/latency-read.png)
![Read Latency (μs, 4k block size, 1 queue depth)](latency-read.png)
Read latency shows a consistent downwards trend like most of the tests so far, with a relatively large drop from the all-cores configuration to the 2+2+12 limited configuration, followed by steady decreases through each subsequent increase in cores. This does seem to indicate a clear benefit towards limiting CPUs, though like the random read tests, the point of diminishing returns comes fairly quickly.
System load also follows another hockey-stick-converging pattern, showing that CPU utilization is definitely correlated with the lower latency as the number of dedicated cores increases.
![Write Latency (μs, 4k block size, 1 queue depth)](/images/pvc-ceph-tuning-adventures-part-2/latency-write.png)
![Write Latency (μs, 4k block size, 1 queue depth)](latency-write.png)
Write latency shows another result consistent with the other write tests, where the 2+2+12 configuration fares (slightly) worse than the all-cores configuration before rebounding. Here the latency difference becomes significant, with the spread of 252 μs being enough to become noticeable in high-performance applications. There is also no clear point of diminishing returns, just like the other write tests.

@@ -1,12 +1,12 @@
+++
date = "2023-07-29T00:00:00-05:00"
tags = ["systems administration", "pvc","ceph"]
title = "Adventures in Ceph tuning, part 3"
description = "A second follow-up to my analysis of Ceph system tuning for Hyperconverged Infrastructure"
type = "post"
weight = 1
draft = false
+++
---
title: "Adventures in Ceph tuning, part 3"
description: "A second follow-up to my analysis of Ceph system tuning for Hyperconverged Infrastructure"
date: 2023-07-29
tags:
- PVC
- Development
- Systems Administration
---
In 2021, [I made a post](https://www.boniface.me/pvc-ceph-tuning-adventures/) about Ceph storage tuning with [my Hyperconverged Infrastructure (HCI) project PVC](https://github.com/parallelvirtualcluster/pvc), and in 2022 [I wrote a follow-up](https://www.boniface.me/pvc-ceph-tuning-adventures-part-2/) clarifying the test methodology with an upgraded hardware specification.
@@ -85,7 +85,7 @@ Similarly, our hypothesis - that more dedicated OSD CPUs is better - and open qu
Sequential bandwidth tests tend to be "ideal situation" tests, not necessarily applicable to VM workloads except in very particular circumstances. However they can be useful for seeing the absolute maximum raw throughput performance that can be attained by the storage subsystem.
![Sequential Read Bandwidth (MB/s, 4M block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-3/seq-read.png)
![Sequential Read Bandwidth (MB/s, 4M block size, 64 queue depth)](seq-read.png)
Sequential read shows a significant difference with the NVMe SSDs and newer CPUs versus the SATA SSDs in the previous post, beyond just the near doubling of speed thanks to the higher performance of the NVMe drives. In that post, no-limit sequential read was by far the highest, and this was an outlier result.
@@ -97,7 +97,7 @@ CPU load does show an interesting drop with the 4+3+25 configuration before jump
CPU load does show an interesting drop with the 4+3+25 configuration before jumping back up in the 4+4+24 configuration, however all nodes track each other, and the node with the widest swing (node3) was not a coordinator in any of the tests, so this is likely due to the VMs rather than the OSD processes.
![Sequential Write Bandwidth (MB/s, 4M block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-3/seq-write.png)
![Sequential Write Bandwidth (MB/s, 4M block size, 64 queue depth)](seq-write.png)
Sequential write shows a similar stair-step pattern, though more pronounced. The no-limit performance is actually the second-best here, which is an interesting result, though again the results are all within about 2% of each other, nearly within the margin of error.
@@ -107,7 +107,7 @@ System load follows the same trend as did sequential reads, with a drop off for
Finally, in watching the live results, there was full saturation of the 10GbE NIC during this test:
![Sequential Write Network Bandwidth](/images/pvc-ceph-tuning-adventures-part-3/seq-write-network.png)
![Sequential Write Network Bandwidth](seq-write-network.png)
This is completely expected, since our configuration uses a `copies=3` replication mode, so we should expect about 50% of the performance of the sequential reads, since every write is replicated over the network twice. It definitely proves that our limitation here is not the drives but the network, but also shows that this is not completely linear, since instead of 50% we're actually seeing about 70% of the maximum network bandwidth in actual performance.
@@ -115,7 +115,7 @@ This is completely expected, since our configuration uses a `copies=3` replicati
Random IO tests tend to better reflect the realities of VM clusters, and thus are likely the most applicable to PVC.
![Random Read IOs (IOPS, 4k block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-3/rand-read.png)
![Random Read IOs (IOPS, 4k block size, 64 queue depth)](rand-read.png)
Random read shows a similar trend as sequential reads, and one completely in-line with our hypothesis. There is definitely a more pronounced trend here though, with a clear increase in performance of about 8% between the worst (4+1+27) and best (4+8+24) results.
@@ -123,7 +123,7 @@ However this test shows yet another stair-step pattern where the 4+2+26 configur
System load continues to show almost no correlation at all with performance, and thus can be ignored.
![Random Write IOs (IOPS, 4k block size, 64 queue depth)](/images/pvc-ceph-tuning-adventures-part-3/rand-write.png)
![Random Write IOs (IOPS, 4k block size, 64 queue depth)](rand-write.png)
Random writes bring back the strange anomaly that we saw with sequential reads in the previous post. Namely, that for some reason, the no-limit configuration performs significantly better than all limits. After that, the performance seems to scale roughly linearly with each increase in CPU core count, exactly as was seen with the SATA SSDs in the previous post.
@@ -139,11 +139,11 @@ Latency tests show the "best case" scenarios for the time individual writes can
These tests are based on the 95th percentile latency numbers; thus, these are the times in which 95% of operations will have completed, ignoring the outlying 5%. Though not shown here, the actual FIO test results show a fairly consistent spread up until the 99.9th percentile, so this number was chosen as a "good average" for everyday performance.
![Read Latency (μs, 4k block size, 1 queue depth)](/images/pvc-ceph-tuning-adventures-part-3/latency-read.png)
![Read Latency (μs, 4k block size, 1 queue depth)](latency-read.png)
Read latency shows a consistent downwards trend throughout the configurations, though with the 4+4+24 and 4+8+24 results being outliers. However the latency here is very good, only 1/4 of the latency of the SATA SSDs in the previous post, and the results are all so low that they are not likely to be particularly impactful. We're really pushing raw network latency and packet processing overheads with these results.
![Write Latency (μs, 4k block size, 1 queue depth)](/images/pvc-ceph-tuning-adventures-part-3/latency-write.png)
![Write Latency (μs, 4k block size, 1 queue depth)](latency-write.png)
Write latency also shows a major improvement over SATA SSDs, being only 1/5 of those results. It also, like the read latency, shows a fairly limited spread in results, though with a similar uptick from 4+3+25 to 4+4+24 to 4+8+20. Like read latency, I don't believe these numbers are significant enough to show a major benefit to the CPU limits.

@@ -1,12 +1,12 @@
+++
date = "2021-10-01T00:34:00-04:00"
tags = ["systems administration", "pvc","ceph"]
title = "Adventures in Ceph tuning"
description = "An analysis of Ceph system tuning for Hyperconverged Infrastructure"
type = "post"
weight = 1
draft = false
+++
---
title: "Adventures in Ceph tuning"
description: "An analysis of Ceph system tuning for Hyperconverged Infrastructure"
date: 2021-10-01
tags:
- PVC
- Development
- Systems Administration
---
In early 2018, I started work on [my Hyperconverged Infrastructure (HCI) project PVC](https://github.com/parallelvirtualcluster/pvc). Very quickly, I decided to use Ceph as the storage backend, for a number of reasons, including its built-in host-level redundancy, self-managing and self-healing functionality, and general good performance. With PVC now being used in numerous production clusters, I decided to tackle optimization. This turned out to be a bit of a rabbit hole, which I will detail below. Happy reading.
@@ -76,22 +76,22 @@ Each test, in each configuration mode, was run 3 times, with the results present
These two tests measure raw sequential throughput at a very large block size and relatively high queue depth.
![Sequential Read Bandwidth, 4M block size, 64 queue depth](/images/pvc-ceph-tuning-adventures/seq-bw-4m-read.png)
![Sequential Write Bandwidth, 4M block size, 64 queue depth](/images/pvc-ceph-tuning-adventures/seq-bw-4m-write.png)
![Sequential Read Bandwidth, 4M block size, 64 queue depth](seq-bw-4m-read.png)
![Sequential Write Bandwidth, 4M block size, 64 queue depth](seq-bw-4m-write.png)
#### Test Suite 2: Random Read/Write IOPS, 4k block size, 64-depth queue
These two tests measure IOPS performance at a very small block size and relatively high queue depth.
![Random Read IOPS, 4k block size, 64 queue depth](/images/pvc-ceph-tuning-adventures/random-iops-4k-read.png)
![Random Write IOPS, 4k block size, 64 queue depth](/images/pvc-ceph-tuning-adventures/random-iops-4k-write.png)
![Random Read IOPS, 4k block size, 64 queue depth](random-iops-4k-read.png)
![Random Write IOPS, 4k block size, 64 queue depth](random-iops-4k-write.png)
#### Test Suite 3: Random Read/Write Latency, 4k block size, 1-depth queue
These two tests measure average request latency at a very small block size and single queue depth.
![Random Read Latency, 4k block size, 1 queue depth](/images/pvc-ceph-tuning-adventures/random-latency-4k-1q-read.png)
![Random Write Latency, 4k block size, 1 queue depth](/images/pvc-ceph-tuning-adventures/random-latency-4k-1q-write.png)
![Random Read Latency, 4k block size, 1 queue depth](random-latency-4k-1q-read.png)
![Random Write Latency, 4k block size, 1 queue depth](random-latency-4k-1q-write.png)
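For reference, these three suites correspond roughly to fio invocations like the following (a hedged sketch: block sizes and queue depths are taken from the text above, but the remaining flags, runtime, and test target are assumptions, since the PVC benchmark tooling drives fio itself):

```shell
TARGET=/path/to/test/volume   # placeholder test target

# Suite 1: sequential bandwidth, 4M blocks, queue depth 64
fio --name=seq-read  --rw=read  --bs=4M --iodepth=64 --ioengine=libaio --direct=1 --filename=$TARGET --size=4G --runtime=60 --time_based
fio --name=seq-write --rw=write --bs=4M --iodepth=64 --ioengine=libaio --direct=1 --filename=$TARGET --size=4G --runtime=60 --time_based

# Suite 2: random IOPS, 4k blocks, queue depth 64
fio --name=rand-read  --rw=randread  --bs=4k --iodepth=64 --ioengine=libaio --direct=1 --filename=$TARGET --size=4G --runtime=60 --time_based
fio --name=rand-write --rw=randwrite --bs=4k --iodepth=64 --ioengine=libaio --direct=1 --filename=$TARGET --size=4G --runtime=60 --time_based

# Suite 3: latency, 4k blocks, queue depth 1
fio --name=lat-read  --rw=randread  --bs=4k --iodepth=1 --ioengine=libaio --direct=1 --filename=$TARGET --size=4G --runtime=60 --time_based
fio --name=lat-write --rw=randwrite --bs=4k --iodepth=1 --ioengine=libaio --direct=1 --filename=$TARGET --size=4G --runtime=60 --time_based
```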
## Benchmark Analysis

@@ -1,14 +1,11 @@
+++
class = "post"
date = "2018-03-12T13:00:00-05:00"
tags = ["diy","automation","buildlog"]
title = "Self-Hosted Voice Control (for Paranoids)"
description = "Building a self-hosted voice interface for HomeAssistant"
type = "post"
weight = 1
+++
---
title: "Self-Hosted Voice Control (for Paranoids)"
description: "Building a self-hosted voice interface for HomeAssistant"
date: 2018-03-12
tags:
- DIY
- Technology
---
#### _Building a self-hosted voice interface for HomeAssistant_

@@ -1,18 +1,15 @@
+++
class = "post"
date = "2024-04-23T00:00:00-04:00"
tags = ["automation", "DIY"]
title = "The SuperSensor: Your all-in-one Home Assistant satellite"
description = "My own take on the multi-function Home Assistant sensor and voice hub"
type = "post"
weight = 1
+++
---
title: "The SuperSensor: Your all-in-one Home Assistant satellite"
description: "My own take on the multi-function Home Assistant sensor and voice hub"
date: 2024-04-23
tags:
- DIY
- Home Automation
---
## The Motivations
I've been interested in voice-based home automation for many years now; in fact, it was [one of the first posts on this blog](/self-hosted-voice-control/). For many years now I've used it to control the lights in my bedroom, for two major reasons: first, the lights I was using were not hardwired, and I had many of them, and thus many little switches on cords; second, I wanted to be able to switch things on and off from wherever I was - be it my bed, my couch, or just outside the room as I was leaving - without having to fiddle with all those switches.
I've been interested in voice-based home automation for many years now; in fact, it was [one of the first posts on this blog](/posts/self-hosted-voice-control/). For many years now I've used it to control the lights in my bedroom, for two major reasons: first, the lights I was using were not hardwired, and I had many of them, and thus many little switches on cords; second, I wanted to be able to switch things on and off from wherever I was - be it my bed, my couch, or just outside the room as I was leaving - without having to fiddle with all those switches.
So I went with [Home Assistant](https://www.home-assistant.io/), then and now the *de facto* FLOSS standard for home automation. I bought a few smart plugs, from various manufacturers throughout the years (currently settled very nicely on [Athom ESPHome-based ones](https://www.athom.tech/)). And I set up [Kalliope](https://github.com/kalliope-project/kalliope) on a Raspberry Pi to do it. And it did work wonderfully for a very long time, with some fits and starts at times.
@@ -92,35 +89,35 @@ What would a DIY post be without pictures? Here's a few!
Here's an overall shot showing both a completed unit and the breakout of all the parts.
![Parts](/images/supersensor/parts.jpg)
![Parts](parts.jpg)
Here is the blank PCB, from both the front and back. As mentioned above this is a prototype board, so while there are some differences from the final PCB, the overall layout is correct.
![Blank PCB Front](/images/supersensor/pcb-front.jpg)
![Blank PCB Back](/images/supersensor/pcb-back.jpg)
![Blank PCB Front](pcb-front.jpg)
![Blank PCB Back](pcb-back.jpg)
As part of testing all the sensors, I made a socketed version. While this makes the unit extremely thick, it might be a good idea to build one of these to test all your sensors before proceeding with the meticulous soldering of all the components to the boards, because de-soldering them later is basically impossible (godspeed to the BME680 sensor that did not make it).
![Socketed PCB](/images/supersensor/socketed-pcb.jpg)
![Socketed PCB](socketed-pcb.jpg)
Here is a completed board, from the front, back, back without the ESP32 installed, and short side. The ESP32 is socketed on the final boards, both to provide good airflow and to allow quick swapping of the "brains" of individual units if needed, but all the sensors are soldered directly to the board to keep the profile low.
![Completed Board Front](/images/supersensor/front.jpg)
![Completed Board Back](/images/supersensor/back.jpg)
![Completed Board Back w/o ESP](/images/supersensor/back-no-esp.jpg)
![Completed Board Side](/images/supersensor/side.jpg)
![Completed Board Front](front.jpg)
![Completed Board Back](back.jpg)
![Completed Board Back w/o ESP](back-no-esp.jpg)
![Completed Board Side](side.jpg)
Here is one of the boards in its final mounted location, angled to provide perfect coverage of my garage. Due to where it sits, I had to bodge a makeshift antenna extension onto this one to get a decent WiFi connection, but it works well and this hasn't been needed for any of my other ones.
![Mounted Board](/images/supersensor/mounted.jpg)
![Mounted Board](mounted.jpg)
Here is all the information and configuration the SuperSensor provides in Home Assistant.
![Home Assistant Dashboard](/images/supersensor/dashboard.png)
![Home Assistant Dashboard](dashboard.png)
That is quite a lot of information, so in my actual dashboards I usually only show the most relevant parts for that particular use-case, like this one for my garage sensor.
![Room Dashboard](/images/supersensor/room.png)
![Room Dashboard](room.png)
Finally, here is a video demonstration of the voice control in action. This shows the LED feedback colours for listening (blue), processing (cyan), and both positive (green) and negative (red) responses in lieu of a voice response.

hugo.toml (new file, 151 lines)

@@ -0,0 +1,151 @@
title = "Joshua Boniface, sysadmin"
baseURL = 'https://www.boniface.me'
# This is what goes in <html lang="">
languageCode = 'en-ca'
# This defines how dates are formatted
defaultContentLanguage = "en-ca"
# Enable emojis globally
enableEmoji = true
ignoreErrors = ["additional-script-loading-error"] # ignore error of loading additional scripts.
# traditional way: theme component resides in directory 'themes'
theme = "hugo-blog-awesome"
# modern way: pull in theme component as hugo module
#[module]
# Uncomment the next line to build and serve using local theme clone declared in the named Hugo workspace:
# workspace = "hugo-blog-awesome.work"
#[module.hugoVersion]
#extended = true
#min = "0.87.0"
#[[module.imports]]
#path = "github.com/hugo-sid/hugo-blog-awesome"
#disable = false
[services]
# To enable Google Analytics 4 (gtag.js) provide G-MEASUREMENT_ID below.
# To disable Google Analytics, simply leave the field empty or remove the next two lines
# [services.googleAnalytics]
# id = '' # G-MEASUREMENT_ID
# To enable Disqus comments, provide Disqus Shortname below.
# To disable Disqus comments, simply leave the field empty or remove the next two lines
# [services.disqus]
# shortname = ''
# set markup.highlight.noClasses=false to enable code highlight
[markup]
[markup.highlight]
noClasses = false
[markup.goldmark.renderer]
unsafe = true
[markup.tableOfContents]
startLevel = 2 # ToC starts from H2
endLevel = 4 # ToC ends at H4
ordered = false # generates <ul> instead of <ol>
############################## English language ################################
[Languages.en-ca]
languageName = "English"
languageCode = "en-ca"
contentDir = "content/en"
weight = 1
[Languages.en-ca.menu]
[[Languages.en-ca.menu.main]]
# The page reference (pageRef) is useful for menu highlighting
# When pageRef is set, setting `url` is optional; it will be used as a fallback if the page is not found.
pageRef="/"
name = 'Home'
url = '/'
weight = 10
[[Languages.en-ca.menu.main]]
pageRef="posts"
name = 'Posts'
url = '/posts/'
weight = 20
[[Languages.en-ca.menu.main]]
pageRef="cv"
name = 'CV'
url = '/cv/'
weight = 30
[[Languages.en-ca.menu.main]]
pageRef="hardware"
name = 'Hardware'
url = '/hardware/'
weight = 40
[[Languages.en-ca.menu.main]]
pageRef="legal"
name = 'Legal'
url = '/legal/'
weight = 40
[Languages.en-ca.params]
sitename = "Joshua Boniface, sysadmin"
defaultColor = "dark" # set color mode: dark, light, auto
# Setting it to 'auto' applies the color scheme based on the visitor's device color preference. If you don't specify anything, ignore this parameter, or leave it blank,
# the default value is set to 'auto'.
# You can take a look at layouts/index.html for more information.
description = "A blog about tech and shiny things; self-hosted and FLOSS"
mainSections = ['posts']
toc = true # set to false to disable table of contents 'globally'
tocOpen = false # set to true to open table of contents by default
goToTop = true # set to false to disable 'go to top' button
#additionalScripts = ['js/custom.js', 'js/custom-2.js']
# Will try to load 'assets/js/custom.js' and 'assets/js/custom-2.js'.
# Your custom scripts will be concatenated to one file `custom.js`.
# When building for production it will be minified.
# The file `custom.js` is loaded on each page (before body tag ends).
dateFormat = "" # date format used to show dates on various pages. If nothing is specified, then "2 Jan 2006" format is used.
# See https://gohugo.io/functions/format/#hugo-date-and-time-templating-reference for available date formats.
rssFeedDescription = "summary" # available options: 1) summary 2) full
# summary - includes a short summary of the blog post in the RSS feed. Generated using Hugo .Summary .
# full - includes full blog post in the RSS feed. Generated using Hugo .Content .
# By default (or if nothing is specified), summary is used.
[Languages.en-ca.params.author]
avatar = "/images/joshua.jpg" # put the file in assets folder; also ensure that image has same height and width
# Note: image is not rendered if the resource(avatar image) is not found. No error is displayed.
intro = "Joshua Boniface, sysadmin"
name = "Joshua M. Boniface"
description = "A blog about tech and shiny things; self-hosted and FLOSS"
# Allow to override webmanifest options
[Languages.en-ca.params.webmanifest]
name = "sitename" # will use "params.sitename" or "title" by default
short_name = "sitename" # same as name
start_url = "/" # will use homepage url by default
theme_color = "#434648" # default is "#434648" (base color of text). Also will override html `<meta name="theme-color" />`
background_color = "#fff" # by default depend on "params.defaultColor" for "light" or "auto" will be set to "#fff" for dark will be "#131418" (color of dark mode background)
display = "standalone"
# Allow to override `browserconfig.xml` params (configuration for windows embedded browsers)
[params.browserconfig]
TileColor = "#2d89ef" # default windows 10 blue tile color
[[params.socialIcons]]
name = "github"
url = "https://github.com/joshuaboniface"
[[params.socialIcons]]
name = "linkedin"
url = "https://www.linkedin.com/in/joshuamboniface"
[[params.socialIcons]]
name = "youtube"
url = "https://www.youtube.com/@joshuaboniface"
[[params.socialIcons]]
name = "mastodon"
url = "https://social.bonifacelabs.ca/@joshuaboniface"
[[params.socialIcons]]
name = "reddit"
url = "https://old.reddit.com/u/djbon2112"
[[params.socialIcons]]
name = "Rss"
url = "/index.xml"

@@ -0,0 +1 @@
Subproject commit 1cff415a9499168f8b16ba521fc497399c72f460

@@ -1 +0,0 @@
Subproject commit a5c338c6998dc6de769c176b8fb2579d6afd9158