<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <updated>2026-03-30T22:55:17+03:00</updated>
    <title>foo.zone feed</title>
    <subtitle>To be in the .zone!</subtitle>
    <link href="https://foo.zone/gemfeed/atom.xml" rel="self" />
    <link href="https://foo.zone/" />
    <id>https://foo.zone/</id>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</title>
        <link href="https://foo.zone/gemfeed/2026-04-02-f3s-kubernetes-with-freebsd-part-9.html" />
        <id>https://foo.zone/gemfeed/2026-04-02-f3s-kubernetes-with-freebsd-part-9.html</id>
        <updated>2026-04-02T00:00:00+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the 9th post in the f3s series about my self-hosting home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-9-gitops-with-argocd'>f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</h1><br />
<br />
<span class='quote'>Published at 2026-04-02T00:00:00+03:00</span><br />
<br />
<span>This is the 9th post in the f3s series about my self-hosting home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD (You are currently reading this)</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-9/argocd-app-tree.png'><img alt='ArgoCD Application Resource Tree' title='ArgoCD Application Resource Tree' src='./f3s-kubernetes-with-freebsd-part-9/argocd-app-tree.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-9-gitops-with-argocd'>f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#gitops-in-a-nutshell'>GitOps in a Nutshell</a></li>
<li>⇢ <a href='#argocd'>ArgoCD</a></li>
<li>⇢ <a href='#why-bother-for-a-home-lab'>Why Bother for a Home Lab?</a></li>
<li>⇢ <a href='#deploying-argocd'>Deploying ArgoCD</a></li>
<li>⇢ ⇢ <a href='#accessing-argocd'>Accessing ArgoCD</a></li>
<li>⇢ <a href='#in-cluster-git-server'>In-Cluster Git Server</a></li>
<li>⇢ <a href='#repository-organization'>Repository Organization</a></li>
<li>⇢ <a href='#migrating-an-app-miniflux-as-example'>Migrating an App: Miniflux as Example</a></li>
<li>⇢ ⇢ <a href='#migration-order'>Migration Order</a></li>
<li>⇢ <a href='#complex-migration-prometheus-multi-source'>Complex Migration: Prometheus Multi-Source</a></li>
<li>⇢ ⇢ <a href='#sync-waves'>Sync Waves</a></li>
<li>⇢ <a href='#the-result'>The Result</a></li>
<li>⇢ <a href='#what-changed-day-to-day'>What Changed Day-to-Day</a></li>
<li>⇢ <a href='#challenges-along-the-way'>Challenges Along the Way</a></li>
<li>⇢ ⇢ <a href='#helm-release-adoption'>Helm Release Adoption</a></li>
<li>⇢ ⇢ <a href='#persistentvolumes'>PersistentVolumes</a></li>
<li>⇢ ⇢ <a href='#secrets'>Secrets</a></li>
<li>⇢ ⇢ <a href='#grafana-not-reloading'>Grafana Not Reloading</a></li>
<li>⇢ ⇢ <a href='#prometheus-multi-source-ordering'>Prometheus Multi-Source Ordering</a></li>
<li>⇢ <a href='#wrapping-up'>Wrapping Up</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>In previous posts, I deployed applications to the k3s cluster using Helm charts and Justfiles--running <span class='inlinecode'>just install</span> or <span class='inlinecode'>just upgrade</span> to push changes to the cluster. That worked, but it had some drawbacks:</span><br />
<br />
<ul>
<li>No single source of truth--cluster state depends on which commands were run and when</li>
<li>Every change requires manually running commands</li>
<li>No easy way to tell if the cluster drifted from the desired config</li>
<li>Rolling back means re-running old Helm commands</li>
<li>No audit trail for who changed what</li>
</ul><br />
<span>So I migrated everything to GitOps with ArgoCD. Now the Git repo is the single source of truth, and ArgoCD keeps the cluster in sync automatically.</span><br />
<br />
<h2 style='display: inline' id='gitops-in-a-nutshell'>GitOps in a Nutshell</h2><br />
<br />
<span>Describe your entire desired state in Git, and let an agent in the cluster pull that state and reconcile it continuously. Every change goes through a commit, so you get version history, collaboration, and rollback for free.</span><br />
<br />
<span>For Kubernetes specifically:</span><br />
<br />
<ul>
<li>All manifests, Helm charts, and config live in a Git repo</li>
<li>ArgoCD watches that repo</li>
<li>Push a change, ArgoCD applies it</li>
<li>If someone manually tweaks something in the cluster, ArgoCD detects the drift and reverts it</li>
</ul><br />
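<span>Conceptually, the reconcile loop is little more than the following sketch. ArgoCD implements this with its own diffing engine rather than shelling out to kubectl, so treat it as an illustration, not the real mechanism:</span><br />
<br />
<pre># Illustrative sketch only -- not how ArgoCD is actually implemented
while true; do
    git -C /desired-state pull --quiet            # fetch desired state from Git
    if ! kubectl diff -f /desired-state/ &gt;/dev/null 2&gt;&amp;1; then
        kubectl apply -f /desired-state/          # reconcile any drift
    fi
    sleep 180                                     # poll every few minutes
done
</pre>
<br />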
<h2 style='display: inline' id='argocd'>ArgoCD</h2><br />
<br />
<span>ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It runs as a controller in the cluster, constantly comparing what&#39;s running against what&#39;s in Git.</span><br />
<br />
<a class='textlink' href='https://argo-cd.readthedocs.io'>ArgoCD Documentation</a><br />
<br />
<span>The features I care about most for f3s:</span><br />
<br />
<ul>
<li>Automatic sync--monitors Git and applies changes to the cluster</li>
<li>Application CRDs--each app is a Kubernetes custom resource</li>
<li>Health checks--knows whether an app is healthy or degraded</li>
<li>Web UI--visual overview of all applications and their sync status</li>
<li>Sync waves and hooks--control deployment order and run post-deploy jobs</li>
<li>Multi-source--combine upstream Helm charts with custom manifests</li>
</ul><br />
<h2 style='display: inline' id='why-bother-for-a-home-lab'>Why Bother for a Home Lab?</h2><br />
<br />
<span>Honestly, the biggest reason is disaster recovery. If the cluster dies, I can:</span><br />
<br />
<ul>
<li>Bootstrap a fresh k3s cluster</li>
<li>Install ArgoCD</li>
<li>Point it at the Git repo</li>
<li>Everything deploys automatically</li>
</ul><br />
<span>That&#39;s it. No "let me check my shell history to remember how I set this up."</span><br />
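<br />
<span>The steps above, sketched as commands--the k3s one-liner is the upstream default installer, and the recursive apply over <span class='inlinecode'>argocd-apps/</span> is an assumption about how the Application manifests would be registered:</span><br />
<br />
<pre>$ curl -sfL https://get.k3s.io | sh -              # fresh k3s cluster
$ cd conf/f3s/argocd &amp;&amp; just install               # bootstrap ArgoCD itself
$ kubectl apply -f ../argocd-apps/ --recursive     # register all Applications
# ArgoCD pulls everything else from Git
</pre>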
<br />
<span>It&#39;s also a great way to learn. Setting up GitOps for real--even on a small cluster--teaches you things you won&#39;t pick up from tutorials alone. Debugging sync issues, figuring out sync waves, dealing with secrets management--all stuff that&#39;s directly applicable at work too.</span><br />
<br />
<span>Beyond that: push to Git, things deploy. No SSH&#39;ing to a workstation to run Helm commands. And if I manually tweak something while debugging and forget about it, ArgoCD reverts it back to the desired state. That&#39;s happened more than once.</span><br />
<br />
<h2 style='display: inline' id='deploying-argocd'>Deploying ArgoCD</h2><br />
<br />
<span>ArgoCD manages everything else via GitOps, but ArgoCD itself needs a bootstrap. Chicken-and-egg problem.</span><br />
<br />
<span>The installation lives in the config repo:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/argocd'>codeberg.org/snonux/conf/f3s/argocd</a><br />
<br />
<span>I deployed it using Helm via a Justfile:</span><br />
<br />
<pre>$ cd conf/f3s/argocd
$ just install
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
kubectl create namespace cicd
kubectl apply -f persistent-volumes.yaml
helm install argocd argo/argo-cd --namespace cicd -f values.yaml
kubectl apply -f ingress.yaml
</pre>
<br />
<span>Some highlights from <span class='inlinecode'>values.yaml</span>:</span><br />
<br />
<span>Persistent storage for the repo-server so cloned Git repos survive pod restarts:</span><br />
<br />
<pre>
repoServer:
  volumes:
    - name: repo-server-data
      persistentVolumeClaim:
        claimName: argocd-repo-server-pvc
  volumeMounts:
    - name: repo-server-data
      mountPath: /home/argocd/repo-cache
  env:
    - name: XDG_CACHE_HOME
      value: /home/argocd/repo-cache
</pre>
<br />
<span>Server runs in insecure mode since TLS is terminated by the OpenBSD edge relays (same pattern as all other f3s services):</span><br />
<br />
<pre>
server:
  insecure: true
configs:
  params:
    server.insecure: true
</pre>
<br />
<span>Dex (SSO) and notifications are disabled--overkill for a single-user home lab:</span><br />
<br />
<pre>
dex:
  enabled: false
notifications:
  enabled: false
</pre>
<br />
<span>The admin password is auto-generated on first install and stored in <span class='inlinecode'>argocd-initial-admin-secret</span>. It&#39;s preserved across Helm upgrades, so no manual secret creation needed:</span><br />
<br />
<pre>$ just get-password
<i><font color="silver"># Reads from argocd-initial-admin-secret</font></i>
</pre>
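<br />
<span>The recipe itself isn&#39;t shown here, but reading that Secret by hand is the usual one-liner:</span><br />
<br />
<pre>$ kubectl -n cicd get secret argocd-initial-admin-secret \
    -o jsonpath='{.data.password}' | base64 -d
</pre>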
<br />
<h3 style='display: inline' id='accessing-argocd'>Accessing ArgoCD</h3><br />
<br />
<span>After deployment, ArgoCD runs in the <span class='inlinecode'>cicd</span> namespace:</span><br />
<br />
<pre>$ kubectl get pods -n cicd
NAME                                                READY   STATUS    RESTARTS   AGE
argocd-application-controller-<font color="#000000">0</font>                     <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          45d
argocd-applicationset-controller-66d6b9b8f4-vhm9k   <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          45d
argocd-redis-77b8d6c6d4-mz9hg                       <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          45d
argocd-repo-server-5f98f77b97-8xtcq                 <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          45d
argocd-server-6b9c4b4f8d-kxw7p                      <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          45d
</pre>
<br />
<a href='./f3s-kubernetes-with-freebsd-part-9/argocd-login.png'><img alt='ArgoCD login page' title='ArgoCD login page' src='./f3s-kubernetes-with-freebsd-part-9/argocd-login.png' /></a><br />
<br />
<span>The ingress exposes both a WAN and LAN endpoint:</span><br />
<br />
<pre>
# WAN access (via OpenBSD relayd)
- host: argocd.f3s.foo.zone
# LAN access (via FreeBSD CARP VIP, with TLS)
- host: argocd.f3s.lan.foo.zone
</pre>
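<br />
<span>The full manifest isn&#39;t reproduced here; a minimal sketch of one such rule (the backend service name and port follow the Helm chart defaults, but are assumptions as far as this setup goes):</span><br />
<br />
<pre>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: cicd
spec:
  rules:
    - host: argocd.f3s.foo.zone          # WAN; TLS terminated at the edge
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server      # chart's default service name
                port:
                  number: 80             # plain HTTP (server.insecure: true)
</pre>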
<br />
<h2 style='display: inline' id='in-cluster-git-server'>In-Cluster Git Server</h2><br />
<br />
<span>I didn&#39;t want ArgoCD pulling from Codeberg over the internet every time it checks for changes. If Codeberg is down (or my internet is), the cluster can&#39;t reconcile. So I set up a Git server inside the cluster itself.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/commit/190473b/f3s/git-server'>codeberg.org/snonux/conf/f3s/git-server (at 190473b)</a><br />
<br />
<span>The git-server runs as a single pod in the <span class='inlinecode'>cicd</span> namespace with two containers sharing a PVC:</span><br />
<br />
<ul>
<li>An SSH git server (Alpine + OpenSSH + git-shell) for pushing changes from my laptop</li>
<li>A CGit web UI with git-http-backend (nginx + fcgiwrap) for browsing repos and HTTP clones</li>
</ul><br />
<span>ArgoCD uses the HTTP backend to clone repos. Most Application manifests point at:</span><br />
<br />
<pre>
http://git-server.cicd.svc.cluster.local/conf.git
</pre>
<br />
<span>For pushing, I use SSH via a NodePort (30022). The git user is locked down to git-shell--no actual shell access. SSH keys are managed through a Kubernetes Secret.</span><br />
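<br />
<span>On the laptop side, an entry in <span class='inlinecode'>~/.ssh/config</span> keeps the remote URL short. The host name and key path below are assumptions, but something like this is what makes an <span class='inlinecode'>f3s-git</span> alias work:</span><br />
<br />
<pre># ~/.ssh/config (illustrative)
Host f3s-git
    HostName n1.f3s.lan.foo.zone   # any k3s node reachable on the LAN (assumed)
    Port 30022                     # git-server NodePort
    User git
    IdentityFile ~/.ssh/id_ed25519
</pre>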
<br />
<span>There&#39;s a chicken-and-egg situation here. The git-server&#39;s own ArgoCD Application manifest points at Codeberg (not at itself), since ArgoCD needs to bootstrap the git-server before it can use it:</span><br />
<br />
<pre>
# argocd-apps/cicd/git-server.yaml
source:
  repoURL: https://codeberg.org/snonux/conf.git
  targetRevision: master
  path: f3s/git-server/helm-chart
</pre>
<br />
<span>Once the pod is up, all other apps use the in-cluster URL. The dependency chain is: Codeberg -&gt; git-server -&gt; everything else.</span><br />
<br />
<span>The repo storage lives on NFS. Initial setup was just cloning the Codeberg repo as a bare repo into the NFS volume, then pointing my laptop&#39;s git remote at the NodePort:</span><br />
<br />
<pre>$ git remote add f3s f3s-git:/repos/conf.git
$ git push f3s master
</pre>
<br />
<span>ArgoCD detects the change within a few minutes and syncs. No internet required. The whole thing is intentionally minimal--no database, no accounts, no webhooks. Just git over SSH for writes and HTTP for reads.</span><br />
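<br />
<span>The "within a few minutes" comes from ArgoCD&#39;s default repository polling interval of 180 seconds, which is tunable via the <span class='inlinecode'>argocd-cm</span> ConfigMap:</span><br />
<br />
<pre>apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: cicd
  labels:
    app.kubernetes.io/part-of: argocd  # required for ArgoCD to pick it up
data:
  timeout.reconciliation: 180s         # default; lower it for faster pickup
</pre>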
<br />
<h2 style='display: inline' id='repository-organization'>Repository Organization</h2><br />
<br />
<span>I reorganized the config repo for GitOps. Application manifests are grouped by namespace:</span><br />
<br />
<pre>
/home/paul/git/conf/f3s/
├── argocd-apps/
│   ├── cicd/                  # CI/CD tooling (2 apps)
│   │   ├── argo-rollouts.yaml
│   │   └── git-server.yaml
│   ├── infra/                 # Infrastructure (4 apps)
│   │   ├── cert-manager.yaml
│   │   ├── pkgrepo.yaml
│   │   ├── registry.yaml
│   │   └── traefik-config.yaml
│   ├── monitoring/            # Observability stack (6 apps)
│   │   ├── alloy.yaml
│   │   ├── grafana-ingress.yaml
│   │   ├── loki.yaml
│   │   ├── prometheus.yaml
│   │   ├── pushgateway.yaml
│   │   └── tempo.yaml
│   ├── services/              # User-facing applications (18 apps)
│   │   ├── anki-sync-server.yaml
│   │   ├── apache.yaml
│   │   ├── audiobookshelf.yaml
│   │   ├── filebrowser.yaml
│   │   ├── immich.yaml
│   │   ├── ipv6test.yaml
│   │   ├── jellyfin.yaml
│   │   ├── keybr.yaml
│   │   ├── kobo-sync-server.yaml
│   │   ├── miniflux.yaml
│   │   ├── navidrome.yaml
│   │   ├── opodsync.yaml
│   │   ├── pihole.yaml
│   │   ├── radicale.yaml
│   │   ├── syncthing.yaml
│   │   ├── tracing-demo.yaml
│   │   ├── wallabag.yaml
│   │   └── webdav.yaml
│   └── test/                  # Test/example applications
├── miniflux/                  # Per-app directories (unchanged)
│   ├── helm-chart/
│   │   ├── Chart.yaml
│   │   ├── values.yaml
│   │   └── templates/
│   └── Justfile
├── prometheus/
│   ├── manifests/             # Additional manifests for multi-source
│   └── Justfile
└── ...
</pre>
<br />
<span>The per-app directories (miniflux, prometheus, etc.) stayed the same--ArgoCD just points at the existing Helm charts. The main addition is the <span class='inlinecode'>argocd-apps/</span> tree and <span class='inlinecode'>manifests/</span> subdirectories for complex apps.</span><br />
<br />
<h2 style='display: inline' id='migrating-an-app-miniflux-as-example'>Migrating an App: Miniflux as Example</h2><br />
<br />
<span>I migrated all apps one at a time. Same procedure for each--here&#39;s miniflux as an example.</span><br />
<br />
<span>Before ArgoCD, the Justfile looked like this:</span><br />
<br />
<pre>install:
    kubectl apply -f helm-chart/persistent-volumes.yaml
    helm install miniflux ./helm-chart --namespace services

upgrade:
    helm upgrade miniflux ./helm-chart --namespace services

uninstall:
    helm uninstall miniflux --namespace services
</pre>
<br />
<span>Workflow: edit chart, run <span class='inlinecode'>just upgrade</span>, hope you didn&#39;t forget anything.</span><br />
<br />
<span>I created an Application manifest--this tells ArgoCD where the Helm chart lives and how to sync it:</span><br />
<br />
<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: miniflux
  namespace: cicd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: http://git-server.cicd.svc.cluster.local/conf.git
    targetRevision: master
    path: f3s/miniflux/helm-chart
  destination:
    server: https://kubernetes.default.svc
    namespace: services
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
    retry:
      limit: 3
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 1m
</pre>
<br />
<span>Then applied it:</span><br />
<br />
<pre><i><font color="silver"># 1. Apply the Application manifest</font></i>
$ kubectl apply -f argocd-apps/services/miniflux.yaml
application.argoproj.io/miniflux created

<i><font color="silver"># 2. Verify ArgoCD adopted the existing resources</font></i>
$ argocd app get miniflux
Name:               miniflux
Sync Status:        Synced to master (4e3c216)
Health Status:      Healthy

<i><font color="silver"># 3. Test that the app still works</font></i>
$ curl -I https://flux.f3s.foo.zone
HTTP/<font color="#000000">2</font> <font color="#000000">200</font>
</pre>
<br />
<span>About 10 minutes, zero downtime. ArgoCD saw that the running resources already matched the Helm chart in Git and just adopted them.</span><br />
<br />
<span>After that, the Justfile is just utility commands--no more install/upgrade/uninstall:</span><br />
<br />
<pre>status:
    @kubectl get pods -n services -l app=miniflux-server
    @kubectl get pods -n services -l app=miniflux-postgres
    @kubectl get application miniflux -n cicd \
        -o jsonpath=<font color="#808080">'Sync: {.status.sync.status}, Health: {.status.health.status}'</font>

sync:
    @kubectl annotate application miniflux -n cicd \
        argocd.argoproj.io/refresh=normal --overwrite

logs:
    kubectl logs -n services -l app=miniflux-server --tail=<font color="#000000">100</font> -f

restart:
    kubectl rollout restart -n services deployment/miniflux-server

port-forward port=<font color="#808080">"8080"</font>:
    kubectl port-forward -n services svc/miniflux {{port}}:<font color="#000000">8080</font>

psql:
    kubectl <b><u><font color="#000000">exec</font></u></b> -it -n services deployment/miniflux-postgres -- psql -U miniflux
</pre>
<br />
<span>New workflow: edit chart, commit, push. ArgoCD picks it up within a few minutes. Run <span class='inlinecode'>just sync</span> if you&#39;re impatient.</span><br />
<br />
<h3 style='display: inline' id='migration-order'>Migration Order</h3><br />
<br />
<span>I started with the simplest services (miniflux, wallabag, radicale, etc.)--apps with straightforward Helm charts and no complex dependencies. This let me validate the pattern before touching anything critical.</span><br />
<br />
<span>After that: infrastructure apps (registry, cert-manager, pkgrepo, traefik-config), then the monitoring stack (tempo, loki, alloy, and finally prometheus--the most complex one), and last the CI/CD tools (git-server, argo-rollouts).</span><br />
<br />
<h2 style='display: inline' id='complex-migration-prometheus-multi-source'>Complex Migration: Prometheus Multi-Source</h2><br />
<br />
<span>Prometheus was the tricky one--it combines an upstream Helm chart with a bunch of custom manifests (recording rules, dashboards, persistent volumes, a post-sync hook to restart Grafana).</span><br />
<br />
<span>ArgoCD&#39;s multi-source feature made this manageable:</span><br />
<br />
<pre>
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: cicd
spec:
  sources:
    # Source 1: Upstream Helm chart
    - repoURL: https://prometheus-community.github.io/helm-charts
      chart: kube-prometheus-stack
      targetRevision: 55.5.0
      helm:
        releaseName: prometheus
        valuesObject:
          kubeEtcd:
            enabled: true
            endpoints:
              - 192.168.2.120
              - 192.168.2.121
              - 192.168.2.122
          # ... hundreds of lines of config

    # Source 2: Custom manifests from Git
    - repoURL: http://git-server.cicd.svc.cluster.local/conf.git
      targetRevision: master
      path: f3s/prometheus/manifests

  syncPolicy:
    automated:
      prune: false  # Manual pruning--too risky for the monitoring stack
      selfHeal: true
    syncOptions:
      - ServerSideApply=true
</pre>
<br />
<span>The <span class='inlinecode'>prometheus/manifests/</span> directory has 14 files. Each one has a sync wave annotation that controls when it gets deployed:</span><br />
<br />
<pre>
f3s/prometheus/manifests/
├── persistent-volumes.yaml              # Wave 0
├── grafana-restart-rbac.yaml            # Wave 0
├── additional-scrape-configs-secret.yaml # Wave 1
├── grafana-datasources-configmap.yaml   # Wave 1
├── freebsd-recording-rules.yaml         # Wave 3
├── openbsd-recording-rules.yaml         # Wave 3
├── zfs-recording-rules.yaml             # Wave 3
├── argocd-application-alerts.yaml       # Wave 3
├── epimetheus-dashboard.yaml            # Wave 4
├── zfs-dashboards.yaml                  # Wave 4
├── argocd-applications-dashboard.yaml   # Wave 4
├── node-resources-multi-select-dashboard.yaml # Wave 4
├── prometheus-nodeport.yaml             # Wave 4
└── grafana-restart-hook.yaml            # Wave 10 (PostSync)
</pre>
<br />
<h3 style='display: inline' id='sync-waves'>Sync Waves</h3><br />
<br />
<span>By default, ArgoCD applies all resources in a single pass, ordered only by resource kind--with no knowledge of app-specific dependencies. Fine for simple apps, but Prometheus breaks--a PVC can&#39;t bind if the PV doesn&#39;t exist yet, and a PrometheusRule can&#39;t be created if its CRD hasn&#39;t been registered.</span><br />
<br />
<span>Sync waves fix this. You slap an annotation on each resource:</span><br />
<br />
<pre>
annotations:
  argocd.argoproj.io/sync-wave: "3"
</pre>
<br />
<span>ArgoCD deploys all wave 0 resources first, waits until they&#39;re healthy, then moves to wave 1, waits again, and so on. Resources without the annotation default to wave 0.</span><br />
<br />
<span>For the Prometheus stack, the waves look like this:</span><br />
<br />
<ul>
<li>Wave 0: PersistentVolumes, RBAC--infrastructure that everything else depends on</li>
<li>Wave 1: Secrets, ConfigMaps--config that Prometheus and Grafana need at startup</li>
<li>Wave 3: PrometheusRule CRDs--recording rules for FreeBSD, OpenBSD, ZFS, ArgoCD (the operator from wave 0 needs to be running first)</li>
<li>Wave 4: Dashboard ConfigMaps and nodeport config</li>
<li>Wave 10: PostSync hook--a Job that runs after all waves complete</li>
</ul><br />
<span>ArgoCD also supports lifecycle hooks (<span class='inlinecode'>PreSync</span>, <span class='inlinecode'>Sync</span>, <span class='inlinecode'>PostSync</span>) that run Jobs at specific points. The Grafana restart hook runs after every sync so Grafana picks up updated datasources and dashboards:</span><br />
<br />
<pre>
apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-restart-hook
  namespace: monitoring
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
    argocd.argoproj.io/sync-wave: "10"
spec:
  template:
    spec:
      serviceAccountName: grafana-restart-sa
      restartPolicy: OnFailure
      containers:
        - name: kubectl
          image: bitnami/kubectl:latest
          command:
            - /bin/sh
            - -c
            - |
              kubectl wait --for=condition=available --timeout=300s \
                deployment/prometheus-grafana -n monitoring || true
              kubectl delete pod -n monitoring \
                -l app.kubernetes.io/name=grafana --ignore-not-found=true
  backoffLimit: 2
</pre>
<br />
<h2 style='display: inline' id='the-result'>The Result</h2><br />
<br />
<span>All 30 applications across 5 namespaces, synced and healthy:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ argocd app list
NAME                      CLUSTER                         NAMESPACE    PROJECT  STATUS  HEALTH   SYNCPOLICY
alloy                     https://kubernetes.default.svc  monitoring   default  Synced  Healthy  Auto-Prune
anki-sync-server          https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
apache                    https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
argo-rollouts             https://kubernetes.default.svc  cicd         default  Synced  Healthy  Auto-Prune
audiobookshelf            https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
cert-manager              https://kubernetes.default.svc  infra        default  Synced  Healthy  Auto-Prune
filebrowser               https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
git-server                https://kubernetes.default.svc  cicd         default  Synced  Healthy  Auto-Prune
grafana-ingress           https://kubernetes.default.svc  monitoring   default  Synced  Healthy  Auto-Prune
immich                    https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
ipv6test                  https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
jellyfin                  https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
keybr                     https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
kobo-sync-server          https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
loki                      https://kubernetes.default.svc  monitoring   default  Synced  Healthy  Auto-Prune
miniflux                  https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
navidrome                 https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
opodsync                  https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
pihole                    https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
pkgrepo                   https://kubernetes.default.svc  infra        default  Synced  Healthy  Auto-Prune
prometheus                https://kubernetes.default.svc  monitoring   default  Synced  Healthy  Auto
pushgateway               https://kubernetes.default.svc  monitoring   default  Synced  Healthy  Auto-Prune
radicale                  https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
registry                  https://kubernetes.default.svc  infra        default  Synced  Healthy  Auto-Prune
syncthing                 https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
tempo                     https://kubernetes.default.svc  monitoring   default  Synced  Healthy  Auto-Prune
traefik-config            https://kubernetes.default.svc  infra        default  Synced  Healthy  Auto-Prune
tracing-demo              https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
wallabag                  https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
webdav                    https://kubernetes.default.svc  services     default  Synced  Healthy  Auto-Prune
</pre>
<br />
<a href='./f3s-kubernetes-with-freebsd-part-9/argocd-apps-list.png'><img alt='ArgoCD managing all 30 applications in the f3s cluster' title='ArgoCD managing all 30 applications in the f3s cluster' src='./f3s-kubernetes-with-freebsd-part-9/argocd-apps-list.png' /></a><br />
<br />
<h2 style='display: inline' id='what-changed-day-to-day'>What Changed Day-to-Day</h2><br />
<br />
<span>The practical difference is pretty big:</span><br />
<br />
<ul>
<li>Single source of truth--clone the repo, look at <span class='inlinecode'>argocd-apps/</span>, and you know exactly what&#39;s running. No more <span class='inlinecode'>helm list</span> or guessing.</li>
<li>Push and forget--edit a Helm value, commit, push. ArgoCD picks it up within a few minutes. No SSH, no <span class='inlinecode'>just upgrade</span>.</li>
<li>Self-healing--I&#39;ve tweaked things manually for debugging, forgotten about it, and ArgoCD quietly reverted it. That&#39;s saved me from some confusing "why is this behaving differently?" moments.</li>
<li>Rollback = git revert--<span class='inlinecode'>git revert HEAD &amp;&amp; git push</span> and ArgoCD syncs back to the previous state.</li>
<li>Disaster recovery--bootstrap k3s, install ArgoCD, apply the Application manifests, wait. The cluster rebuilds itself. I haven&#39;t had to do this for real yet, but I&#39;ve tested it and it works.</li>
<li>Drift detection--the ArgoCD UI shows immediately if something is out of sync. Much better than running <span class='inlinecode'>kubectl</span> commands and comparing output manually.</li>
</ul><br />
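<span>The rollback flow spelled out, using the <span class='inlinecode'>f3s</span> remote from the git-server setup:</span><br />
<br />
<pre>$ git log --oneline -2    # identify the bad commit
$ git revert HEAD         # create an inverse commit, keeping history intact
$ git push f3s master     # ArgoCD syncs the cluster back within minutes
</pre>
<br />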
<h2 style='display: inline' id='challenges-along-the-way'>Challenges Along the Way</h2><br />
<br />
<h3 style='display: inline' id='helm-release-adoption'>Helm Release Adoption</h3><br />
<br />
<span>When ArgoCD takes over resources that an imperative <span class='inlinecode'>helm install</span> already deployed, it can flag them as out-of-sync or even try to recreate them. Fix: make sure the Application manifest renders the chart with exactly the same release name and values as the existing install. ArgoCD then sees that the rendered manifests match the live resources and adopts them in place.</span><br />
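<br />
<span>A useful sanity check before enabling <span class='inlinecode'>selfHeal</span> on an adopted app: <span class='inlinecode'>argocd app diff</span> shows what ArgoCD would change without applying anything:</span><br />
<br />
<pre>$ argocd app diff miniflux
# exit code 0 and no output: live state already matches Git
# exit code 1: there is drift to review first
</pre>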
<br />
<h3 style='display: inline' id='persistentvolumes'>PersistentVolumes</h3><br />
<br />
<span>PVs are cluster-scoped, and for many of my apps they had been created with <span class='inlinecode'>kubectl apply</span> outside of Helm. For simple apps I moved the PV definitions into the Helm chart templates. For complex apps like Prometheus, I used the multi-source pattern with PVs in a separate <span class='inlinecode'>manifests/</span> directory at sync wave 0.</span><br />
<br />
<h3 style='display: inline' id='secrets'>Secrets</h3><br />
<br />
<span>Secrets shouldn&#39;t live in Git as plaintext. For now, I create them manually with <span class='inlinecode'>kubectl create secret</span> and reference them from Helm charts. ArgoCD doesn&#39;t manage the secrets themselves. Works, but isn&#39;t fully declarative--External Secrets Operator is on the list.</span><br />
<br />
<h3 style='display: inline' id='grafana-not-reloading'>Grafana Not Reloading</h3><br />
<br />
<span>After updating datasource ConfigMaps, Grafana wouldn&#39;t notice until the pod was restarted. The PostSync hook (the Grafana restart Job in sync wave 10) handles this automatically now.</span><br />
<br />
<h3 style='display: inline' id='prometheus-multi-source-ordering'>Prometheus Multi-Source Ordering</h3><br />
<br />
<span>Without sync waves, Prometheus resources were deployed in arbitrary order and things broke: PVs must exist before PVCs, secrets before the operator, and recording rules only after the CRDs. Adding sync wave annotations to everything in <span class='inlinecode'>prometheus/manifests/</span> fixed it.</span><br />
<br />
<h2 style='display: inline' id='wrapping-up'>Wrapping Up</h2><br />
<br />
<span>The migration took a couple of days, doing one or two apps at a time. The result: 30 applications across 5 namespaces, all managed declaratively through Git. Push a change, it deploys. Break something, <span class='inlinecode'>git revert</span>. Cluster dies, rebuild from the repo.</span><br />
<br />
<span>All the config lives here:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s'>codeberg.org/snonux/conf/f3s</a><br />
<br />
<span>ArgoCD Application manifests organized by namespace:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/argocd-apps'>codeberg.org/snonux/conf/f3s/argocd-apps</a><br />
<br />
<span>I can&#39;t imagine going back to running Helm commands manually.</span><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD (You are currently reading this)</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Distributed Systems Simulator - Part 3: Advanced Examples and Protocol API</title>
        <link href="https://foo.zone/gemfeed/2026-04-02-distributed-systems-simulator-part-3.html" />
        <id>https://foo.zone/gemfeed/2026-04-02-distributed-systems-simulator-part-3.html</id>
        <updated>2026-04-02T00:00:00+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the third and final blog post of the Distributed Systems Simulator series. This part covers advanced simulation examples, the Raft consensus protocol, and the extensible Protocol API.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='distributed-systems-simulator---part-3-advanced-examples-and-protocol-api'>Distributed Systems Simulator - Part 3: Advanced Examples and Protocol API</h1><br />
<br />
<span class='quote'>Published at 2026-04-02T00:00:00+03:00</span><br />
<br />
<span>This is the third and final blog post of the Distributed Systems Simulator series. This part covers advanced simulation examples, the Raft consensus protocol, and the extensible Protocol API.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/ds-sim'>ds-sim on Codeberg (modernized, English-translated version)</a><br />
<br />
<span>These are all the posts of this series:</span><br />
<br />
<a class='textlink' href='./2026-03-31-distributed-systems-simulator-part-1.html'>2026-03-31 Distributed Systems Simulator - Part 1: Introduction and GUI</a><br />
<a class='textlink' href='./2026-04-01-distributed-systems-simulator-part-2.html'>2026-04-01 Distributed Systems Simulator - Part 2: Built-in Protocols</a><br />
<a class='textlink' href='./2026-04-02-distributed-systems-simulator-part-3.html'>2026-04-02 Distributed Systems Simulator - Part 3: Advanced Examples and Protocol API (You are currently reading this)</a><br />
<br />
<a href='./distributed-systems-simulator/ds-sim-screenshot.png'><img alt='Screenshot: The Distributed Systems Simulator running a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' title='Screenshot: The Distributed Systems Simulator running a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' src='./distributed-systems-simulator/ds-sim-screenshot.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#distributed-systems-simulator---part-3-advanced-examples-and-protocol-api'>Distributed Systems Simulator - Part 3: Advanced Examples and Protocol API</a></li>
<li>⇢ <a href='#additional-examples'>Additional Examples</a></li>
<li>⇢ ⇢ <a href='#lamport-and-vector-timestamps'>Lamport and Vector Timestamps</a></li>
<li>⇢ ⇢ <a href='#simulating-slow-connections'>Simulating Slow Connections</a></li>
<li>⇢ ⇢ <a href='#raft-consensus-failover'>Raft Consensus Failover</a></li>
<li>⇢ <a href='#protocol-api'>Protocol API</a></li>
<li>⇢ ⇢ <a href='#class-hierarchy'>Class Hierarchy</a></li>
<li>⇢ ⇢ <a href='#implementing-a-custom-protocol'>Implementing a Custom Protocol</a></li>
<li>⇢ ⇢ <a href='#available-api-methods'>Available API Methods</a></li>
<li>⇢ ⇢ <a href='#example-reliable-multicast-implementation'>Example: Reliable Multicast Implementation</a></li>
<li>⇢ <a href='#project-statistics'>Project Statistics</a></li>
</ul><br />
<h2 style='display: inline' id='additional-examples'>Additional Examples</h2><br />
<br />
<h3 style='display: inline' id='lamport-and-vector-timestamps'>Lamport and Vector Timestamps</h3><br />
<br />
<a href='./distributed-systems-simulator/lamport-timestamps.png'><img alt='Visualization: Lamport Timestamps displayed on the Berkeley Algorithm simulation. Each event on a process bar shows its Lamport timestamp as a number in parentheses. The timestamps increase monotonically and are updated according to the Lamport clock rules when messages are sent and received between P1, P2, and P3.' title='Visualization: Lamport Timestamps displayed on the Berkeley Algorithm simulation. Each event on a process bar shows its Lamport timestamp as a number in parentheses. The timestamps increase monotonically and are updated according to the Lamport clock rules when messages are sent and received between P1, P2, and P3.' src='./distributed-systems-simulator/lamport-timestamps.png' /></a><br />
<br />
<span class='quote'>"For many purposes, it is sufficient that all machines agree on the same time. It is not necessary that this time also agrees with real time, like every hour announced on the radio... For a certain class of algorithms, only the internal consistency of clocks is important." - Andrew Tanenbaum</span><br />
<br />
<span>Clocks that provide such a time are known as logical clocks. The simulator implements two of them: Lamport timestamps and vector timestamps.</span><br />
<br />
<span>After activating the Lamport time switch in expert mode, the current Lamport timestamp appears at every event of a process. Each process has its own Lamport timestamp that is incremented when a message is sent or received. Each message carries the current Lamport time t_l(i) of the sending process i. When another process j receives this message, its Lamport timestamp t_l(j) is recalculated as:</span><br />
<br />
<pre>
t_l(j) := 1 + max(t_l(j), t_l(i))
</pre>
<br />
<span>The larger Lamport time of the sender and receiver process is used and then incremented by 1. After the Berkeley simulation shown here, P1 has Lamport timestamp 16, P2 has 14, and P3 has 15.</span><br />
<br />
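<span>These rules can be sketched in a few lines of standalone Java (an illustration only; <span class='inlinecode'>LamportDemo</span> is not a class from the simulator):</span><br />
<br />

```java
// Minimal sketch of the Lamport clock rules described above.
public class LamportDemo {
    // Local event or message send: increment the process's own clock.
    static long tick(long local) {
        return local + 1;
    }

    // Message receive: t_l(j) := 1 + max(t_l(j), t_l(i))
    static long recv(long local, long sender) {
        return 1 + Math.max(local, sender);
    }

    public static void main(String[] args) {
        long p1 = 0, p2 = 0;
        p1 = tick(p1);       // P1 sends a message at Lamport time 1
        p2 = recv(p2, p1);   // P2 receives it: 1 + max(0, 1) = 2
        System.out.println("P1=" + p1 + ", P2=" + p2); // P1=1, P2=2
    }
}
```
<br />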
<a href='./distributed-systems-simulator/vector-timestamps.png'><img alt='Visualization: Vector Timestamps displayed on the same Berkeley Algorithm simulation. Each event shows its vector timestamp as a tuple (v1,v2,v3) representing the known state of all three processes. The tuples grow as processes communicate and merge their knowledge of each other&#39;s progress.' title='Visualization: Vector Timestamps displayed on the same Berkeley Algorithm simulation. Each event shows its vector timestamp as a tuple (v1,v2,v3) representing the known state of all three processes. The tuples grow as processes communicate and merge their knowledge of each other&#39;s progress.' src='./distributed-systems-simulator/vector-timestamps.png' /></a><br />
<br />
<span>With the vector time switch active, all vector timestamps are displayed. As with Lamport timestamps, each message carries the current vector timestamp of the sending process. With n participating processes, the vector timestamp v has size n, and each participating process i owns one index, accessible via v(i). When v is the vector timestamp of the receiving process j and w is the vector timestamp of the sending process, the new local vector timestamp of process j is calculated as follows:</span><br />
<br />
<pre>
for (i := 0; i &lt; n; i++) {
    if (i = j) {
        v(i)++;
    } else if (v(i) &lt; w(i)) {
        v(i) := w(i);
    }
}
</pre>
<br />
<span>By default, the vector timestamp is only updated when a message is sent or received. In both cases, the sender and receiver each increment their own index in the vector timestamp by 1. Upon receiving a message, the local vector timestamp is additionally compared with the sender&#39;s, and for every other index the larger of the two values is kept.</span><br />
<br />
<span>After the simulation, P1 has vector timestamp (8,10,6), P2 has (6,10,6), and P3 has (6,10,8).</span><br />
<br />
<span>The simulation settings include boolean variables "Lamport times affect all events" and "Vector times affect all events" (both default to false). When set to true, all events (not just message send/receive) will update the timestamps.</span><br />
<br />
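<span>The receive rule from the pseudocode above can be written out as a small standalone sketch (illustrative only; <span class='inlinecode'>VectorClockDemo</span> is not one of the simulator&#39;s classes):</span><br />
<br />

```java
import java.util.Arrays;

// Standalone sketch of the vector timestamp receive rule shown above.
public class VectorClockDemo {
    // Receiver j increments its own index and takes the component-wise
    // maximum of all other indices from the message's timestamp w.
    static long[] onReceive(long[] v, long[] w, int j) {
        long[] out = v.clone();
        for (int i = 0; i < out.length; i++) {
            if (i == j) {
                out[i]++;
            } else if (out[i] < w[i]) {
                out[i] = w[i];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        long[] p2 = {0, 3, 1};   // receiving process P2 owns index 1
        long[] msg = {2, 1, 0};  // sender's vector timestamp in the message
        System.out.println(Arrays.toString(onReceive(p2, msg, 1))); // [2, 4, 1]
    }
}
```
<br />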
<h3 style='display: inline' id='simulating-slow-connections'>Simulating Slow Connections</h3><br />
<br />
<a href='./distributed-systems-simulator/slow-connection.png'><img alt='Visualization: Slow connection simulation comparing Internal Synchronization (P1) and Cristian&#39;s Method (P3) with P2 as server. P3 has high transmission times (2000-8000ms) simulating a slow network connection. P1 synchronizes to 21446ms (error: -1446ms) while P3 only reaches 16557ms (error: -3443ms), showing how slow connections degrade synchronization quality.' title='Visualization: Slow connection simulation comparing Internal Synchronization (P1) and Cristian&#39;s Method (P3) with P2 as server. P3 has high transmission times (2000-8000ms) simulating a slow network connection. P1 synchronizes to 21446ms (error: -1446ms) while P3 only reaches 16557ms (error: -3443ms), showing how slow connections degrade synchronization quality.' src='./distributed-systems-simulator/slow-connection.png' /></a><br />
<br />
<span>The simulator can also simulate slow connections to a specific process. This example revisits the comparison of Internal Synchronization (P1) and Cristian&#39;s Method (P3), with P2 serving both. In this scenario, P3 has a poor network connection, so messages to and from P3 always require a longer transmission time.</span><br />
<br />
<span>P3&#39;s minimum transmission time is set to 2000ms and maximum to 8000ms, while P1 and P2 keep the defaults (500ms/2000ms). The simulation duration is 20000ms. With the "Average transmission times" setting enabled, the effective transmission time for messages involving P3 is:</span><br />
<br />
<pre>
1/2 * (rand(500,2000) + rand(2000,8000)) = 1/2 * rand(2500,10000) = rand(1250,5000)ms
</pre>
<br />
<span>Because P3 starts a new request before receiving the answer to its previous one, and because it always associates server responses with its most recently sent request, its RTT calculations become incorrect on each round, and its local time is poorly synchronized. P1 synchronizes to 21446ms (error: -1446ms) while P3 only reaches 16557ms (error: -3443ms).</span><br />
<br />
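<span>The endpoints of the averaged range can be checked empirically with a short standalone sketch (<span class='inlinecode'>SlowLinkDemo</span> is a made-up name, not simulator code). Note that the bounds of the formula hold, although the mean of two uniform draws is not itself uniformly distributed:</span><br />
<br />

```java
import java.util.Random;

// Empirical check of the averaged transmission-time range derived above.
public class SlowLinkDemo {
    // One averaged transmission time: the mean of a draw from the default
    // range [500, 2000] and a draw from P3's slow range [2000, 8000].
    static double sample(Random rnd) {
        double fast = 500 + rnd.nextDouble() * 1500;   // rand(500, 2000)
        double slow = 2000 + rnd.nextDouble() * 6000;  // rand(2000, 8000)
        return 0.5 * (fast + slow);                    // in [1250, 5000]
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double min = Double.MAX_VALUE, max = Double.MIN_VALUE;
        for (int i = 0; i < 100_000; i++) {
            double t = sample(rnd);
            min = Math.min(min, t);
            max = Math.max(max, t);
        }
        // Observed bounds approach the derived range rand(1250, 5000)ms.
        System.out.printf("observed range: %.0f..%.0f ms%n", min, max);
    }
}
```
<br />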
<h3 style='display: inline' id='raft-consensus-failover'>Raft Consensus Failover</h3><br />
<br />
<a href='./distributed-systems-simulator/raft-consensus-failover.png'><img alt='Screenshot: A 60-second Raft simulation with three processes. P1 starts as the initial leader, crashes at 3500ms, later recovers, P2 wins the reelection and remains leader, and P3 crashes later. The blue and red message lines show the continuing heartbeat and acknowledgment traffic during and after failover.' title='Screenshot: A 60-second Raft simulation with three processes. P1 starts as the initial leader, crashes at 3500ms, later recovers, P2 wins the reelection and remains leader, and P3 crashes later. The blue and red message lines show the continuing heartbeat and acknowledgment traffic during and after failover.' src='./distributed-systems-simulator/raft-consensus-failover.png' /></a><br />
<br />
<span>While modernizing ds-sim, I also added a simplified Raft Consensus example. The simulation is intentionally small: three processes, one initial leader, one crash, a clean reelection, a recovery of the old leader, and then another crash later in the run. This makes it possible to see the most important Raft transitions without being overwhelmed by cluster size.</span><br />
<br />
<span>The event log tells a very readable story. At <span class='inlinecode'>0ms</span>, <span class='inlinecode'>P1</span> starts as the initial leader in <span class='inlinecode'>term 0</span>. It immediately sends a heartbeat and an <span class='inlinecode'>appendEntry</span> message carrying the log entry <span class='inlinecode'>cmd1</span>. <span class='inlinecode'>P2</span> joins at <span class='inlinecode'>100ms</span>, <span class='inlinecode'>P3</span> at <span class='inlinecode'>1700ms</span>, and both acknowledge the leader&#39;s traffic. At that point the cluster is healthy: one leader, two followers, successful heartbeats, and successful log replication.</span><br />
<br />
<span>At <span class='inlinecode'>3500ms</span>, <span class='inlinecode'>P1</span> crashes. The followers still process the last in-flight messages, but once the election timeout expires, <span class='inlinecode'>P2</span> becomes a candidate and sends a <span class='inlinecode'>voteRequest</span> for <span class='inlinecode'>term 1</span>. <span class='inlinecode'>P3</span> grants that vote, and at <span class='inlinecode'>9395ms</span> the log records the decisive line:</span><br />
<br />
<pre>
009395ms: PID: 2; ... Leader elected by majority vote: process 2 (term 1)
</pre>
<br />
<span>That transition is followed immediately by new heartbeats and a new <span class='inlinecode'>appendEntry</span>, which is exactly what you want to see in a Raft simulation: leadership is not just declared, it is exercised.</span><br />
<br />
<span>At <span class='inlinecode'>12002ms</span>, the old leader <span class='inlinecode'>P1</span> recovers. Importantly, it does not try to reclaim control. Instead, it receives heartbeats from <span class='inlinecode'>P2</span> and answers with <span class='inlinecode'>heartbeatAck</span> messages, rejoining the cluster as a follower. That is one of the most useful teaching moments in the log, because it makes the term-based leadership model concrete: the recovered node does not become leader again just because it used to be one.</span><br />
<br />
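<span>The rule behind this safe reintegration can be sketched in a few lines of Java (hypothetical class names, not ds-sim code): any node that sees a message with a higher term adopts that term and becomes a follower.</span><br />
<br />

```java
// Hypothetical sketch of Raft's term rule for a recovered stale leader.
public class RaftTermDemo {
    enum Role { FOLLOWER, CANDIDATE, LEADER }

    static class Node {
        Role role;
        long term;

        Node(Role role, long term) {
            this.role = role;
            this.term = term;
        }

        // Heartbeat handler: a stale leader steps down when it learns of
        // a newer term, as P1 does after recovering at 12002ms.
        void onHeartbeat(long senderTerm) {
            if (senderTerm > term) {
                term = senderTerm;
                role = Role.FOLLOWER;
            }
        }
    }

    public static void main(String[] args) {
        Node p1 = new Node(Role.LEADER, 0); // recovered old leader, term 0
        p1.onHeartbeat(1);                  // heartbeat from P2, term 1
        System.out.println(p1.role + ", term " + p1.term); // FOLLOWER, term 1
    }
}
```
<br />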
<span>At <span class='inlinecode'>20000ms</span>, <span class='inlinecode'>P3</span> crashes. The cluster continues running with <span class='inlinecode'>P2</span> as leader and <span class='inlinecode'>P1</span> as follower for the rest of the 60-second simulation. The log remains dominated by periodic heartbeats from <span class='inlinecode'>P2</span> and acknowledgments from <span class='inlinecode'>P1</span>, showing that the system stays stable even after a second failure.</span><br />
<br />
<span>This single scenario demonstrates several core Raft properties in one replay:</span><br />
<br />
<ul>
<li>Stable startup leadership</li>
<li>Heartbeats and follower acknowledgments</li>
<li>Log replication</li>
<li>Leader failure detection</li>
<li>Majority-based reelection</li>
<li>Safe reintegration of a recovered former leader</li>
<li>Continued service after a later follower crash</li>
</ul><br />
<span>It is also a good example of why a simulator is useful for distributed systems. In a real production system, reconstructing this sort of sequence would require stitching together logs from multiple nodes. Here, the message flow, the crashes, the recoveries, and the Lamport/vector timestamps are all visible in one place.</span><br />
<br />
<h2 style='display: inline' id='protocol-api'>Protocol API</h2><br />
<br />
<span>The simulator was designed from the ground up to be extensible. Users can implement their own protocols in Java by extending the <span class='inlinecode'>VSAbstractProtocol</span> base class. Each protocol has its own class in the <span class='inlinecode'>protocols.implementations</span> package.</span><br />
<br />
<h3 style='display: inline' id='class-hierarchy'>Class Hierarchy</h3><br />
<br />
<pre>
VSAbstractEvent
  +-- VSAbstractProtocol (base class for all protocols)
        +-- VSDummyProtocol
        +-- VSPingPongProtocol
        +-- VSBroadcastProtocol
        +-- VSInternalTimeSyncProtocol
        +-- VSExternalTimeSyncProtocol
        +-- VSBerkeleyTimeProtocol
        +-- VSOnePhaseCommitProtocol
        +-- VSTwoPhaseCommitProtocol
        +-- VSBasicMulticastProtocol
        +-- VSReliableMulticastProtocol
</pre>
<br />
<h3 style='display: inline' id='implementing-a-custom-protocol'>Implementing a Custom Protocol</h3><br />
<br />
<span>Each protocol class must implement the following methods:</span><br />
<br />
<ul>
<li>A public constructor: Must specify whether the client or the server initiates requests, using <span class='inlinecode'>VSAbstractProtocol.HAS_ON_CLIENT_START</span> or <span class='inlinecode'>VSAbstractProtocol.HAS_ON_SERVER_START</span>.</li>
<li><span class='inlinecode'>onClientInit()</span> / <span class='inlinecode'>onServerInit()</span>: Called once before the protocol is first used. Used to initialize protocol variables and attributes via the VSPrefs methods (e.g. <span class='inlinecode'>initVector</span>, <span class='inlinecode'>initLong</span>). Variables initialized this way appear in the process editor and can be configured by the user.</li>
<li><span class='inlinecode'>onClientReset()</span> / <span class='inlinecode'>onServerReset()</span>: Called each time the simulation is reset.</li>
<li><span class='inlinecode'>onClientStart()</span> / <span class='inlinecode'>onServerStart()</span>: Called when the client/server initiates a request. Typically creates and sends a <span class='inlinecode'>VSMessage</span> object.</li>
<li><span class='inlinecode'>onClientRecv(VSMessage)</span> / <span class='inlinecode'>onServerRecv(VSMessage)</span>: Called when a message arrives.</li>
<li><span class='inlinecode'>onClientSchedule()</span> / <span class='inlinecode'>onServerSchedule()</span>: Called when a scheduled alarm fires.</li>
<li><span class='inlinecode'>toString()</span>: Optional. Customizes log output for this protocol.</li>
</ul><br />
<h3 style='display: inline' id='available-api-methods'>Available API Methods</h3><br />
<br />
<span>Methods inherited from <span class='inlinecode'>VSAbstractProtocol</span>:</span><br />
<br />
<ul>
<li><span class='inlinecode'>sendMessage(VSMessage message)</span>: Sends a protocol message (automatically updates Lamport and Vector timestamps)</li>
<li><span class='inlinecode'>hasOnServerStart()</span>: Whether the server or client initiates requests</li>
<li><span class='inlinecode'>isServer()</span> / <span class='inlinecode'>isClient()</span>: Whether the current process has the protocol activated as server/client</li>
<li><span class='inlinecode'>scheduleAt(long time)</span>: Creates an alarm that fires at the given local process time, triggering <span class='inlinecode'>onClientSchedule()</span> or <span class='inlinecode'>onServerSchedule()</span></li>
<li><span class='inlinecode'>removeSchedules()</span>: Cancels all pending alarms in the current context</li>
<li><span class='inlinecode'>getNumProcesses()</span>: Returns the total number of processes in the simulation</li>
</ul><br />
<span>Process methods available via the inherited <span class='inlinecode'>process</span> attribute:</span><br />
<br />
<ul>
<li><span class='inlinecode'>getTime()</span> / <span class='inlinecode'>setTime(long)</span>: Get/set the local process time</li>
<li><span class='inlinecode'>getGlobalTime()</span>: Get the current global simulation time</li>
<li><span class='inlinecode'>getClockVariance()</span> / <span class='inlinecode'>setClockVariance(float)</span>: Get/set the clock drift</li>
<li><span class='inlinecode'>getLamportTime()</span> / <span class='inlinecode'>setLamportTime(long)</span>: Get/set the Lamport timestamp</li>
<li><span class='inlinecode'>getVectorTime()</span> / <span class='inlinecode'>updateVectorTime(VSVectorTime)</span>: Get/update the vector timestamp</li>
<li><span class='inlinecode'>getProcessID()</span>: Get the process PID</li>
<li><span class='inlinecode'>isCrashed()</span> / <span class='inlinecode'>isCrashed(boolean)</span>: Check or set crash state</li>
<li><span class='inlinecode'>getRandomPercentage()</span>: Get a random value between 0 and 100</li>
</ul><br />
<span>Message methods (<span class='inlinecode'>VSMessage</span>):</span><br />
<br />
<ul>
<li><span class='inlinecode'>new VSMessage()</span>: Create a new message</li>
<li><span class='inlinecode'>getMessageID()</span>: Get the message ID</li>
<li><span class='inlinecode'>setBoolean(key, value)</span> / <span class='inlinecode'>getBoolean(key)</span>: Set/get boolean data</li>
<li><span class='inlinecode'>setInteger(key, value)</span> / <span class='inlinecode'>getInteger(key)</span>: Set/get integer data</li>
<li><span class='inlinecode'>setLong(key, value)</span> / <span class='inlinecode'>getLong(key)</span>: Set/get long data</li>
<li><span class='inlinecode'>setString(key, value)</span> / <span class='inlinecode'>getString(key)</span>: Set/get string data</li>
<li><span class='inlinecode'>getSendingProcess()</span>: Get a reference to the sending process</li>
<li><span class='inlinecode'>isServerMessage()</span>: Whether it&#39;s a server or client message</li>
</ul><br />
<h3 style='display: inline' id='example-reliable-multicast-implementation'>Example: Reliable Multicast Implementation</h3><br />
<br />
<span>Here is a condensed example showing key parts of the Reliable Multicast Protocol implementation:</span><br />
<br />
<pre><b><u><font color="#000000">public</font></u></b> <b><u><font color="#000000">class</font></u></b> VSReliableMulticastProtocol <b><u><font color="#000000">extends</font></u></b> VSAbstractProtocol {
    <b><u><font color="#000000">public</font></u></b> VSReliableMulticastProtocol() {
        <i><font color="silver">// The client initiates requests</font></i>
        <b><u><font color="#000000">super</font></u></b>(VSAbstractProtocol.HAS_ON_CLIENT_START);
        <b><u><font color="#000000">super</font></u></b>.setClassname(<b><u><font color="#000000">super</font></u></b>.getClass().toString());
    }

    <b><u><font color="#000000">private</font></u></b> ArrayList&lt;Integer&gt; pids;

    <i><font color="silver">// Initialize protocol variables (editable in the process editor)</font></i>
    <b><u><font color="#000000">public</font></u></b> <b><font color="#000000">void</font></b> onClientInit() {
        Vector&lt;Integer&gt; vec = <b><u><font color="#000000">new</font></u></b> Vector&lt;Integer&gt;();
        vec.add(<font color="#000000">1</font>); vec.add(<font color="#000000">3</font>);
        <b><u><font color="#000000">super</font></u></b>.initVector(<font color="#808080">"pids"</font>, vec, <font color="#808080">"PIDs of participating processes"</font>);
        <b><u><font color="#000000">super</font></u></b>.initLong(<font color="#808080">"timeout"</font>, <font color="#000000">2500</font>, <font color="#808080">"Time until resend"</font>, <font color="#808080">"ms"</font>);
    }

    <i><font color="silver">// Send multicast to all servers that haven't ACKed yet</font></i>
    <b><u><font color="#000000">public</font></u></b> <b><font color="#000000">void</font></b> onClientStart() {
        <b><u><font color="#000000">if</font></u></b> (pids.size() != <font color="#000000">0</font>) {
            <b><font color="#000000">long</font></b> timeout = <b><u><font color="#000000">super</font></u></b>.getLong(<font color="#808080">"timeout"</font>) + process.getTime();
            <b><u><font color="#000000">super</font></u></b>.scheduleAt(timeout);
            VSMessage message = <b><u><font color="#000000">new</font></u></b> VSMessage();
            message.setBoolean(<font color="#808080">"isMulticast"</font>, <b><u><font color="#000000">true</font></u></b>);
            <b><u><font color="#000000">super</font></u></b>.sendMessage(message);
        }
    }

    <i><font color="silver">// Handle ACK from a server</font></i>
    <b><u><font color="#000000">public</font></u></b> <b><font color="#000000">void</font></b> onClientRecv(VSMessage recvMessage) {
        <b><u><font color="#000000">if</font></u></b> (pids.size() != <font color="#000000">0</font> &amp;&amp; recvMessage.getBoolean(<font color="#808080">"isAck"</font>)) {
            Integer pid = recvMessage.getIntegerObj(<font color="#808080">"pid"</font>);
            <b><u><font color="#000000">if</font></u></b> (pids.contains(pid))
                pids.remove(pid);
            <b><u><font color="#000000">super</font></u></b>.log(<font color="#808080">"ACK from Process "</font> + pid + <font color="#808080">" received!"</font>);
            <b><u><font color="#000000">if</font></u></b> (pids.size() == <font color="#000000">0</font>) {
                <b><u><font color="#000000">super</font></u></b>.log(<font color="#808080">"ACKs from all processes received!"</font>);
                <b><u><font color="#000000">super</font></u></b>.removeSchedules();
            }
        }
    }

    <i><font color="silver">// Retry on timeout</font></i>
    <b><u><font color="#000000">public</font></u></b> <b><font color="#000000">void</font></b> onClientSchedule() { onClientStart(); }
}
</pre>
<br />
<h2 style='display: inline' id='project-statistics'>Project Statistics</h2><br />
<br />
<span>The original VS-Sim project (August 2008) was written in Java 6 and consisted of:</span><br />
<br />
<ul>
<li>61 source files across 12 Java packages</li>
<li>Approximately 15,710 lines of code</li>
<li>2.2 MB of generated Javadoc documentation</li>
<li>142 KB compiled JAR file</li>
<li>10 built-in protocols</li>
<li>163 configurable settings</li>
</ul><br />
<span>The modernized successor ds-sim (version 1.1.0) has been updated to Java 21 and translated to English:</span><br />
<br />
<ul>
<li>146 source files (117 main + 29 test) across 19 Java packages</li>
<li>Approximately 27,900 lines of code (22,400 main + 5,500 test)</li>
<li>12 built-in protocols</li>
<li>208 unit tests</li>
<li>269 configurable settings</li>
</ul><br />
<a class='textlink' href='https://codeberg.org/snonux/ds-sim'>ds-sim source code on Codeberg</a><br />
<a class='textlink' href='https://codeberg.org/snonux/vs-sim'>vs-sim source code on Codeberg (original German version, 2008)</a><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2026-03-01-loadbars-0.13.0-released.html'>2026-03-01 Loadbars 0.13.0 released</a><br />
<a class='textlink' href='./2022-12-24-ultrarelearning-java-my-takeaways.html'>2022-12-24 (Re)learning Java - My takeaways</a><br />
<a class='textlink' href='./2022-03-06-the-release-of-dtail-4.0.0.html'>2022-03-06 The release of DTail 4.0.0</a><br />
<a class='textlink' href='./2016-11-20-object-oriented-programming-with-ansi-c.html'>2016-11-20 Object oriented programming with ANSI C</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Distributed Systems Simulator - Part 2: Built-in Protocols</title>
        <link href="https://foo.zone/gemfeed/2026-04-01-distributed-systems-simulator-part-2.html" />
        <id>https://foo.zone/gemfeed/2026-04-01-distributed-systems-simulator-part-2.html</id>
        <updated>2026-04-01T00:00:00+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the second blog post of the Distributed Systems Simulator series. This part covers all 10 built-in protocols with examples.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='distributed-systems-simulator---part-2-built-in-protocols'>Distributed Systems Simulator - Part 2: Built-in Protocols</h1><br />
<br />
<span class='quote'>Published at 2026-04-01T00:00:00+03:00</span><br />
<br />
<span>This is the second blog post of the Distributed Systems Simulator series. This part covers all 10 built-in protocols with examples.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/ds-sim'>ds-sim on Codeberg (modernized, English-translated version)</a><br />
<br />
<span>These are all the posts of this series:</span><br />
<br />
<a class='textlink' href='./2026-03-31-distributed-systems-simulator-part-1.html'>2026-03-31 Distributed Systems Simulator - Part 1: Introduction and GUI</a><br />
<a class='textlink' href='./2026-04-01-distributed-systems-simulator-part-2.html'>2026-04-01 Distributed Systems Simulator - Part 2: Built-in Protocols (You are currently reading this)</a><br />
<a class='textlink' href='./2026-04-02-distributed-systems-simulator-part-3.html'>2026-04-02 Distributed Systems Simulator - Part 3: Advanced Examples and Protocol API</a><br />
<br />
<a href='./distributed-systems-simulator/ds-sim-screenshot.png'><img alt='Screenshot: The Distributed Systems Simulator running a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' title='Screenshot: The Distributed Systems Simulator running a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' src='./distributed-systems-simulator/ds-sim-screenshot.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#distributed-systems-simulator---part-2-built-in-protocols'>Distributed Systems Simulator - Part 2: Built-in Protocols</a></li>
<li>⇢ <a href='#protocols-and-examples'>Protocols and Examples</a></li>
<li>⇢ ⇢ <a href='#dummy-protocol'>Dummy Protocol</a></li>
<li>⇢ ⇢ <a href='#ping-pong-protocol'>Ping-Pong Protocol</a></li>
<li>⇢ ⇢ <a href='#broadcast-protocol'>Broadcast Protocol</a></li>
<li>⇢ ⇢ <a href='#internal-synchronization-protocol'>Internal Synchronization Protocol</a></li>
<li>⇢ ⇢ <a href='#christian-s-method-external-synchronization'>Christian&#39;s Method (External Synchronization)</a></li>
<li>⇢ ⇢ <a href='#berkeley-algorithm'>Berkeley Algorithm</a></li>
<li>⇢ ⇢ <a href='#one-phase-commit-protocol'>One-Phase Commit Protocol</a></li>
<li>⇢ ⇢ <a href='#two-phase-commit-protocol'>Two-Phase Commit Protocol</a></li>
<li>⇢ ⇢ <a href='#basic-multicast-protocol'>Basic Multicast Protocol</a></li>
<li>⇢ ⇢ <a href='#reliable-multicast-protocol'>Reliable Multicast Protocol</a></li>
</ul><br />
<h2 style='display: inline' id='protocols-and-examples'>Protocols and Examples</h2><br />
<br />
<span>The simulator comes with 10 built-in protocols. As described earlier, each protocol is split into a server side and a client side: servers can respond to client messages, and clients can respond to server messages. Each process can support any number of protocols on both sides. Users can also implement their own protocols using the simulator&#39;s Protocol API (covered in Part 3 of this series).</span><br />
<br />
<span>The program directory contains a <span class='inlinecode'>saved-simulations</span> folder with example simulations for each protocol as serialized <span class='inlinecode'>.dat</span> files.</span><br />
<br />
<h3 style='display: inline' id='dummy-protocol'>Dummy Protocol</h3><br />
<br />
<span>The Dummy Protocol serves only as a template for creating custom protocols. When using the Dummy Protocol, only log messages are output when events occur. No further actions are performed.</span><br />
<br />
<h3 style='display: inline' id='ping-pong-protocol'>Ping-Pong Protocol</h3><br />
<br />
<a href='./distributed-systems-simulator/ping-pong.png'><img alt='Visualization: The Ping-Pong Protocol showing two processes (P1 and P2) exchanging messages in a continuous back-and-forth pattern. Blue lines represent delivered messages bouncing between the process bars over a 15-second simulation.' title='Visualization: The Ping-Pong Protocol showing two processes (P1 and P2) exchanging messages in a continuous back-and-forth pattern. Blue lines represent delivered messages bouncing between the process bars over a 15-second simulation.' src='./distributed-systems-simulator/ping-pong.png' /></a><br />
<br />
<span>In the Ping-Pong Protocol, two processes -- Client P1 and Server P2 -- constantly send messages back and forth. The client sends the first request, the server responds, the client replies again, and so on. Each message carries a counter that is incremented at each station and logged in the log window.</span><br />
<br />
<pre>
Programmed Ping-Pong Events:

| Time (ms) | PID | Event                          |
|-----------|-----|--------------------------------|
| 0         | 1   | Ping-Pong Client activate      |
| 0         | 2   | Ping-Pong Server activate      |
| 0         | 1   | Ping-Pong Client request start |
</pre>
<br />
<span>It is important that Process 1 activates its Ping-Pong client before starting a Ping-Pong client request. Before a process can start a request, it must have the corresponding protocol activated. This also applies to all other protocols.</span><br />
<br />
<span><strong>Ping-Pong Storm Variant</strong></span><br />
<br />
<a href='./distributed-systems-simulator/ping-pong-storm.png'><img alt='Visualization: The Ping-Pong Storm variant with three processes. P1 is the client, P2 and P3 are both servers. The visualization shows an exponentially growing number of messages as each client message generates two server responses, creating a dense web of blue and green message lines.' title='Visualization: The Ping-Pong Storm variant with three processes. P1 is the client, P2 and P3 are both servers. The visualization shows an exponentially growing number of messages as each client message generates two server responses, creating a dense web of blue and green message lines.' src='./distributed-systems-simulator/ping-pong-storm.png' /></a><br />
<br />
<span>By adding a third process, P3, as an additional Ping-Pong server, a Ping-Pong "Storm" can be created. Since every client message now receives two server responses, the number of messages doubles with each round, creating an exponential message flood.</span><br />
<br />
<pre>
Programmed Ping-Pong Storm Events:

| Time (ms) | PID | Event                          |
|-----------|-----|--------------------------------|
| 0         | 1   | Ping-Pong Client activate      |
| 0         | 2   | Ping-Pong Server activate      |
| 0         | 3   | Ping-Pong Server activate      |
| 0         | 1   | Ping-Pong Client request start |
</pre>
<br />
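<span>The exponential growth is easy to see arithmetically. As a small illustrative sketch in Java (not part of the simulator itself):</span><br />
<br />
<pre>
// Ping-Pong Storm: with two servers, every client message triggers
// two responses, so the number of messages doubles each round.
class PingPongStorm {
    static long messagesInRound(int round) {
        long n = 1;
        for (int i = 0; i != round; i++) {
            n *= 2; // each round doubles the message count
        }
        return n;
    }

    public static void main(String[] args) {
        for (int round = 0; round != 5; round++) {
            System.out.println(messagesInRound(round)); // 1, 2, 4, 8, 16
        }
    }
}
</pre>
<br />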
<h3 style='display: inline' id='broadcast-protocol'>Broadcast Protocol</h3><br />
<br />
<a href='./distributed-systems-simulator/broadcast.png'><img alt='Visualization: The Broadcast Protocol with 6 processes (P1-P6). Dense crossing message lines show how a broadcast from P1 propagates to all processes, with each process re-broadcasting to others. Blue lines indicate delivered messages, green lines indicate messages still in transit.' title='Visualization: The Broadcast Protocol with 6 processes (P1-P6). Dense crossing message lines show how a broadcast from P1 propagates to all processes, with each process re-broadcasting to others. Blue lines indicate delivered messages, green lines indicate messages still in transit.' src='./distributed-systems-simulator/broadcast.png' /></a><br />
<br />
<span>The Broadcast Protocol behaves similarly to the Ping-Pong Protocol. The difference is that the protocol tracks -- using a unique Broadcast ID -- which messages have already been sent. Each process re-broadcasts all received messages to others, provided it has not already sent them.</span><br />
<br />
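<span>The deduplication logic can be sketched in Java like this (a fixed-size table of seen Broadcast IDs; illustrative only, not the simulator&#39;s actual implementation):</span><br />
<br />
<pre>
// Broadcast deduplication: a process re-broadcasts a message only if
// it has not seen its Broadcast ID before.
class BroadcastDedup {
    private final boolean[] seen = new boolean[1024]; // illustrative capacity

    // Returns true if the message should be re-broadcast.
    boolean shouldRebroadcast(int broadcastId) {
        if (seen[broadcastId]) {
            return false; // already sent, drop it
        }
        seen[broadcastId] = true;
        return true;
    }

    public static void main(String[] args) {
        BroadcastDedup p = new BroadcastDedup();
        System.out.println(p.shouldRebroadcast(7)); // true
        System.out.println(p.shouldRebroadcast(7)); // false
    }
}
</pre>
<br />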
<span>In this protocol, no distinction is made between client and server behavior: the same action is performed when a message arrives on either side. Because a client can only receive server messages and a server can only receive client messages, every process in this simulation is registered as both server and client. P1 acts as the client and starts a request at 0ms and at 2500ms; the simulation runs for exactly 5000ms.</span><br />
<br />
<pre>
Programmed Broadcast Events:

| Time (ms) | PID | Event                            |
|-----------|-----|----------------------------------|
| 0         | 1-6 | Broadcast Client activate        |
| 0         | 1-6 | Broadcast Server activate        |
| 0         | 1   | Broadcast Client request start   |
| 2500      | 1   | Broadcast Client request start   |
</pre>
<br />
<h3 style='display: inline' id='internal-synchronization-protocol'>Internal Synchronization Protocol</h3><br />
<br />
<a href='./distributed-systems-simulator/int-sync.png'><img alt='Visualization: Internal Synchronization with 2 processes. P1 (client, clock drift 0.1) shows a faster-running clock reaching 15976ms by simulation end. The blue message lines show P1 periodically synchronizing with P2 (server, no drift), with the time corrections visible as slight adjustments in P1&#39;s timeline.' title='Visualization: Internal Synchronization with 2 processes. P1 (client, clock drift 0.1) shows a faster-running clock reaching 15976ms by simulation end. The blue message lines show P1 periodically synchronizing with P2 (server, no drift), with the time corrections visible as slight adjustments in P1&#39;s timeline.' src='./distributed-systems-simulator/int-sync.png' /></a><br />
<br />
<span>The Internal Synchronization Protocol synchronizes the local process time, which is useful when a process&#39;s clock runs incorrectly due to clock drift. When the client wants to synchronize its (incorrect) local process time t_c with a server, it sends a client request. The server responds with its own local process time t_s, from which the client calculates a new, more accurate time for itself.</span><br />
<br />
<span>After receiving the server response, the client P1 calculates its new local process time as:</span><br />
<br />
<pre>
t_c := t_s + 1/2 * (t&#39;_min + t&#39;_max)
</pre>
<br />
<span>This synchronizes P1&#39;s local time with an error of less than 1/2 * (t&#39;_max - t&#39;_min), where t&#39;_min and t&#39;_max are the assumed minimum and maximum transmission times configured in the protocol settings.</span><br />
<br />
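<span>As a minimal sketch in Java (variable names are illustrative and not part of the simulator&#39;s API), the client-side calculation looks like this:</span><br />
<br />
<pre>
// Internal Synchronization: the client adjusts its clock using the
// server time t_s and the assumed min/max transmission times.
class InternalSync {
    // New local client time in milliseconds.
    static long synchronize(long tS, long tMinAssumed, long tMaxAssumed) {
        return tS + (tMinAssumed + tMaxAssumed) / 2;
    }

    // Upper bound on the synchronization error.
    static long errorBound(long tMinAssumed, long tMaxAssumed) {
        return (tMaxAssumed - tMinAssumed) / 2;
    }

    public static void main(String[] args) {
        // Default protocol settings: t'_min = 500ms, t'_max = 2000ms.
        System.out.println(synchronize(10000, 500, 2000)); // 11250
        System.out.println(errorBound(500, 2000));         // 750
    }
}
</pre>
<br />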
<span>In the example, the client process has a clock drift of 0.1 and the server has 0.0. The client starts a request at local process times 0ms, 5000ms, and 10000ms. By simulation end, P1&#39;s time is synchronized to 15976ms, an error of 976ms relative to the global time of 15000ms.</span><br />
<br />
<pre>
Programmed Internal Sync Events:

| Time (ms) | PID | Event                              |
|-----------|-----|------------------------------------|
| 0         | 1   | Internal Sync Client activate      |
| 0         | 2   | Internal Sync Server activate      |
| 0         | 1   | Internal Sync Client request start |
| 5000      | 1   | Internal Sync Client request start |
| 10000     | 1   | Internal Sync Client request start |
</pre>
<br />
<span>Protocol variables (client-side):</span><br />
<br />
<ul>
<li>Min. transmission time (Long: 500): The assumed t&#39;_min in milliseconds</li>
<li>Max. transmission time (Long: 2000): The assumed t&#39;_max in milliseconds</li>
</ul><br />
<span>These can differ from the actual message transmission times t_min and t_max, allowing simulation of scenarios where the protocol is misconfigured and large synchronization errors occur.</span><br />
<br />
<h3 style='display: inline' id='christian-s-method-external-synchronization'>Christian&#39;s Method (External Synchronization)</h3><br />
<br />
<a href='./distributed-systems-simulator/christians.png'><img alt='Visualization: Comparison of Internal Synchronization (P1) and Christian&#39;s Method (P3) with P2 as shared server. Both P1 and P3 have clock drift 0.1. The visualization shows P1 synchronized to 14567ms (error: -433ms) while P3 synchronized to 15539ms (error: -539ms), demonstrating the different accuracy of the two methods.' title='Visualization: Comparison of Internal Synchronization (P1) and Christian&#39;s Method (P3) with P2 as shared server. Both P1 and P3 have clock drift 0.1. The visualization shows P1 synchronized to 14567ms (error: -433ms) while P3 synchronized to 15539ms (error: -539ms), demonstrating the different accuracy of the two methods.' src='./distributed-systems-simulator/christians.png' /></a><br />
<br />
<span>Christian&#39;s Method (more widely known as Cristian&#39;s algorithm, after Flaviu Cristian) uses the RTT (Round Trip Time) to approximate the transmission time of individual messages. When the client wants to synchronize its local time t_c with a server, it sends a request and measures the RTT t_rtt until the server response arrives. The server response contains the local process time t_s from the moment the server sent the response. The client then calculates its new local time as:</span><br />
<br />
<pre>
t_c := t_s + 1/2 * t_rtt
</pre>
<br />
<span>The accuracy is +/- (1/2 * t_rtt - u_min) where u_min is a lower bound for message transmission time.</span><br />
<br />
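<span>A minimal Java sketch of this calculation (variable names are illustrative, not the simulator&#39;s API):</span><br />
<br />
<pre>
// Christian's Method: the one-way delay is estimated as half the
// measured round trip time t_rtt.
class ChristiansMethod {
    static long synchronize(long tS, long tRtt) {
        return tS + tRtt / 2;
    }

    // Half-width of the accuracy interval, given a lower bound u_min
    // on the message transmission time.
    static long accuracy(long tRtt, long uMin) {
        return tRtt / 2 - uMin;
    }

    public static void main(String[] args) {
        System.out.println(synchronize(10000, 1800)); // 10900
        System.out.println(accuracy(1800, 500));      // 400
    }
}
</pre>
<br />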
<span>The visualization compares both synchronization methods side by side: P1 uses Internal Synchronization and P3 uses Christian&#39;s Method, with P2 serving both. Both P1 and P3 have a clock drift of 0.1. In this particular run, Internal Synchronization achieved the better result (an error of 433ms vs. 539ms), though results vary between runs due to the random transmission times.</span><br />
<br />
<pre>
Programmed Comparison Events:

| Time (ms) | PID | Event                                |
|-----------|-----|--------------------------------------|
| 0         | 1   | Internal Sync Client activate        |
| 0         | 1   | Internal Sync Client request start   |
| 0         | 2   | Christian&#39;s Server activate          |
| 0         | 2   | Internal Sync Server activate        |
| 0         | 3   | Christian&#39;s Client activate          |
| 0         | 3   | Christian&#39;s Client request start     |
| 5000      | 1   | Internal Sync Client request start   |
| 5000      | 3   | Christian&#39;s Client request start     |
| 10000     | 1   | Internal Sync Client request start   |
| 10000     | 3   | Christian&#39;s Client request start     |
</pre>
<br />
<h3 style='display: inline' id='berkeley-algorithm'>Berkeley Algorithm</h3><br />
<br />
<a href='./distributed-systems-simulator/berkeley.png'><img alt='Visualization: The Berkeley Algorithm with 3 processes. P2 is the server (coordinator) sending time requests to clients P1 and P3. After collecting responses, P2 calculates correction values and sends them back. Final times show P1=16823ms, P2=14434ms, P3=13892ms -- all brought closer together through averaging.' title='Visualization: The Berkeley Algorithm with 3 processes. P2 is the server (coordinator) sending time requests to clients P1 and P3. After collecting responses, P2 calculates correction values and sends them back. Final times show P1=16823ms, P2=14434ms, P3=13892ms -- all brought closer together through averaging.' src='./distributed-systems-simulator/berkeley.png' /></a><br />
<br />
<span>The Berkeley Algorithm is another method for synchronizing local clocks. This is the first protocol where the server initiates the requests. The server acts as a coordinator. The client processes are passive and must wait until a server request arrives. The server must know which client processes participate in the protocol, which is configured in the server&#39;s protocol settings.</span><br />
<br />
<span>When the server wants to synchronize its local time t_s and the process times t_i of the clients (i = 1,...,n), it sends a server request. n is the number of participating clients. The clients then send their local process times back to the server. The server measures the RTTs r_i for all client responses.</span><br />
<br />
<span>After all responses are received, the server sets its own time to the average t_avg of all known process times (including its own). The transmission time of a client response is estimated as half the RTT:</span><br />
<br />
<pre>
t_avg := 1/(n+1) * (t_s + SUM(r_i/2 + t_i))
t_s := t_avg
</pre>
<br />
<span>The server then calculates a correction value k_i := t_avg - t_i for each client and sends it back. Each client then sets its new local time to t_i := t_i + k_i.</span><br />
<br />
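<span>The averaging and correction steps can be sketched in Java as follows (array-based and illustrative; not the simulator&#39;s API):</span><br />
<br />
<pre>
// Berkeley Algorithm: the coordinator averages all known clocks and
// derives a correction value k_i for each client.
class Berkeley {
    // tS: server time; t[i]: reported client times; r[i]: measured RTTs.
    static long average(long tS, long[] t, long[] r) {
        long sum = tS;
        for (int i = 0; i != t.length; i++) {
            sum += r[i] / 2 + t[i]; // estimate client clock as t_i + r_i/2
        }
        return sum / (t.length + 1);
    }

    static long correction(long tAvg, long tI) {
        return tAvg - tI;
    }

    public static void main(String[] args) {
        long[] t = { 16000, 13000 }; // illustrative client times
        long[] r = { 1000, 2000 };   // illustrative RTTs
        long tAvg = average(14000, t, r);
        System.out.println(tAvg);                    // 14833
        System.out.println(correction(tAvg, 16000)); // -1167
    }
}
</pre>
<br />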
<pre>
Programmed Berkeley Events:

| Time (ms) | PID | Event                             |
|-----------|-----|-----------------------------------|
| 0         | 1   | Berkeley Client activate          |
| 0         | 2   | Berkeley Server activate          |
| 0         | 3   | Berkeley Client activate          |
| 0         | 2   | Berkeley Server request start     |
| 7500      | 2   | Berkeley Server request start     |
</pre>
<br />
<span>Protocol variables (server-side):</span><br />
<br />
<ul>
<li>PIDs of participating processes (Integer[]: [1,3]): The PIDs of the Berkeley client processes. The protocol will not work if a non-existent PID is specified or if the process does not support the Berkeley protocol on the client side.</li>
</ul><br />
<h3 style='display: inline' id='one-phase-commit-protocol'>One-Phase Commit Protocol</h3><br />
<br />
<a href='./distributed-systems-simulator/one-phase-commit.png'><img alt='Visualization: The One-Phase Commit Protocol with 3 processes. P1 crashes at 1000ms (shown in red) and recovers at 5000ms. P2 (server) periodically sends commit requests. The red lines show lost messages during P1&#39;s crash period, while blue lines show successful message exchanges after recovery.' title='Visualization: The One-Phase Commit Protocol with 3 processes. P1 crashes at 1000ms (shown in red) and recovers at 5000ms. P2 (server) periodically sends commit requests. The red lines show lost messages during P1&#39;s crash period, while blue lines show successful message exchanges after recovery.' src='./distributed-systems-simulator/one-phase-commit.png' /></a><br />
<br />
<span>The One-Phase Commit Protocol is designed to bring any number of clients to a common commit. In practice, this could be creating or deleting a file of which each client holds a local copy. The server acts as the coordinator and initiates the commit request, and it periodically resends the request until every client has acknowledged it. For this purpose, the PIDs of all participating client processes and a resend timer must be configured.</span><br />
<br />
<span>In the example, P1 and P3 are clients and P2 is the server. P1 crashes at 1000ms and recovers at 5000ms. The first two commit requests fail to reach P1 due to its crash. Only the third attempt succeeds. Each client acknowledges a commit request only once.</span><br />
<br />
<pre>
Programmed One-Phase Commit Events:

| Time (ms) | PID | Event                                  |
|-----------|-----|----------------------------------------|
| 0         | 1   | 1-Phase Commit Client activate         |
| 0         | 2   | 1-Phase Commit Server activate         |
| 0         | 3   | 1-Phase Commit Client activate         |
| 0         | 2   | 1-Phase Commit Server request start    |
| 1000      | 1   | Process crash                          |
| 5000      | 1   | Process revival                        |
</pre>
<br />
<span>Protocol variables (server-side):</span><br />
<br />
<ul>
<li>Time until resend (Long: timeout = 2500): Milliseconds to wait before resending the commit request</li>
<li>PIDs of participating processes (Integer[]: pids = [1,3]): The client process PIDs that should commit</li>
</ul><br />
<h3 style='display: inline' id='two-phase-commit-protocol'>Two-Phase Commit Protocol</h3><br />
<br />
<a href='./distributed-systems-simulator/two-phase-commit.png'><img alt='Visualization: The Two-Phase Commit Protocol with 3 processes. P2 (server) orchestrates a two-phase voting process with clients P1 and P3. The complex message pattern shows the voting phase followed by the commit/abort phase, with messages crossing between all three processes over a 10-second simulation.' title='Visualization: The Two-Phase Commit Protocol with 3 processes. P2 (server) orchestrates a two-phase voting process with clients P1 and P3. The complex message pattern shows the voting phase followed by the commit/abort phase, with messages crossing between all three processes over a 10-second simulation.' src='./distributed-systems-simulator/two-phase-commit.png' /></a><br />
<br />
<span>The Two-Phase Commit Protocol is an extension of the One-Phase Commit Protocol. The server first sends a request to all participating clients asking whether they want to commit. Each client responds with true or false. The server periodically retries until all results are collected. After receiving all votes, the server checks whether all clients voted true. If at least one client voted false, the commit process is aborted and a global result of false is sent to all clients. If all voted true, the global result true is sent. The global result is periodically resent until each client acknowledges receipt.</span><br />
<br />
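<span>The decision step after the voting phase is simple. A Java sketch (illustrative, not the simulator&#39;s implementation):</span><br />
<br />
<pre>
// Two-Phase Commit, decision step: the global result is true only
// if every participating client voted true.
class TwoPhaseCommitDecision {
    static boolean globalResult(boolean[] votes) {
        for (boolean vote : votes) {
            if (!vote) {
                return false; // a single false vote aborts the commit
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(globalResult(new boolean[] { true, true }));  // true
        System.out.println(globalResult(new boolean[] { true, false })); // false
    }
}
</pre>
<br />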
<span>In the example, P1 and P3 are clients and P2 is the server. The server sends its first request at 0ms. Here both P1 and P3 vote true, so the commit proceeds.</span><br />
<br />
<pre>
Programmed Two-Phase Commit Events:

| Time (ms) | PID | Event                                  |
|-----------|-----|----------------------------------------|
| 0         | 1   | 2-Phase Commit Client activate         |
| 0         | 2   | 2-Phase Commit Server activate         |
| 0         | 3   | 2-Phase Commit Client activate         |
| 0         | 2   | 2-Phase Commit Server request start    |
</pre>
<br />
<span>Example log extract showing the two-phase voting process:</span><br />
<br />
<pre>
000000ms: PID 2: Message sent; ID: 94; Protocol: 2-Phase Commit
                  Boolean: wantVote=true
000905ms: PID 3: Message received; ID: 94; Protocol: 2-Phase Commit
000905ms: PID 3: Message sent; ID: 95; Protocol: 2-Phase Commit
                  Integer: pid=3; Boolean: isVote=true; vote=true
000905ms: PID 3: Vote true sent
001880ms: PID 2: Message received; ID: 95; Protocol: 2-Phase Commit
001880ms: PID 2: Vote from Process 3 received! Result: true
001947ms: PID 1: Message received; ID: 94; Protocol: 2-Phase Commit
001947ms: PID 1: Vote true sent
003137ms: PID 2: Votes from all participating processes received!
                  Global result: true
003137ms: PID 2: Message sent; ID: 99; Protocol: 2-Phase Commit
                  Boolean: isVoteResult=true; voteResult=true
004124ms: PID 1: Global vote result received. Result: true
006051ms: PID 2: All participants have acknowledged the vote
010000ms: Simulation ended
</pre>
<br />
<span>Protocol variables (server-side):</span><br />
<br />
<ul>
<li>Time until resend (Long: timeout = 2500): Milliseconds to wait before resending</li>
<li>PIDs of participating processes (Integer[]: pids = [1,3]): Client PIDs that should vote and commit</li>
</ul><br />
<span>Protocol variables (client-side):</span><br />
<br />
<ul>
<li>Commit probability (Integer: ackProb = 50): The probability in percent that the client votes true (for commit)</li>
</ul><br />
<h3 style='display: inline' id='basic-multicast-protocol'>Basic Multicast Protocol</h3><br />
<br />
<a href='./distributed-systems-simulator/basic-multicast.png'><img alt='Visualization: The Basic Multicast Protocol with 3 processes. P2 (client) sends periodic multicast messages to servers P1 and P3. P3 crashes at 3000ms (shown in red) and recovers at 6000ms. Red lines indicate lost messages, blue lines show delivered messages. Some messages to P1 are also lost due to the 30% message loss probability.' title='Visualization: The Basic Multicast Protocol with 3 processes. P2 (client) sends periodic multicast messages to servers P1 and P3. P3 crashes at 3000ms (shown in red) and recovers at 6000ms. Red lines indicate lost messages, blue lines show delivered messages. Some messages to P1 are also lost due to the 30% message loss probability.' src='./distributed-systems-simulator/basic-multicast.png' /></a><br />
<br />
<span>The Basic Multicast Protocol is very simple: the client initiates the request, which represents a simple multicast message, and the Basic Multicast servers merely receive it -- no acknowledgments are sent. The client P2 sends a multicast message every 2500ms to the servers P1 and P3.</span><br />
<br />
<span>P1 can only receive multicast messages after 2500ms because it does not support the protocol before then. P3 is crashed from 3000ms to 6000ms and also cannot receive messages during that time. Each process has a 30% message loss probability, so some messages are lost in transit (shown in red).</span><br />
<br />
<span>In this example, the 3rd multicast message to P3 and the 5th and 6th messages to P1 were lost. Only the 4th multicast message reached both destinations.</span><br />
<br />
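<span>A quick Java sketch of the per-message loss decision (illustrative; the simulator&#39;s internals may differ):</span><br />
<br />
<pre>
import java.util.Random;

// Message delivery with loss: each message is dropped with the
// configured probability (30% in this example).
class LossyDelivery {
    static boolean delivered(Random rng, int lossPercent) {
        return rng.nextInt(100) >= lossPercent;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed for reproducibility
        int lost = 0;
        for (int i = 0; i != 10000; i++) {
            if (!delivered(rng, 30)) {
                lost++;
            }
        }
        System.out.println(lost); // roughly 3000 of 10000 messages
    }
}
</pre>
<br />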
<pre>
Programmed Basic Multicast Events:

| Time (ms) | PID | Event                                  |
|-----------|-----|----------------------------------------|
| 0         | 2   | Basic Multicast Client activate        |
| 0         | 3   | Basic Multicast Server activate        |
| 0         | 2   | Basic Multicast Client request start   |
| 2500      | 1   | Basic Multicast Server activate        |
| 2500      | 2   | Basic Multicast Client request start   |
| 3000      | 3   | Process crash                          |
| 5000      | 2   | Basic Multicast Client request start   |
| 6000      | 3   | Process revival                        |
| 7500      | 2   | Basic Multicast Client request start   |
| 10000     | 2   | Basic Multicast Client request start   |
| 12500     | 2   | Basic Multicast Client request start   |
</pre>
<br />
<h3 style='display: inline' id='reliable-multicast-protocol'>Reliable Multicast Protocol</h3><br />
<br />
<a href='./distributed-systems-simulator/reliable-multicast.png'><img alt='Visualization: The Reliable Multicast Protocol with 3 processes. P2 (client) sends multicast messages to servers P1 and P3, retrying until acknowledgments are received from all servers. P3 crashes at 3000ms and recovers at 10000ms. Red lines show lost messages, blue lines show delivered ones. Despite failures, all servers eventually receive and acknowledge the multicast.' title='Visualization: The Reliable Multicast Protocol with 3 processes. P2 (client) sends multicast messages to servers P1 and P3, retrying until acknowledgments are received from all servers. P3 crashes at 3000ms and recovers at 10000ms. Red lines show lost messages, blue lines show delivered ones. Despite failures, all servers eventually receive and acknowledge the multicast.' src='./distributed-systems-simulator/reliable-multicast.png' /></a><br />
<br />
<span>In the Reliable Multicast Protocol, the client periodically resends its multicast message until it has received an acknowledgment from all participating servers. After each retry, the client "forgets" which servers have already acknowledged, so each new attempt must be acknowledged again by all participants.</span><br />
<br />
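<span>This "forgetting" behavior can be sketched in Java as follows (illustrative, not the simulator&#39;s implementation):</span><br />
<br />
<pre>
// Reliable Multicast client state: on every resend the client forgets
// earlier ACKs, so each attempt must be acknowledged by all servers.
class ReliableMulticastClient {
    private final boolean[] acked;

    ReliableMulticastClient(int servers) {
        acked = new boolean[servers];
    }

    void resend() {
        for (int i = 0; i != acked.length; i++) {
            acked[i] = false; // forget previous acknowledgments
        }
    }

    void onAck(int serverIndex) {
        acked[serverIndex] = true;
    }

    boolean allAcked() {
        for (boolean a : acked) {
            if (!a) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        ReliableMulticastClient c = new ReliableMulticastClient(2);
        c.onAck(0);
        c.resend(); // the earlier ACK is forgotten
        c.onAck(0);
        c.onAck(1);
        System.out.println(c.allAcked()); // true
    }
}
</pre>
<br />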
<span>In the example, P2 is the client and P1 and P3 are the servers. At 0ms, the client initiates its multicast message. The message loss probability is set to 30% on all processes. The client needs exactly 5 attempts until successful delivery:</span><br />
<br />
<ul>
<li>Attempt 1: P1 doesn&#39;t support the protocol yet. P3 receives the message but its ACK is lost.</li>
<li>Attempt 2: The message to P1 is lost. P3 receives it but is crashed and can&#39;t process it.</li>
<li>Attempt 3: P1 receives the message and ACKs successfully. The message to P3 is lost.</li>
<li>Attempt 4: P1 receives and ACKs again. P3 receives it but is still crashed.</li>
<li>Attempt 5: Both P1 and P3 receive the message and ACK successfully.</li>
</ul><br />
<pre>
Programmed Reliable Multicast Events:

| Time (ms) | PID | Event                                    |
|-----------|-----|------------------------------------------|
| 0         | 3   | Reliable Multicast Server activate       |
| 0         | 2   | Reliable Multicast Client activate       |
| 0         | 2   | Reliable Multicast Client request start  |
| 2500      | 1   | Reliable Multicast Server activate       |
| 3000      | 3   | Process crash                            |
| 10000     | 3   | Process revival                          |
</pre>
<br />
<span>Example log extract:</span><br />
<br />
<pre>
000000ms: PID 2: Reliable Multicast Client activated
000000ms: PID 2: Message sent; ID: 280; Protocol: Reliable Multicast
                  Boolean: isMulticast=true
000000ms: PID 3: Reliable Multicast Server activated
001590ms: PID 3: Message received; ID: 280; Protocol: Reliable Multicast
001590ms: PID 3: ACK sent
002500ms: PID 1: Reliable Multicast Server activated
002500ms: PID 2: Message sent; ID: 282; Protocol: Reliable Multicast
                  Boolean: isMulticast=true
003000ms: PID 3: Crashed
005000ms: PID 2: Message sent; ID: 283; Protocol: Reliable Multicast
005952ms: PID 1: Message received; ID: 283
005952ms: PID 1: ACK sent
007937ms: PID 2: ACK from Process 1 received!
...
011813ms: PID 2: ACK from Process 3 received!
011813ms: PID 2: ACKs from all participating processes received!
015000ms: Simulation ended
</pre>
<br />
<span>Protocol variables (client-side, since the client initiates and resends the multicast):</span><br />
<br />
<ul>
<li>Time until resend (Long: timeout = 2500): Milliseconds to wait before resending the multicast</li>
<li>PIDs of participating processes (Integer[]: pids = [1,3]): Server PIDs that should receive the multicast</li>
</ul><br />
<span>Read the next post of this series:</span><br />
<br />
<a class='textlink' href='./2026-04-02-distributed-systems-simulator-part-3.html'>Distributed Systems Simulator - Part 3: Advanced Examples and Protocol API</a><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2026-03-01-loadbars-0.13.0-released.html'>2026-03-01 Loadbars 0.13.0 released</a><br />
<a class='textlink' href='./2022-12-24-ultrarelearning-java-my-takeaways.html'>2022-12-24 (Re)learning Java - My takeaways</a><br />
<a class='textlink' href='./2022-03-06-the-release-of-dtail-4.0.0.html'>2022-03-06 The release of DTail 4.0.0</a><br />
<a class='textlink' href='./2016-11-20-object-oriented-programming-with-ansi-c.html'>2016-11-20 Object oriented programming with ANSI C</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Distributed Systems Simulator - Part 1: Introduction and GUI</title>
        <link href="https://foo.zone/gemfeed/2026-03-31-distributed-systems-simulator-part-1.html" />
        <id>https://foo.zone/gemfeed/2026-03-31-distributed-systems-simulator-part-1.html</id>
        <updated>2026-03-31T00:00:00+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the first blog post of the Distributed Systems Simulator series, written for the recent v1.1.0 release. It explores the Java-based Distributed Systems Simulator program I created as my diploma thesis at the Aachen University of Applied Sciences (August 2008). The simulator offers both built-in implementations of common distributed systems algorithms and an extensible framework that allows researchers and practitioners to implement and test their own custom protocols within the simulation environment.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='distributed-systems-simulator---part-1-introduction-and-gui'>Distributed Systems Simulator - Part 1: Introduction and GUI</h1><br />
<br />
<span class='quote'>Published at 2026-03-31T00:00:00+03:00</span><br />
<br />
<span>This is the first blog post of the Distributed Systems Simulator series, written for the recent v1.1.0 release. It explores the Java-based Distributed Systems Simulator program I created as my diploma thesis at the Aachen University of Applied Sciences (August 2008). The simulator offers both built-in implementations of common distributed systems algorithms and an extensible framework that allows researchers and practitioners to implement and test their own custom protocols within the simulation environment.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/ds-sim'>ds-sim on Codeberg (modernized, English-translated version)</a><br />
<br />
<span>These are all the posts of this series:</span><br />
<br />
<a class='textlink' href='./2026-03-31-distributed-systems-simulator-part-1.html'>2026-03-31 Distributed Systems Simulator - Part 1: Introduction and GUI (You are currently reading this)</a><br />
<a class='textlink' href='./2026-04-01-distributed-systems-simulator-part-2.html'>2026-04-01 Distributed Systems Simulator - Part 2: Built-in Protocols</a><br />
<a class='textlink' href='./2026-04-02-distributed-systems-simulator-part-3.html'>2026-04-02 Distributed Systems Simulator - Part 3: Advanced Examples and Protocol API</a><br />
<br />
<a href='./distributed-systems-simulator/ds-sim-screenshot.png'><img alt='Screenshot: The Distributed Systems Simulator running a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' title='Screenshot: The Distributed Systems Simulator running a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' src='./distributed-systems-simulator/ds-sim-screenshot.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#distributed-systems-simulator---part-1-introduction-and-gui'>Distributed Systems Simulator - Part 1: Introduction and GUI</a></li>
<li>⇢ <a href='#motivation'>Motivation</a></li>
<li>⇢ <a href='#installation'>Installation</a></li>
<li>⇢ <a href='#fundamentals'>Fundamentals</a></li>
<li>⇢ ⇢ <a href='#clientserver-model'>Client/Server Model</a></li>
<li>⇢ ⇢ <a href='#processes-and-their-roles'>Processes and Their Roles</a></li>
<li>⇢ ⇢ <a href='#messages'>Messages</a></li>
<li>⇢ ⇢ <a href='#local-and-global-clocks'>Local and Global Clocks</a></li>
<li>⇢ ⇢ <a href='#events'>Events</a></li>
<li>⇢ ⇢ <a href='#protocols'>Protocols</a></li>
<li>⇢ <a href='#graphical-user-interface-gui'>Graphical User Interface (GUI)</a></li>
<li>⇢ ⇢ <a href='#simple-mode'>Simple Mode</a></li>
<li>⇢ ⇢ <a href='#the-menu-bar'>The Menu Bar</a></li>
<li>⇢ ⇢ <a href='#the-toolbar'>The Toolbar</a></li>
<li>⇢ ⇢ <a href='#the-visualization'>The Visualization</a></li>
<li>⇢ ⇢ <a href='#color-differentiation'>Color Differentiation</a></li>
<li>⇢ ⇢ <a href='#the-sidebar'>The Sidebar</a></li>
<li>⇢ ⇢ <a href='#the-log-window'>The Log Window</a></li>
<li>⇢ ⇢ <a href='#expert-mode'>Expert Mode</a></li>
<li>⇢ ⇢ <a href='#configuration-settings'>Configuration Settings</a></li>
</ul><br />
<h2 style='display: inline' id='motivation'>Motivation</h2><br />
<br />
<span>Distributed systems are complex: interactions between nodes, network partitions, and failure scenarios are hard to debug in production. A simulator lets you experiment with architectures, observe how systems behave under failure, and learn consensus algorithms, replication strategies, and fault tolerance in a controlled, repeatable environment. No operational overhead, no real infrastructure, just focused exploration of system design.</span><br />
<br />
<span>In the literature, one can find many definitions of a distributed system, and they differ enough that it is difficult to single out one as the correct definition. Andrew Tanenbaum and Maarten van Steen chose the following loose characterization of a distributed system:</span><br />
<br />
<span class='quote'>"A distributed system is a collection of independent computers that appears to its users as a single coherent system" - Andrew Tanenbaum and Maarten van Steen</span><br />
<br />
<span>The user only needs to interact with the local computer in front of them, while the software of the local computer ensures smooth communication with the other participating computers in the distributed system.</span><br />
<br />
<span>This thesis aims to make distributed systems easier to understand from a different angle. Instead of the end-user perspective, it focuses on the functional methods of protocols and their processes, making all relevant events of a distributed system transparent.</span><br />
<br />
<span>To achieve this, I developed a simulator, particularly for teaching and learning at the Aachen University of Applied Sciences. Distributed systems protocols, together with their most important influencing factors, can be replicated in simulations. At the same time, there is room for personal experiments: users are not restricted to a fixed set of protocols and can design their own.</span><br />
<br />
<span>The original simulator (VS-Sim) was written in Java 6 in 2008 with a German-language UI. In 2025, I revamped and modernized it as ds-sim: I translated the entire codebase and UI from German to English, migrated the build system from hand-rolled Ant scripts to Maven, and upgraded from Java 6 to Java 21 (adopting sealed class hierarchies, record types, formatted strings, and pattern matching). I also introduced a proper exception hierarchy with consistent error handling, added comprehensive Javadoc documentation, implemented a headless testing framework (208 unit tests covering core components, the event system, and all protocol implementations), reorganized the project structure to follow standard Maven conventions, and added architecture documentation. In total: 199 files, over 15,000 lines of new code. Back in 2008, I wrote every line by hand in Vim; for the 2025 modernization, Claude Code did most of the heavy lifting: translation, refactoring, test generation, documentation. Times have changed.</span><br />
<br />
<h2 style='display: inline' id='installation'>Installation</h2><br />
<br />
<span>The modernized ds-sim requires Java 21 or higher and Maven 3.8 or higher.</span><br />
<br />
<pre>
# Clone the repository
git clone https://codeberg.org/snonux/ds-sim.git
cd ds-sim

# Set JAVA_HOME if needed (e.g. on Fedora Linux)
export JAVA_HOME=/usr/lib/jvm/java-21-openjdk

# Build the project
mvn clean package

# Run the simulator
java -jar target/ds-sim-*.jar
</pre>
<br />
<span>For a faster development build without running tests:</span><br />
<br />
<pre>
mvn package -DskipTests
</pre>
<br />
<span>After building, the following artifacts are available in the <span class='inlinecode'>target/</span> directory:</span><br />
<br />
<ul>
<li><span class='inlinecode'>ds-sim-1.1.0.jar</span> - Executable JAR with all dependencies bundled</li>
<li><span class='inlinecode'>original-ds-sim-1.1.0.jar</span> - JAR without dependencies</li>
</ul><br />
<span>The project also includes 208 unit tests that can be run with <span class='inlinecode'>mvn test</span>. Example simulation files for all built-in protocols are included in the <span class='inlinecode'>saved-simulations/</span> directory.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/ds-sim'>ds-sim source code on Codeberg</a><br />
<br />
<h2 style='display: inline' id='fundamentals'>Fundamentals</h2><br />
<br />
<span>For basic understanding, some fundamentals are explained below. A deeper exploration follows in later sections.</span><br />
<br />
<h3 style='display: inline' id='clientserver-model'>Client/Server Model</h3><br />
<br />
<pre>
+-----------------------------------------+
|                                         |
|   +--------+         +--------+         |
|   | Client |&lt;-------&gt;| Server |         |
|   +--------+         +--------+         |
|                                         |
|       Sending of Messages               |
|                                         |
+-----------------------------------------+

Figure 1.1: Client/Server Model
</pre>
<br />
<span>The simulator is based on the client/server principle. Each simulation typically consists of a participating client and a server that communicate with each other via messages (see Fig. 1.1). In complex simulations, multiple clients and/or servers can also participate.</span><br />
<br />
<h3 style='display: inline' id='processes-and-their-roles'>Processes and Their Roles</h3><br />
<br />
<span>A distributed system is simulated using processes. Each process takes on one or more roles: one process can act as a client and another as a server, a single process can act as both client and server at the same time, and a process can even take on the roles of multiple servers and clients simultaneously. To identify a process, each one has a unique Process Identification Number (PID).</span><br />
<br />
<h3 style='display: inline' id='messages'>Messages</h3><br />
<br />
<span>In a distributed system, it must be possible to send messages. A message can be sent by a client or server process and can have any number of recipients. The content of a message depends on the protocol used. What is meant by a protocol will be covered later. To identify a message, each message has a unique Message Identification Number (NID).</span><br />
<br />
<h3 style='display: inline' id='local-and-global-clocks'>Local and Global Clocks</h3><br />
<br />
<span>In a simulation, there is exactly one global clock. It represents the current, always-correct time: the global clock never goes wrong.</span><br />
<br />
<span>Additionally, each participating process has its own local clock, which represents the current time of that process. Unlike the global clock, a local clock can show an incorrect time. If a process&#39;s time deviates from the global time, the clock was either reset during the simulation or is running fast or slow due to clock drift. The clock drift indicates by what factor the clock deviates; this will be discussed in more detail later.</span><br />
<br />
<pre>
+---------------------+     +---------------------+
|    Process 1        |     |    Process 2        |
|                     |     |                     |
| +-----------------+ |     | +-----------------+ |
| |Server Protocol A| |     | |Client Protocol A| |
| +-----------------+ |     | +-----------------+ |
|                     |     |                     |
| +-----------------+ |     +---------------------+
| |Client Protocol B| |
| +-----------------+ |     +---------------------+
|                     |     |    Process 3        |
+---------------------+     |                     |
                            | +-----------------+ |
                            | |Server Protocol B| |
                            | +-----------------+ |
                            |                     |
                            +---------------------+

Figure 1.2: Client/Server Protocols
</pre>
<br />
<span>In addition to normal clocks, vector timestamps and Lamport&#39;s logical clocks are also of interest. Unlike normal time, Lamport and vector times have no global equivalents. Concrete examples of Lamport and vector times will be covered later in the "Additional Examples" section.</span><br />
<br />
<h3 style='display: inline' id='events'>Events</h3><br />
<br />
<span>A simulation consists of the sequential execution of finitely many events. For example, an event can cause a process to send a message; a process crash event is also conceivable. Each event occurs at a specific point in time. Events with the same occurrence time are executed directly one after another by the simulator, but this does not hinder users: from their perspective, such events appear to execute in parallel.</span><br />
<br />
<span>Two main types of events are distinguished: programmable events and non-programmable events. Programmable events can be programmed and edited in the event editor, and their occurrence times depend on the local process clocks or the global clock. Non-programmable events, on the other hand, cannot be programmed in the event editor and do not occur because of a specific time, but due to other circumstances such as:</span><br />
<br />
<ul>
<li>Message receive events: Triggered when a message arrives at a recipient process</li>
<li>Protocol schedule events (alarms): Triggered by a timer set by a protocol, e.g. for retransmission timeouts</li>
<li>Random events: Such as random process crashes based on configured crash probability</li>
</ul><br />
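<span>Since the modernized codebase adopts Java 21 sealed class hierarchies, a split like the one above could be modeled as follows. This is a simplified sketch with illustrative type and field names, not ds-sim&#39;s actual event API:</span><br />
<br />
```java
// Illustrative sketch only: ds-sim's real event hierarchy differs. It models
// the split described above between programmable events (which carry an
// occurrence time) and non-programmable events (triggered by circumstances).
sealed interface Event permits TimedEvent, MessageReceiveEvent, ScheduleEvent {}

// Programmable: fires when a local or global clock reaches occurrenceTimeMs.
record TimedEvent(int pid, long occurrenceTimeMs, String action) implements Event {}

// Non-programmable: fires when a message arrives at a recipient process.
record MessageReceiveEvent(int receiverPid, int messageId) implements Event {}

// Non-programmable: fires when a protocol-set timer (alarm) expires.
record ScheduleEvent(int pid, long delayMs) implements Event {}

class EventDescriber {
    // Pattern matching over the sealed hierarchy is exhaustive: the compiler
    // rejects the switch if an event type is not handled.
    static String describe(Event e) {
        return switch (e) {
            case TimedEvent t -> "PID " + t.pid() + " at " + t.occurrenceTimeMs() + "ms: " + t.action();
            case MessageReceiveEvent m -> "PID " + m.receiverPid() + " received message " + m.messageId();
            case ScheduleEvent s -> "alarm for PID " + s.pid() + " in " + s.delayMs() + "ms";
        };
    }
}
```
<br />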
<h3 style='display: inline' id='protocols'>Protocols</h3><br />
<br />
<span>A simulation also involves the application of protocols. As already mentioned, a process can take on the roles of servers and/or clients, and for each server and client role, the associated protocol must be specified. A protocol defines how a client and a server send messages, how they react when a message arrives, and what data a message contains. A process only processes a received message if it understands the respective protocol.</span><br />
<br />
<span>Figure 1.2 shows three processes. Process 1 supports protocol "A" on the server side and protocol "B" on the client side; Process 2 supports protocol "A" on the client side; and Process 3 supports protocol "B" on the server side. This means that Process 1 can communicate with Process 2 via protocol "A" and with Process 3 via protocol "B". Processes 2 and 3 are incompatible with each other and cannot process each other&#39;s messages.</span><br />
<br />
<span>Clients cannot communicate with clients, and servers cannot communicate with servers. For communication, at least one client and one server are always required. However, this restriction can be circumvented by having processes support a given protocol on both the server and client sides (see Broadcast Protocol later).</span><br />
<br />
<h2 style='display: inline' id='graphical-user-interface-gui'>Graphical User Interface (GUI)</h2><br />
<br />
<h3 style='display: inline' id='simple-mode'>Simple Mode</h3><br />
<br />
<a href='./distributed-systems-simulator/ds-sim-screenshot2.png'><img alt='Screenshot: The simulator showing the settings dialog. The visualization area displays process bars with message lines between them. The settings window allows configuring simulation parameters like number of processes, simulation duration, clock drift, message loss probability, and more.' title='Screenshot: The simulator showing the settings dialog. The visualization area displays process bars with message lines between them. The settings window allows configuring simulation parameters like number of processes, simulation duration, clock drift, message loss probability, and more.' src='./distributed-systems-simulator/ds-sim-screenshot2.png' /></a><br />
<br />
<span>The simulator requires JDK 21 and can be started with the command <span class='inlinecode'>java -jar target/ds-sim-VERSION.jar</span>.</span><br />
<br />
<span>The simulator then opens a main window. To create a new simulation, select "New Simulation" from the "File" menu; the settings window for the new simulation then appears. The individual options will be discussed in more detail later; for now, only the default settings are used.</span><br />
<br />
<span>By default, the simulator starts in "simple mode". There is also an "expert mode", which will be discussed later.</span><br />
<br />
<h3 style='display: inline' id='the-menu-bar'>The Menu Bar</h3><br />
<br />
<span>In the File menu, you can create new simulations or close the currently open one. New simulations open in a new tab by default, but you can also open and close additional simulation windows, each with its own tabs. Every tab contains a simulation that is completely independent of the others, so any number of simulations can run in parallel. The menu items "Open", "Save" and "Save As" load and save simulations.</span><br />
<br />
<span>Through the Edit menu, users can access the simulation settings, which will be discussed in more detail later. This menu also lists all participating processes for editing. If the user selects a process there, the corresponding process editor opens. The Simulator menu offers the same options as the toolbar, which is described in the next section.</span><br />
<br />
<span>Some menu items are only accessible when a simulation has already been created or loaded in the current window.</span><br />
<br />
<h3 style='display: inline' id='the-toolbar'>The Toolbar</h3><br />
<br />
<span>The toolbar, located at the top left of the simulator, contains the functions users need most frequently. It offers four functions:</span><br />
<br />
<ul>
<li>Reset simulation: can only be activated when the simulation has been paused or has finished</li>
<li>Repeat simulation: cannot be activated if the simulation has not yet been started</li>
<li>Pause simulation: can only be activated when the simulation is currently running</li>
<li>Start simulation: can only be activated when the simulation is not currently running and has not yet finished</li>
</ul><br />
<h3 style='display: inline' id='the-visualization'>The Visualization</h3><br />
<br />
<span>The graphical simulation visualization is located in the center right. The X-axis shows the time in milliseconds, and all participating processes are listed on the Y-axis. The demo simulation ends after exactly 15 seconds. The visualization shows processes (with PIDs 1, 2, and 3), each with its own horizontal black bar. On these process bars, users can read the respective local process time. The vertical red line represents the global simulation time.</span><br />
<br />
<span>The process bars also serve as start and end points for messages. For example, if Process 1 sends a message to Process 2, a line is drawn from one process bar to the other. Messages that a process sends to itself are not visualized but are logged in the log window (more on this later).</span><br />
<br />
<span>Another way to open a process editor is to left-click on the process bar belonging to the process. A right-click, on the other hand, opens a popup window with additional options. A process can only be forced to crash or be revived via the popup menu during a running simulation.</span><br />
<br />
<span>In general, the number of processes can vary as desired. The simulation duration is at least 5 and at most 120 seconds. The simulation only ends when the global time reaches the specified simulation end time (here 15 seconds), not when a local process time reaches this end time.</span><br />
<br />
<h3 style='display: inline' id='color-differentiation'>Color Differentiation</h3><br />
<br />
<span>Colors help to better interpret the processes of a simulation. By default, processes (process bars) and messages are displayed with the following colors (these are only the default colors, which can be changed via the settings):</span><br />
<br />
<pre>
Process Colors:
  Black   - The simulation is not currently running
  Green   - The process is running normally
  Orange  - The mouse is over the process bar
  Red     - The process has crashed

Message Colors:
  Green   - The message is still in transit
  Blue    - The message has successfully reached its destination
  Red     - The message was lost
</pre>
<br />
<h3 style='display: inline' id='the-sidebar'>The Sidebar</h3><br />
<br />
<span>The sidebar is used to program process events. At the top, the process to be managed is selected (here with PID 1). In this process selection, there is also the option to select "All Processes", which displays all programmed events of all processes simultaneously. "Local events" are those events that occur when a certain local time of the associated process has been reached. The event table below lists all programmed events along with their occurrence times and PIDs.</span><br />
<br />
<span>To create a new event, the user can either right-click on a process bar and select "Insert local event", or select an event below the event table, enter the event occurrence time in the text field below, and click "Apply".</span><br />
<br />
<span>Right-clicking on the event editor allows you to either copy or delete all selected events. Using the Ctrl key, multiple events can be selected simultaneously. The entries in the Time and PID columns can be edited afterwards. This provides a convenient way to move already programmed events to a different time or assign them to a different process. However, users should ensure that they press the Enter key after changing the event occurrence time, otherwise the change will be ineffective.</span><br />
<br />
<span>In addition to the Events tab, the sidebar has another tab called "Variables". Behind this tab is the process editor of the currently selected process. There, all variables of the process can be edited, providing another way to access a process editor.</span><br />
<br />
<h3 style='display: inline' id='the-log-window'>The Log Window</h3><br />
<br />
<span>The log window (at the bottom) logs all occurring events in chronological order. At the beginning of each log entry, the global time in milliseconds is always logged. For each process, its local times as well as the Lamport and vector timestamps are also listed. After the time information, additional details are provided, such as which message was sent with what content and which protocol it belongs to. This will be demonstrated later with examples.</span><br />
<br />
<pre>
000000ms: New Simulation
000000ms: New Process; PID: 1; Local Time: 000000ms; Lamport time: 0; Vector time: (0,0,0)
000000ms: New Process; PID: 2; Local Time: 000000ms; Lamport time: 0; Vector time: (0,0,0)
000000ms: New Process; PID: 3; Local Time: 000000ms; Lamport time: 0; Vector time: (0,0,0)
</pre>
<br />
<span>The logging switch temporarily disables message logging: while it is deactivated, no new messages are written to the log window. After reactivating the switch, all omitted messages are written to the window retroactively. Deactivating logging can improve simulator performance.</span><br />
<br />
<h3 style='display: inline' id='expert-mode'>Expert Mode</h3><br />
<br />
<a href='./distributed-systems-simulator/ds-sim-screenshot.png'><img alt='Screenshot: The Distributed Systems Simulator in expert mode, showing a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' title='Screenshot: The Distributed Systems Simulator in expert mode, showing a Broadcast protocol simulation with 6 processes. The visualization shows message lines between process bars, with blue indicating delivered messages and green indicating messages still in transit.' src='./distributed-systems-simulator/ds-sim-screenshot.png' /></a><br />
<br />
<span>The simulator can be operated in two different modes: simple mode and expert mode. The simulator starts in simple mode by default, so users don&#39;t have to deal with the simulator&#39;s full functionality all at once. Simple mode is clearer but offers fewer functions. Expert mode is more suitable for experienced users and accordingly offers more flexibility. Expert mode can be activated or deactivated via the switch of the same name below the log window or via the simulation settings.</span><br />
<br />
<span>In expert mode, the following additional features become available:</span><br />
<br />
<ul>
<li>Global events: In addition to local events, global events can now also be edited. Global events are triggered when a specific global simulation time is reached, rather than a local process time. This only makes a difference when local process times differ from the global time (e.g. due to clock drift).</li>
<li>Direct PID selection: The user can directly select the associated PID when programming a new event.</li>
<li>Lamport and Vector time switches: If the user activates one of these two switches, the Lamport or vector timestamps are displayed in the visualization. Only one can be active at a time to maintain clarity.</li>
<li>Anti-aliasing switch: Allows the user to activate or deactivate anti-aliasing for smoother graphics. Disabled by default for performance reasons.</li>
<li>Log filter: A regular expression filter (Java syntax) that makes it possible to filter only the essential data from the logs. For example, <span class='inlinecode'>"PID: (1|2)"</span> shows only log lines containing "PID: 1" or "PID: 2". The filter can be activated retroactively and during a running simulation.</li>
</ul><br />
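<span>The semantics of such a filter can be illustrated with standard Java regular expressions. The following is a minimal sketch (not the simulator&#39;s actual code) that keeps only log lines containing a match for the given pattern, as with <span class='inlinecode'>"PID: (1|2)"</span>:</span><br />
<br />
```java
import java.util.List;
import java.util.regex.Pattern;

// Minimal sketch of a regex-based log filter (not ds-sim's actual code):
// a line is kept if the pattern matches anywhere in it (Matcher.find
// semantics, i.e. substring matching rather than whole-line matching).
class LogFilterDemo {
    static List<String> filter(List<String> lines, String regex) {
        Pattern p = Pattern.compile(regex);
        return lines.stream()
                .filter(line -> p.matcher(line).find())
                .toList();
    }
}
```
<br />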
<h3 style='display: inline' id='configuration-settings'>Configuration Settings</h3><br />
<br />
<span>The simulation settings window allows configuring many aspects of the simulation. Key settings include:</span><br />
<br />
<ul>
<li>Processes receive own messages (default: false): Whether processes can receive messages they sent to themselves.</li>
<li>Average message loss probabilities (default: true): Whether to average the loss probabilities of sender and receiver processes.</li>
<li>Average transmission times (default: true): Whether to average the transmission times of sender and receiver processes.</li>
<li>Show only relevant messages (default: true): Hides messages sent to processes that don&#39;t support the protocol.</li>
<li>Expert mode (default: false): Enables expert mode features.</li>
<li>Simulation speed (default: 0.5): The playback speed factor. A value of 1 means real-time, 0.5 means half speed.</li>
<li>Number of processes (default: 3): Can also be changed during simulation via right-click.</li>
<li>Simulation duration (default: 15s): Between 5 and 120 seconds.</li>
</ul><br />
<span>Each process also has individual settings:</span><br />
<br />
<ul>
<li>Clock drift (default: 0.0): By what factor the local clock deviates. A value of 0.0 means no deviation. A value of 1.0 means double speed. Values &gt; -1.0 are allowed.</li>
<li>Random crash probability (default: 0%): Probability that the process crashes randomly during the simulation.</li>
<li>Message loss probability (default: 0%): Probability that a message sent by this process is lost in transit.</li>
<li>Min/Max transmission time (default: 500ms/2000ms): The range for random message delivery times.</li>
</ul><br />
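<span>Under the drift semantics above (0.0 means no deviation, 1.0 means double speed), a local clock advances by a factor of (1 + drift) relative to global time. The following hypothetical helper, not part of ds-sim, just illustrates the arithmetic:</span><br />
<br />
```java
// Hypothetical helper illustrating the clock-drift arithmetic described
// above: drift 0.0 tracks global time exactly, drift 1.0 runs at double
// speed, and drift must stay above -1.0 (a clock cannot run backwards).
class DriftedClock {
    static long localElapsedMs(long globalElapsedMs, double drift) {
        if (drift <= -1.0) {
            throw new IllegalArgumentException("drift must be > -1.0");
        }
        // Local elapsed time = global elapsed time scaled by (1 + drift).
        return Math.round(globalElapsedMs * (1.0 + drift));
    }
}
```
<br />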
<span>Read the next post of this series:</span><br />
<br />
<a class='textlink' href='./2026-04-01-distributed-systems-simulator-part-2.html'>Distributed Systems Simulator - Part 2: Built-in Protocols</a><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2026-03-01-loadbars-0.13.0-released.html'>2026-03-01 Loadbars 0.13.0 released</a><br />
<a class='textlink' href='./2022-12-24-ultrarelearning-java-my-takeaways.html'>2022-12-24 (Re)learning Java - My takeaways</a><br />
<a class='textlink' href='./2022-03-06-the-release-of-dtail-4.0.0.html'>2022-03-06 The release of DTail 4.0.0</a><br />
<a class='textlink' href='./2016-11-20-object-oriented-programming-with-ansi-c.html'>2016-11-20 Object oriented programming with ANSI C</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>RCM: The Ruby Configuration Management DSL</title>
        <link href="https://foo.zone/gemfeed/2026-03-02-rcm-ruby-configuration-management-dsl.html" />
        <id>https://foo.zone/gemfeed/2026-03-02-rcm-ruby-configuration-management-dsl.html</id>
        <updated>2026-03-02T00:00:00+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>RCM is a tiny configuration management system written in Ruby. It gives me a small DSL for describing how I want my machines to look, then it applies the changes: create files and directories, manage packages, and make sure certain lines exist in configuration files. It's deliberately KISS and optimised for a single person's machines instead of a whole fleet.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='rcm-the-ruby-configuration-management-dsl'>RCM: The Ruby Configuration Management DSL</h1><br />
<br />
<span class='quote'>Published at 2026-03-02T00:00:00+02:00</span><br />
<br />
<span>RCM is a tiny configuration management system written in Ruby. It gives me a small DSL for describing how I want my machines to look, then it applies the changes: create files and directories, manage packages, and make sure certain lines exist in configuration files. It&#39;s deliberately KISS and optimised for a single person&#39;s machines instead of a whole fleet.</span><br />
<br />
<a href='./rcm-ruby-configuration-management-dsl/rcm-dsl.png'><img alt='RCM DSL in action' title='RCM DSL in action' src='./rcm-ruby-configuration-management-dsl/rcm-dsl.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#rcm-the-ruby-configuration-management-dsl'>RCM: The Ruby Configuration Management DSL</a></li>
<li>⇢ <a href='#why-i-built-rcm'>Why I built RCM</a></li>
<li>⇢ <a href='#how-the-dsl-feels'>How the DSL feels</a></li>
<li>⇢ ⇢ <a href='#keywords-and-resources'>Keywords and resources</a></li>
<li>⇢ ⇢ <a href='#files-directories-and-templates'>Files, directories, and templates</a></li>
<li>⇢ <a href='#how-ruby-s-metaprogramming-helps'>How Ruby&#39;s metaprogramming helps</a></li>
<li>⇢ ⇢ <a href='#a-bit-more-about-methodmissing'>A bit more about <span class='inlinecode'>method_missing</span></a></li>
<li>⇢ <a href='#ruby-metaprogramming-further-reading'>Ruby metaprogramming: further reading</a></li>
<li>⇢ <a href='#safety-dry-runs-and-debugging'>Safety, dry runs, and debugging</a></li>
<li>⇢ <a href='#rcm-vs-puppet-and-other-big-tools'>RCM vs Puppet and other big tools</a></li>
<li>⇢ <a href='#cutting-rcm-010'>Cutting RCM 0.1.0</a></li>
<li>⇢ <a href='#what-s-next'>What&#39;s next</a></li>
<li>⇢ <a href='#feature-overview-for-now'>Feature overview (for now)</a></li>
<li>⇢ ⇢ <a href='#template-rendering-into-a-file'>Template rendering into a file</a></li>
<li>⇢ ⇢ <a href='#ensuring-a-line-is-absent-from-a-file'>Ensuring a line is absent from a file</a></li>
<li>⇢ ⇢ <a href='#guarding-a-configuration-run-on-the-current-hostname'>Guarding a configuration run on the current hostname</a></li>
<li>⇢ ⇢ <a href='#creating-and-deleting-directories-and-purging-a-directory-tree'>Creating and deleting directories, and purging a directory tree</a></li>
<li>⇢ ⇢ <a href='#managing-file-and-directory-modes-and-ownership'>Managing file and directory modes and ownership</a></li>
<li>⇢ ⇢ <a href='#using-a-chained-more-natural-language-style-for-notifications'>Using a chained, more natural language style for notifications</a></li>
<li>⇢ ⇢ <a href='#touching-files-and-updating-their-timestamps'>Touching files and updating their timestamps</a></li>
<li>⇢ ⇢ <a href='#expressing-dependencies-between-notifications'>Expressing dependencies between notifications</a></li>
<li>⇢ ⇢ <a href='#creating-and-updating-symbolic-links'>Creating and updating symbolic links</a></li>
<li>⇢ ⇢ <a href='#detecting-duplicate-resource-definitions-at-configure-time'>Detecting duplicate resource definitions at configure time</a></li>
</ul><br />
<h2 style='display: inline' id='why-i-built-rcm'>Why I built RCM</h2><br />
<br />
<span>I&#39;ve used (and still use) the usual suspects in configuration management: Puppet, Ansible, etc. They are powerful, but also come with orchestration layers, agents, inventories, and a lot of moving parts. For my personal machines I wanted something smaller: one Ruby process, one configuration file, a few resource types, and good enough safety features.</span><br />
<br />
<span>I&#39;ve always been a fan of Ruby&#39;s metaprogramming features, and this project let me explore them in a focused, practical way.</span><br />
<br />
<span>Because of that metaprogramming support, Ruby is a great fit for DSLs. You can get very close to natural language without inventing a brand-new syntax. RCM leans into that: the goal is to read a configuration and understand what happens without jumping between multiple files or templating languages.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/rcm'>RCM repo on Codeberg</a><br />
<br />
<h2 style='display: inline' id='how-the-dsl-feels'>How the DSL feels</h2><br />
<br />
<span>An RCM configuration starts with a <span class='inlinecode'>configure</span> block. Inside it you declare resources (<span class='inlinecode'>file</span>, <span class='inlinecode'>package</span>, <span class='inlinecode'>given</span>, <span class='inlinecode'>notify</span>, …). RCM figures out dependencies between resources and runs them in the right order.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  given { hostname is :earth }

  file <font color="#808080">'/tmp/test/wg0.conf'</font> <b><u><font color="#000000">do</font></u></b>
    requires file <font color="#808080">'/etc/hosts.test'</font>
    manage directory
    from template
    <font color="#808080">'content with &lt;%= 1 + 2 %&gt;'</font>
  <b><u><font color="#000000">end</font></u></b>

  file <font color="#808080">'/etc/hosts.test'</font> <b><u><font color="#000000">do</font></u></b>
    line <font color="#808080">'192.168.1.101 earth'</font>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<span>Which would look like this when run:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>% sudo ruby example.rb
INFO <font color="#000000">20260301</font>-<font color="#000000">213817</font> dsl(<font color="#000000">0</font>) =&gt; Configuring...
INFO <font color="#000000">20260301</font>-<font color="#000000">213817</font> file(<font color="#808080">'/tmp/test/wg0.conf'</font>) =&gt; Registered dependency on file(<font color="#808080">'/etc/hosts.test'</font>)
INFO <font color="#000000">20260301</font>-<font color="#000000">213817</font> file(<font color="#808080">'/tmp/test/wg0.conf'</font>) =&gt; Evaluating...
INFO <font color="#000000">20260301</font>-<font color="#000000">213817</font> file(<font color="#808080">'/etc/hosts.test'</font>) =&gt; Evaluating...
INFO <font color="#000000">20260301</font>-<font color="#000000">213817</font> file(<font color="#808080">'/etc/hosts.test'</font>) =&gt; Writing file /etc/hosts.<b><u><font color="#000000">test</font></u></b>
INFO <font color="#000000">20260301</font>-<font color="#000000">213817</font> file(<font color="#808080">'/tmp/test/wg0.conf'</font>) =&gt; Creating parent directory /tmp/test
INFO <font color="#000000">20260301</font>-<font color="#000000">213817</font> file(<font color="#808080">'/tmp/test/wg0.conf'</font>) =&gt; Writing file /tmp/test/wg<font color="#000000">0</font>.conf
</pre>
<br />
<span>The idea is that you describe the desired state and RCM worries about the steps. The <span class='inlinecode'>given</span> block can short‑circuit the whole run (for example, only run on a specific hostname). Each <span class='inlinecode'>file</span> resource can either manage a complete file (from a template) or just make sure individual lines are present.</span><br />
<br />
<h3 style='display: inline' id='keywords-and-resources'>Keywords and resources</h3><br />
<br />
<span>Under the hood, each DSL word is either a keyword or a resource:</span><br />
<br />
<ul>
<li><span class='inlinecode'>Keyword</span> is the base class for all top‑level DSL constructs.</li>
<li><span class='inlinecode'>Resource</span> is the base class for things RCM can manage (files, packages, and so on).</li>
</ul><br />
<span>Resources can declare dependencies with <span class='inlinecode'>requires</span>. Before a resource runs, RCM makes sure all its requirements are satisfied and only evaluates each resource once per run. This keeps the mental model simple even when you compose more complex configurations.</span><br />
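<span>The run-order rule can be sketched in a few lines of plain Ruby. This is not RCM's actual code (class and method names like <span class='inlinecode'>Resource</span> and <span class='inlinecode'>evaluate!</span> are made up for illustration), just the idea: evaluate requirements first, and evaluate each resource at most once per run:</span><br />
<br />
```ruby
# Sketch of "requires" semantics: depth-first evaluation of
# dependencies, each resource evaluated at most once per run.
class Resource
  attr_reader :name, :requires

  def initialize(name, requires: [])
    @name = name
    @requires = requires
    @evaluated = false
  end

  # Evaluate all requirements first, then this resource itself.
  def evaluate!(registry, log)
    return if @evaluated
    @evaluated = true
    @requires.each { |dep| registry.fetch(dep).evaluate!(registry, log) }
    log.push(name)
  end
end

registry = {
  'wg0.conf'   => Resource.new('wg0.conf', requires: ['hosts.test']),
  'hosts.test' => Resource.new('hosts.test')
}

log = []
registry.each_value { |r| r.evaluate!(registry, log) }
# hosts.test is evaluated before wg0.conf, and only once.
```
<br />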
<br />
<h3 style='display: inline' id='files-directories-and-templates'>Files, directories, and templates</h3><br />
<br />
<span>The <span class='inlinecode'>file</span> resource handles three common cases:</span><br />
<br />
<ul>
<li>Managing parent directories (<span class='inlinecode'>manage directory</span>) so you don&#39;t have to create them manually.</li>
<li>Rendering ERB templates (<span class='inlinecode'>from template</span>) so you can mix Ruby expressions into config files.</li>
<li>Ensuring individual lines exist (<span class='inlinecode'>line</span>) for the many "append this line if missing" situations.</li>
</ul><br />
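<span>The <span class='inlinecode'>from template</span> case builds on Ruby's standard ERB library. A minimal stand-alone version of the idea (not RCM's actual implementation; the helper name is made up) could look like this:</span><br />
<br />
```ruby
require 'erb'
require 'tmpdir'

# Render an ERB template string and write the result to a file,
# roughly what a "from template" file resource does under the hood.
def render_to_file(path, template)
  File.write(path, ERB.new(template).result(binding))
end

rendered = Dir.mktmpdir do |dir|
  path = File.join(dir, 'example.conf')
  render_to_file(path, 'One plus two is <%= 1 + 2 %>!')
  File.read(path)
end
```
<br />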
<span>Every write operation creates a backup copy in <span class='inlinecode'>.rcmbackup/</span>, so you can always inspect what changed and roll back manually if needed.</span><br />
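<span>The backup behaviour can be approximated in a few lines as well. The <span class='inlinecode'>.rcmbackup/</span> location matches what RCM uses, but the helper itself is a hypothetical sketch:</span><br />
<br />
```ruby
require 'fileutils'
require 'tmpdir'

# Before overwriting a file, copy the current version into a
# .rcmbackup/ directory next to it, timestamped so nothing is lost.
def backup_then_write(path, content)
  if File.exist?(path)
    backup_dir = File.join(File.dirname(path), '.rcmbackup')
    FileUtils.mkdir_p(backup_dir)
    stamp = Time.now.strftime('%Y%m%d-%H%M%S')
    FileUtils.cp(path, File.join(backup_dir, "#{File.basename(path)}.#{stamp}"))
  end
  File.write(path, content)
end

backups = Dir.mktmpdir do |dir|
  path = File.join(dir, 'hosts.test')
  backup_then_write(path, "v1\n")  # no backup yet: file did not exist
  backup_then_write(path, "v2\n")  # v1 is copied into .rcmbackup/ first
  Dir.children(File.join(dir, '.rcmbackup')).size
end
```
<br />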
<br />
<h2 style='display: inline' id='how-ruby-s-metaprogramming-helps'>How Ruby&#39;s metaprogramming helps</h2><br />
<br />
<span>The nice thing about RCM is that the Ruby code you write in your configuration is not that different from the Ruby code inside RCM itself. The DSL is just a thin layer on top.</span><br />
<br />
<span>For example, when you write:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>file <font color="#808080">'/etc/hosts.test'</font> <b><u><font color="#000000">do</font></u></b>
  line <font color="#808080">'192.168.1.101 earth'</font>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<span>Ruby turns <span class='inlinecode'>file</span> into a method call and <span class='inlinecode'>&#39;/etc/hosts.test&#39;</span> into a normal argument. Inside RCM, that method builds a <span class='inlinecode'>File</span> resource object and stores it for later. The block you pass is just a Ruby block; RCM calls it with the file resource as <span class='inlinecode'>self</span>, so method calls like <span class='inlinecode'>line</span> configure that resource. There is no special parser here, just plain Ruby method and block dispatch.</span><br />
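<span>A stripped-down version of that dispatch fits in a few lines. <span class='inlinecode'>instance_exec</span> is the standard Ruby feature that runs the block with the resource as <span class='inlinecode'>self</span>; the class and method names here are illustrative, not RCM's real API:</span><br />
<br />
```ruby
# Sketch of the file/block dispatch: "file" builds a resource object,
# and instance_exec runs the block with that object as self.
class FileResource
  attr_reader :path, :lines

  def initialize(path)
    @path = path
    @lines = []
  end

  # DSL method available inside the block.
  def line(text)
    @lines.push(text)
  end
end

RESOURCES = []

def file(path, &block)
  resource = FileResource.new(path)
  resource.instance_exec(&block) if block  # block's self is now the resource
  RESOURCES.push(resource)
  resource
end

file '/etc/hosts.test' do
  line '192.168.1.101 earth'
end
```
<br />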
<br />
<span>The same goes for constructs like:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>given { hostname is :earth }
</pre>
<br />
<span>RCM uses Ruby&#39;s dynamic method lookup to interpret <span class='inlinecode'>hostname</span> and <span class='inlinecode'>is</span> in that block and to decide whether the rest of the configuration should run at all. Features like <span class='inlinecode'>method_missing</span>, blocks, and the ability to change what <span class='inlinecode'>self</span> means in a block make this kind of DSL possible with very little code. You still get all the power of Ruby (conditionals, loops, helper methods), but the surface reads like a small language of its own.</span><br />
<br />
<h3 style='display: inline' id='a-bit-more-about-methodmissing'>A bit more about <span class='inlinecode'>method_missing</span></h3><br />
<br />
<span><span class='inlinecode'>method_missing</span> is one of the key tools that make the RCM DSL feel natural. In plain Ruby, if you call a method that does not exist, you get a <span class='inlinecode'>NoMethodError</span>. But before Ruby raises that error, it checks whether the object implements <span class='inlinecode'>method_missing</span>. If it does, Ruby calls that instead and lets the object decide what to do.</span><br />
<br />
<span>In RCM, you can write things like:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>given { hostname is :earth }
</pre>
<br />
<span>Inside that block, calls such as <span class='inlinecode'>hostname</span> and <span class='inlinecode'>is</span> don&#39;t map to normal Ruby methods. Instead, RCM&#39;s DSL objects see those calls in <span class='inlinecode'>method_missing</span>, and interpret them as "check the current hostname" and "compare it to this symbol". This lets the DSL stay small and flexible: adding a new keyword can be as simple as handling another case in <span class='inlinecode'>method_missing</span>, without changing the Ruby syntax at all.</span><br />
<br />
<span>Put differently: you can write what looks like a tiny English sentence (<span class='inlinecode'>hostname is :earth</span>) and Ruby breaks it into method calls (<span class='inlinecode'>hostname</span>, then <span class='inlinecode'>is</span>) that RCM can interpret dynamically. Those "barewords" are not special syntax; they are just regular Ruby method names that the DSL catches and turns into configuration logic at runtime.</span><br />
<br />
<span>Here&#39;s a simplified sketch of how such a condition object could look in Ruby:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>require <font color="#808080">'socket'</font>

<b><u><font color="#000000">class</font></u></b> HostCondition
  <b><u><font color="#000000">def</font></u></b> initialize
    <b><font color="#000000">@current_hostname</font></b> = Socket.gethostname.to_sym
  <b><u><font color="#000000">end</font></u></b>

  <b><u><font color="#000000">def</font></u></b> method_missing(name, *args, &amp;)
    <b><u><font color="#000000">case</font></u></b> name
    <b><u><font color="#000000">when</font></u></b> :hostname
      <b><font color="#000000">@left</font></b> = <b><font color="#000000">@current_hostname</font></b>
      <b><u><font color="#000000">self</font></u></b>               <i><font color="silver"># allow chaining: hostname is :earth</font></i>
    <b><u><font color="#000000">when</font></u></b> :is
      <b><font color="#000000">@left</font></b> == args.first
    <b><u><font color="#000000">else</font></u></b>
      <b><u><font color="#000000">super</font></u></b>
    <b><u><font color="#000000">end</font></u></b>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>

HostCondition.new.hostname.is(:earth)
</pre>
<br />
<span>RCM&#39;s real code is more sophisticated, but the idea is the same: Ruby happily calls <span class='inlinecode'>method_missing</span> for unknown methods like <span class='inlinecode'>hostname</span> and <span class='inlinecode'>is</span>, and the DSL turns those calls into a value (<span class='inlinecode'>true</span>/<span class='inlinecode'>false</span>) that decides whether the rest of the configuration should run.</span><br />
<br />
<h2 style='display: inline' id='ruby-metaprogramming-further-reading'>Ruby metaprogramming: further reading</h2><br />
<br />
<span>If you want to dive deeper into the ideas behind RCM&#39;s DSL, these books are great starting points:</span><br />
<br />
<ul>
<li>"Metaprogramming Ruby 2" by Paolo Perrotta</li>
<li>"The Well-Grounded Rubyist" by David A. Black (and others)</li>
<li>"Eloquent Ruby" by Russ Olsen</li>
</ul><br />
<span>They all cover Ruby&#39;s object model, blocks, <span class='inlinecode'>method_missing</span>, and other metaprogramming techniques in much more detail than I can in a single blog post.</span><br />
<br />
<h2 style='display: inline' id='safety-dry-runs-and-debugging'>Safety, dry runs, and debugging</h2><br />
<br />
<span>RCM has a <span class='inlinecode'>--dry</span> mode: it logs what it would do without actually touching the file system. I use this when iterating on new configurations or refactoring existing ones. Combined with the built‑in logging and debug output, it&#39;s straightforward to see which resources were scheduled and in which order.</span><br />
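<span>The pattern behind a dry-run mode is simple: route every mutating action through one place that checks a flag. A sketch of the idea, not RCM's actual code:</span><br />
<br />
```ruby
# Sketch of a --dry mode: all writes go through one method that
# either performs the action or only logs what it would do.
class Runner
  attr_reader :log

  def initialize(dry: false)
    @dry = dry
    @log = []
  end

  def write_file(path, content)
    if @dry
      @log.push("DRY would write #{path}")
    else
      File.write(path, content)
      @log.push("wrote #{path}")
    end
  end
end

runner = Runner.new(dry: true)
runner.write_file('/tmp/does-not-get-written', 'hello')
```
<br />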
<br />
<span>Because RCM is just Ruby, there&#39;s no separate agent protocol or daemon. The same process parses the DSL, resolves dependencies, and performs the actions. If something goes wrong, you can drop into the code, add a quick debug statement, and re‑run your configuration.</span><br />
<br />
<h2 style='display: inline' id='rcm-vs-puppet-and-other-big-tools'>RCM vs Puppet and other big tools</h2><br />
<br />
<span>RCM does not try to compete with Puppet, Chef, or Ansible on scale. Those tools shine when you manage hundreds or thousands of machines, have multiple teams contributing modules, and need centralised orchestration, reporting, and role&#8209;based access control. They also come with their own DSLs, servers/agents, certificate handling, and a long list of resource types and modules. Of the three, Ansible is probably closest in spirit to RCM, but it is still far more complex.</span><br />
<br />
<span>For my personal use cases, that layer is mostly overhead. I want:</span><br />
<br />
<ul>
<li>No extra daemon, message bus, or master node.</li>
<li>No separate DSL to learn besides Ruby itself.</li>
<li>A codebase small enough that I can understand and change all of it in an evening.</li>
<li>Behaviour I can inspect just by reading the Ruby code.</li>
</ul><br />
<span>In that space RCM wins: it is small, transparent, and tuned for one person (me!) managing a handful of personal machines and laptops. I still think tools like Puppet are the right choice for larger organisations and shared infrastructure, but RCM gives me a tiny, focused alternative for my own systems.</span><br />
<br />
<h2 style='display: inline' id='cutting-rcm-010'>Cutting RCM 0.1.0</h2><br />
<br />
<span>As of this post I&#39;m tagging and releasing <b>RCM 0.1.0</b>. About 99% of the code has been written by me so far, and before AI agents take over more of the boilerplate and wiring work, it felt like a good moment to cut a release and mark this mostly&#8209;human baseline.</span><br />
<br />
<span>Future changes will very likely involve more automated help, but 0.1.0 is the snapshot of the original, hand‑crafted version of the tool.</span><br />
<br />
<h2 style='display: inline' id='what-s-next'>What&#39;s next</h2><br />
<br />
<span>RCM already does what I need on my machines, but there are a few ideas I want to explore:</span><br />
<br />
<ul>
<li>More resource types (for example, services and users) while keeping the core small.</li>
<li>Additional package backends beyond Fedora/DNF (in particular Homebrew on macOS).</li>
<li>Managing hosts remotely.</li>
<li>A slightly more structured way to organise larger configurations without losing the KISS spirit.</li>
</ul><br />
<h2 style='display: inline' id='feature-overview-for-now'>Feature overview (for now)</h2><br />
<br />
<span>Here is a quick overview of what RCM can do today, grouped by area:</span><br />
<br />
<ul>
<li>File management: <span class='inlinecode'>file &#39;/path&#39;</span>, <span class='inlinecode'>manage directory</span>, <span class='inlinecode'>from template</span>, <span class='inlinecode'>line &#39;...&#39;</span></li>
<li>Packages: <span class='inlinecode'>package &#39;name&#39;</span> resources for installing and updating packages (currently focused on Fedora/DNF)</li>
<li>Conditions and flow: <span class='inlinecode'>given { ... }</span> blocks, predicates such as <span class='inlinecode'>hostname is :earth</span></li>
<li>Notifications and dependencies: <span class='inlinecode'>requires</span> between resources, <span class='inlinecode'>notify</span> for follow‑up actions</li>
<li>Safety and execution modes: backups in <span class='inlinecode'>.rcmbackup/</span>, <span class='inlinecode'>--dry</span> runs, debug logging</li>
</ul><br />
<span>Some small examples adapted from RCM&#39;s own tests:</span><br />
<br />
<h3 style='display: inline' id='template-rendering-into-a-file'>Template rendering into a file</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  file <font color="#808080">'./.file_example.rcmtmp'</font> <b><u><font color="#000000">do</font></u></b>
    from template
    <font color="#808080">'One plus two is &lt;%= 1 + 2 %&gt;!'</font>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='ensuring-a-line-is-absent-from-a-file'>Ensuring a line is absent from a file</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  file <font color="#808080">'./.file_example.rcmtmp'</font> <b><u><font color="#000000">do</font></u></b>
    line <font color="#808080">'Whats up?'</font>
    is absent
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='guarding-a-configuration-run-on-the-current-hostname'>Guarding a configuration run on the current hostname</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  given { hostname Socket.gethostname }
  ...
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='creating-and-deleting-directories-and-purging-a-directory-tree'>Creating and deleting directories, and purging a directory tree</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  directory <font color="#808080">'./.directory_example.rcmtmp'</font> <b><u><font color="#000000">do</font></u></b>
    is present
  <b><u><font color="#000000">end</font></u></b>

  directory delete <b><u><font color="#000000">do</font></u></b>
    path <font color="#808080">'./.directory_example.rcmtmp'</font>
    is absent
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='managing-file-and-directory-modes-and-ownership'>Managing file and directory modes and ownership</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  touch <font color="#808080">'./.mode_example.rcmtmp'</font> <b><u><font color="#000000">do</font></u></b>
    mode 0o600
  <b><u><font color="#000000">end</font></u></b>

  directory <font color="#808080">'./.mode_example_dir.rcmtmp'</font> <b><u><font color="#000000">do</font></u></b>
    mode 0o705
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='using-a-chained-more-natural-language-style-for-notifications'>Using a chained, more natural language style for notifications</h3><br />
<br />
<span>This example just prints a message and doesn&#39;t change anything on the system:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  notify hello dear world <b><u><font color="#000000">do</font></u></b>
    thank you to be part of you
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='touching-files-and-updating-their-timestamps'>Touching files and updating their timestamps</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  touch <font color="#808080">'./.touch_example.rcmtmp'</font>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='expressing-dependencies-between-notifications'>Expressing dependencies between notifications</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  notify foo <b><u><font color="#000000">do</font></u></b>
    requires notify bar <b><u><font color="#000000">and</font></u></b> requires notify baz
    <font color="#808080">'foo_message'</font>
  <b><u><font color="#000000">end</font></u></b>

  notify bar

  notify baz <b><u><font color="#000000">do</font></u></b>
    requires notify bar
    <font color="#808080">'baz_message'</font>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='creating-and-updating-symbolic-links'>Creating and updating symbolic links</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  symlink <font color="#808080">'./.symlink_example.rcmtmp'</font> <b><u><font color="#000000">do</font></u></b>
    manage directory
    <font color="#808080">'./.symlink_target_example.rcmtmp'</font>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<h3 style='display: inline' id='detecting-duplicate-resource-definitions-at-configure-time'>Detecting duplicate resource definitions at configure time</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>configure <b><u><font color="#000000">do</font></u></b>
  notify :foo
  notify :foo <i><font color="silver"># raises RCM::DSL::DuplicateResource</font></i>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<span>If you find RCM interesting, feel free to browse the code, adapt it to your own setup, or just steal ideas for your own Ruby DSLs. I will probably extend it with more features over time as my own needs evolve.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts:</span><br />
<br />
<a class='textlink' href='./2026-03-02-rcm-ruby-configuration-management-dsl.html'>2026-03-02 RCM: The Ruby Configuration Management DSL (You are currently reading this)</a><br />
<a class='textlink' href='./2025-10-11-key-takeaways-from-the-well-grounded-rubyist.html'>2025-10-11 Key Takeaways from The Well-Grounded Rubyist</a><br />
<a class='textlink' href='./2021-07-04-the-well-grounded-rubyist.html'>2021-07-04 The Well-Grounded Rubyist</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Site Reliability Engineering - Part 5: System Design, Incidents, and Learning</title>
        <link href="https://foo.zone/gemfeed/2026-03-01-site-reliability-engineering-part-5.html" />
        <id>https://foo.zone/gemfeed/2026-03-01-site-reliability-engineering-part-5.html</id>
        <updated>2026-03-01T12:00:00+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Welcome to Part 5 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I'm here to share what SRE is all about in this blog series.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='site-reliability-engineering---part-5-system-design-incidents-and-learning'>Site Reliability Engineering - Part 5: System Design, Incidents, and Learning</h1><br />
<br />
<span class='quote'>Published at 2026-03-01T12:00:00+02:00</span><br />
<br />
<span>Welcome to Part 5 of my Site Reliability Engineering (SRE) series. I&#39;m currently working as a Site Reliability Engineer, and I&#39;m here to share what SRE is all about in this blog series.</span><br />
<br />
<a class='textlink' href='./2023-08-18-site-reliability-engineering-part-1.html'>2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture</a><br />
<a class='textlink' href='./2023-11-19-site-reliability-engineering-part-2.html'>2023-11-19 Site Reliability Engineering - Part 2: Operational Balance</a><br />
<a class='textlink' href='./2024-01-09-site-reliability-engineering-part-3.html'>2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture</a><br />
<a class='textlink' href='./2024-09-07-site-reliability-engineering-part-4.html'>2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers</a><br />
<a class='textlink' href='./2026-03-01-site-reliability-engineering-part-5.html'>2026-03-01 Site Reliability Engineering - Part 5: System Design, Incidents, and Learning (You are currently reading this)</a><br />
<br />
<pre>
    ___
   /   \     resilience
  |  o  |  &lt;----------  learning
   \___/
</pre>
<br />
<span>This time I want to share some themes that build on what we&#39;ve already covered: how system design and incident analysis fit together, why observability should not be an afterthought, and how a design‑improvement loop keeps systems getting better. Let&#39;s dive in!</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#site-reliability-engineering---part-5-system-design-incidents-and-learning'>Site Reliability Engineering - Part 5: System Design, Incidents, and Learning</a></li>
<li>⇢ <a href='#system-design-and-incident-analysis'>System Design and Incident Analysis</a></li>
<li>⇢ ⇢ <a href='#resilience-and-cascading-failures'>Resilience and cascading failures</a></li>
<li>⇢ ⇢ <a href='#learning-from-incidents'>Learning from incidents</a></li>
<li>⇢ <a href='#observability-don-t-leave-it-for-when-it-s-too-late'>Observability: Don&#39;t leave it for when it&#39;s too late</a></li>
<li>⇢ <a href='#the-iterative-spirit'>The iterative spirit</a></li>
<li>⇢ <a href='#book-tips'>Book tips</a></li>
</ul><br />
<h2 style='display: inline' id='system-design-and-incident-analysis'>System Design and Incident Analysis</h2><br />
<br />
<span>In my experience, a big chunk of SRE work revolves around system design and incident analysis. The thing that really matters is whether your system can contain cascading failures—because if it can&#39;t, one bad component can take everything down.</span><br />
<br />
<h3 style='display: inline' id='resilience-and-cascading-failures'>Resilience and cascading failures</h3><br />
<br />
<span>What I&#39;ve seen work well is thinking about resilience early—at design time, not after the first outage. You look for the weak points, address them before production, and try to keep the blast radius small when (not if) something fails.</span><br />
<br />
<h3 style='display: inline' id='learning-from-incidents'>Learning from incidents</h3><br />
<br />
<span>When incidents do happen, their analysis is a goldmine. Every incident exposes gaps—whether in tooling (ops tools that aren&#39;t up to the job) or in skills (engineers missing critical know-how). Blaming "human error" doesn&#39;t help. The job is to dig into root causes and fix the system. Postmortems that focus on customer impact help us distil lessons and make the system more robust so we&#39;re less likely to repeat the same failure.</span><br />
<br />
<span>System design and incident analysis form a feedback loop: we improve the design based on what we learn from incidents, and a better design reduces the impact of the next one.</span><br />
<br />
<h2 style='display: inline' id='observability-don-t-leave-it-for-when-it-s-too-late'>Observability: Don&#39;t leave it for when it&#39;s too late</h2><br />
<br />
<span>Here&#39;s something I&#39;ve seen over and over: teams agree that "we need better observability" when they&#39;re already in the middle of an incident&#8212;and by then it&#39;s too late. Observability is too often an afterthought compared to product features, but you really need it in place before things go wrong. Tools that can query high-cardinality data and give you granular insight into what&#39;s happening&#8212;that&#39;s what saves you when chaos hits. So invest in it early. Trust me on this one.</span><br />
<br />
<h2 style='display: inline' id='the-iterative-spirit'>The iterative spirit</h2><br />
<br />
<span>We also accept that system design is never "done." We refine it based on real-world performance, incident learnings, and changing needs. Every incident is a chance to learn and improve; the emphasis is on learning, not blame. SREs work with developers, backend teams, and incident response so that the whole system keeps getting better. It&#39;s never perfect, but that&#39;s kind of the point.</span><br />
<br />
<h2 style='display: inline' id='book-tips'>Book tips</h2><br />
<br />
<span>If you want to go deeper, here are a few books I can recommend:</span><br />
<br />
<ul>
<li>97 Things Every SRE Should Know: Collective Wisdom from the Experts, edited by Emil Stolarsky and Jaime Woo</li>
<li>Site Reliability Engineering: How Google Runs Production Systems by Jennifer Petoff, Niall Murphy, Betsy Beyer, and Chris Jones</li>
<li>Implementing Service Level Objectives by Alex Hidalgo</li>
</ul><br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Loadbars 0.13.0 released</title>
        <link href="https://foo.zone/gemfeed/2026-03-01-loadbars-0.13.0-released.html" />
        <id>https://foo.zone/gemfeed/2026-03-01-loadbars-0.13.0-released.html</id>
        <updated>2026-03-01T00:00:00+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Loadbars is a real-time server load monitoring tool. It connects to one or more Linux hosts via SSH and shows CPU, memory, network, load average, and disk I/O as vertical colored bars in an SDL window. You can run it locally or point it at your servers and see what's happening right now — like `top` or `vmstat`, but visual and across multiple hosts at once.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='loadbars-0130-released'>Loadbars 0.13.0 released</h1><br />
<br />
<span class='quote'>Published at 2026-03-01T00:00:00+02:00</span><br />
<br />
<span>Loadbars is a real-time server load monitoring tool. It connects to one or more Linux hosts via SSH and shows CPU, memory, network, load average, and disk I/O as vertical colored bars in an SDL window. You can run it locally or point it at your servers and see what&#39;s happening right now — like <span class='inlinecode'>top</span> or <span class='inlinecode'>vmstat</span>, but visual and across multiple hosts at once.</span><br />
<br />
<a href='./loadbars-0.13.0-released/loadbars.gif'><img alt='Loadbars in action' title='Loadbars in action' src='./loadbars-0.13.0-released/loadbars.gif' /></a><br />
<br />
<span>Loadbars can connect to hundreds of servers in parallel; the GIF above doesn&#39;t do it justice — at scale you get a wall of bars that makes it easy to spot outliers and compare hosts at a glance.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/loadbars'>Loadbars on Codeberg</a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#loadbars-0130-released'>Loadbars 0.13.0 released</a></li>
<li>⇢ <a href='#what-loadbars-is-and-isn-t'>What Loadbars is (and isn&#39;t)</a></li>
<li>⇢ <a href='#use-cases'>Use cases</a></li>
<li>⇢ <a href='#what-s-new-since-the-perl-version'>What&#39;s new since the Perl version</a></li>
<li>⇢ <a href='#core-features'>Core features</a></li>
<li>⇢ ⇢ <a href='#load-average-bars'>Load average bars</a></li>
<li>⇢ ⇢ <a href='#disk-io-bars'>Disk I/O bars</a></li>
<li>⇢ ⇢ <a href='#global-reference-lines-and-options'>Global reference lines and options</a></li>
<li>⇢ ⇢ <a href='#cpu-monitoring'>CPU monitoring</a></li>
<li>⇢ ⇢ <a href='#memory-and-network'>Memory and network</a></li>
<li>⇢ ⇢ <a href='#all-hotkeys'>All hotkeys</a></li>
<li>⇢ ⇢ <a href='#ssh-and-config'>SSH and config</a></li>
<li>⇢ ⇢ <a href='#building-and-platforms'>Building and platforms</a></li>
</ul><br />
<h2 style='display: inline' id='what-loadbars-is-and-isn-t'>What Loadbars is (and isn&#39;t)</h2><br />
<br />
<span>Loadbars shows the current state only. It is not a tool for collecting loads and drawing graphs for later analysis. There is no history, no recording, no database. Tools like Prometheus or Grafana require significant setup before producing results. Loadbars lets you observe the current state immediately: one binary, SSH (or local), and you&#39;re done.</span><br />
<br />
<pre>
┌─ Loadbars 0.13.0 ─────────────────────────────────────────┐
│                                                           │
│  ████  ████  ████  ██  ████  ████  ████  ██  ░░██  ░░██   │
│  ████  ████  ████  ██  ████  ████  ████  ██  ░░██  ░░██   │
│  ████  ████  ████  ██  ████  ████  ████  ██  ░░██  ░░██   │
│   CPU   cpu0  cpu1  mem  CPU   cpu0  cpu1  mem  net   net │
│  └──── host1 ────┘      └──── host2 ────┘                 │
└───────────────────────────────────────────────────────────┘
</pre>
<br />
<h2 style='display: inline' id='use-cases'>Use cases</h2><br />
<br />
<ul>
<li>Deployments and rollouts: watch CPU, memory, and network across app servers or nodes while you deploy. Spot the one that isn&#39;t coming up or is stuck under load.</li>
<li>Load testing: run your load tool against a cluster and see which hosts (or cores) are saturated, whether memory or disk I/O is the bottleneck, and how load spreads.</li>
<li>Quick health sweep: no dashboards set up yet? SSH to a handful of hosts and run Loadbars. You get an instant picture of who&#39;s busy, who&#39;s idle, and who&#39;s swapping.</li>
<li>Comparing hosts: side-by-side bars make it easy to see if one machine is hotter than the rest (e.g. after a config change or migration).</li>
<li>Local tuning: run <span class='inlinecode'>loadbars --hosts localhost</span> while you benchmark or stress a single box; the bars and load-average view help correlate activity with what you&#39;re doing.</li>
</ul><br />
<h2 style='display: inline' id='what-s-new-since-the-perl-version'>What&#39;s new since the Perl version</h2><br />
<br />
<span>The original Loadbars (Perl + SDL, ~2010–2013) had CPU, memory, network, ClusterSSH, and a config file. The Go rewrite and subsequent releases added the following, with a note on why each one matters:</span><br />
<br />
<ul>
<li>Load average bars: the Perl version had no load average. Now you get 1/5/15-minute load per host. Useful because load average is the classic "how queued is this box" signal — you see saturation and trends at a glance without reading numbers.</li>
</ul><br />
<ul>
<li>Disk I/O bars: disk was invisible in the Perl version. You now get read/write throughput (and optionally utilization %) per host or per device. Whole-disk devices only (partitions, loop, ram, zram, and device-mapper are excluded). Useful when you need to tell "is this slow because of CPU or because of disk?" — especially with many hosts, one disk-heavy host stands out. Disk smoothing (config diskaverage, hotkeys b/x) lets you tune how much the bars are averaged.</li>
</ul><br />
<ul>
<li>Extended peak line on CPU: a 1px line shows max system+user over the last N samples. Useful to see short spikes that the stacked bar might smooth out, so you don&#39;t miss bursty load.</li>
</ul><br />
<ul>
<li>Tooltips and host highlight: hover the mouse over any bar to see a tooltip with exact values (CPU %, memory, network, load, or disk depending on bar type). The hovered host&#39;s bars are highlighted (inverted) so you can tell which host you&#39;re over. Useful when you have hundreds of bars and want to read a specific number or confirm which host a bar belongs to.</li>
</ul><br />
<ul>
<li>GuestNice in CPU bars: CPU bars now show GuestNice as a lime green segment (above Nice). One more breakdown for virtualized or container workloads.</li>
</ul><br />
<ul>
<li>Version in window title: the default SDL title is "Loadbars &lt;version&gt; (press h for help on stdout)". Override with --title when you need a custom label.</li>
</ul><br />
<ul>
<li>Global average CPU line (key g): a single red line across all hosts at the fleet-average CPU. Useful when you have hundreds of bars: you instantly see which hosts are above or below average without comparing bar heights in your head.</li>
</ul><br />
<ul>
<li>Global I/O average line (key i): same idea for iowait+IRQ. Useful to spot which hosts are waiting on I/O more than the rest — quick way to find the disk-bound or interrupt-heavy machines.</li>
</ul><br />
<ul>
<li>Host separator lines (key s): a thin red vertical line between each host&#39;s bars. Useful at scale so you don&#39;t lose track of where one host ends and the next begins when the window is full of bars.</li>
</ul><br />
<ul>
<li>Scale reset (key r): reset the auto-scale for load and disk back to the floor. Useful after a big spike so the bars don&#39;t stay compressed for the rest of the session.</li>
</ul><br />
<ul>
<li>Toggle CPU off (key 1 cycles through aggregate → per-core → off): the Perl version didn&#39;t let you turn CPU bars off. Useful when you want to focus only on memory, network, load, or disk and reduce clutter.</li>
</ul><br />
<ul>
<li>maxbarsperrow: wrap bars into multiple rows instead of one long row. Useful with many hosts so the window doesn&#39;t become impossibly wide; you get a grid and can still scan everything.</li>
</ul><br />
<ul>
<li>maxwidth: cap on window width in pixels (default 1900). Stops the window growing unbounded with many hosts; use together with maxbarsperrow for a predictable layout.</li>
</ul><br />
<ul>
<li>Startup visibility flags: --showmem, --shownet, --showload, --extended, --cpumode, --diskmode (and friends) let you start with the bars you care about already on. Useful so you don&#39;t have to press 2, 3, 4, 5 every time.</li>
</ul><br />
<ul>
<li>Window title (--title): set the SDL window title. Useful when you run several Loadbars windows (e.g. one per cluster or environment) and need to tell them apart in your taskbar or window list.</li>
</ul><br />
<ul>
<li>SSH options (--sshopts): pass extra flags to ssh (e.g. ConnectTimeout, ProxyJump). Useful on locked-down or jump-host setups so Loadbars works without changing your global SSH config for a one-off session.</li>
</ul><br />
<ul>
<li>hasagent: skip extra SSH agent checks when you know the key is already loaded. Useful to avoid startup delay or warnings when you&#39;ve already run ssh-add and are monitoring many hosts.</li>
</ul><br />
<ul>
<li>Config file covers every option: any flag from --help can be set in ~/.loadbarsrc (no leading --). The Perl version had a config file too, but the Go version supports the full option set. Useful for reproducible setups and sharing.</li>
</ul><br />
<ul>
<li>Positional host arguments: you can run <span class='inlinecode'>loadbars server1 server2</span> without --hosts. Convenience when you only have a few hosts.</li>
</ul><br />
<ul>
<li>macOS as client: run the Loadbars binary on a Mac and connect to Linux servers via SSH. The Perl version was Linux-only. Useful to watch production from a laptop without a Linux VM or second machine.</li>
</ul><br />
<ul>
<li>Single static binary: no Perl runtime, no SDL Perl modules, no CPAN. Useful for deployment — copy one file to a jump host or new machine and run it.</li>
</ul><br />
<ul>
<li>Unit tests: mage test (or go test). The Go version has proper tests; useful for development and catching regressions.</li>
</ul><br />
<ul>
<li>Window resize (arrow keys): resize the window with the keyboard (left/right = width, up/down = height). Useful to fit more or fewer bars on screen without touching the mouse. (The Perl version had mouse-based resize; Go uses arrow keys.)</li>
</ul><br />
<ul>
<li>Hundreds of hosts in parallel: the Go implementation connects to all hosts concurrently and keeps polling without blocking. The Perl version struggled with many hosts. Useful for large fleets; you get a real "wall of bars" instead of a subset.</li>
</ul><br />
<h2 style='display: inline' id='core-features'>Core features</h2><br />
<br />
<h3 style='display: inline' id='load-average-bars'>Load average bars</h3><br />
<br />
<span>Press <span class='inlinecode'>4</span> or <span class='inlinecode'>l</span> to toggle. Each host gets a bar: teal fill (1-min load), yellow 1px line (5-min), white 1px line (15-min). Scale: auto (floor 2.0) or fixed with <span class='inlinecode'>--loadmax N</span>. Press <span class='inlinecode'>r</span> to reset auto-scale.</span><br />
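<br />
<span>On Linux, those three numbers come straight from <span class='inlinecode'>/proc/loadavg</span>, which is presumably what Loadbars samples on each host. You can eyeball the same values by hand:</span><br />
<br />
<pre>
$ cat /proc/loadavg
0.42 0.35 0.31 1/523 12345
# fields: 1-min, 5-min, 15-min load, running/total tasks, last PID
</pre>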
<br />
<h3 style='display: inline' id='disk-io-bars'>Disk I/O bars</h3><br />
<br />
<span>Press <span class='inlinecode'>5</span> to toggle: aggregate (all whole-disk devices per host) → per-device → off. Partitions, loop, ram, zram, and device-mapper are excluded. Purple fill from top = read, darker purple from bottom = write. Extended mode (<span class='inlinecode'>e</span>) adds a 3px disk-utilization line. Config: <span class='inlinecode'>diskmode</span>, <span class='inlinecode'>diskmax</span>, <span class='inlinecode'>diskaverage</span>. <span class='inlinecode'>b</span>/<span class='inlinecode'>x</span> change disk average samples.</span><br />
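<br />
<span>The raw counters behind these bars live in <span class='inlinecode'>/proc/diskstats</span> on Linux: columns 6 and 10 are sectors read and written (in 512-byte sectors), and throughput is the delta between two samples. A rough illustration of reading the counters for whole-disk devices (an illustrative sketch, not Loadbars&#39; actual code; the device-name pattern here only covers sd/vd/nvme disks):</span><br />
<br />
<pre>
awk '$3 ~ /^(sd[a-z]+|vd[a-z]+|nvme[0-9]+n[0-9]+)$/ {
       printf "%s read=%d written=%d bytes\n", $3, $6*512, $10*512
     }' /proc/diskstats
</pre>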
<br />
<h3 style='display: inline' id='global-reference-lines-and-options'>Global reference lines and options</h3><br />
<br />
<span><span class='inlinecode'>g</span>: global average CPU line (1px red). <span class='inlinecode'>i</span>: global I/O average line (1px pink). <span class='inlinecode'>s</span>: host separator lines (1px red). Other options: <span class='inlinecode'>--maxbarsperrow N</span>, <span class='inlinecode'>--title</span>, <span class='inlinecode'>--sshopts</span>, <span class='inlinecode'>--hasagent</span>. Hotkeys <span class='inlinecode'>m</span>/<span class='inlinecode'>n</span> mirror <span class='inlinecode'>2</span>/<span class='inlinecode'>3</span> for memory and network. Hover over a bar for a tooltip with exact values and host highlight.</span><br />
<br />
<h3 style='display: inline' id='cpu-monitoring'>CPU monitoring</h3><br />
<br />
<span>CPU usage as vertical stacked bars: System (blue), User (yellow), Nice (green), GuestNice (lime green), Idle (black), IOwait (purple), IRQ/SoftIRQ (white), Guest/Steal (red). Press <span class='inlinecode'>1</span> for aggregate vs. per-core. Press <span class='inlinecode'>e</span> for extended mode (1px peak line: max system+user over last N samples).</span><br />
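<br />
<span>The underlying data is the aggregate <span class='inlinecode'>cpu</span> line (or per-core <span class='inlinecode'>cpuN</span> lines) in <span class='inlinecode'>/proc/stat</span>: sample the counters twice, subtract, and divide each state by the total delta. A minimal shell sketch of the idea (not Loadbars&#39; actual code):</span><br />
<br />
<pre>
# fields: user nice system idle iowait irq softirq steal (rest ignored)
read -r _ u1 n1 s1 i1 w1 q1 sq1 st1 _ &lt; &lt;(grep '^cpu ' /proc/stat)
sleep 1
read -r _ u2 n2 s2 i2 w2 q2 sq2 st2 _ &lt; &lt;(grep '^cpu ' /proc/stat)
total=$(( (u2+n2+s2+i2+w2+q2+sq2+st2) - (u1+n1+s1+i1+w1+q1+sq1+st1) ))
busy=$(( (u2+n2+s2) - (u1+n1+s1) ))
echo "cpu busy: $(( 100 * busy / total ))%"
</pre>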
<br />
<h3 style='display: inline' id='memory-and-network'>Memory and network</h3><br />
<br />
<ul>
<li><span class='inlinecode'>2</span> / <span class='inlinecode'>m</span>: memory — left half RAM (dark grey/black), right half Swap (grey/black) per host</li>
<li><span class='inlinecode'>3</span> / <span class='inlinecode'>n</span>: network — RX (top, light green) and TX (bottom) summed over non-loopback interfaces. Red bar = no non-lo interface. Use <span class='inlinecode'>--netlink</span> or <span class='inlinecode'>f</span>/<span class='inlinecode'>v</span> for link speed (utilization %). Default <span class='inlinecode'>gbit</span>.</li>
</ul><br />
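<span>Network bars work the same way under the hood: on Linux, the per-interface RX/TX byte counters sit in <span class='inlinecode'>/proc/net/dev</span>, and throughput is the delta between samples. A rough one-liner summing bytes over non-loopback interfaces (illustrative only, not Loadbars&#39; actual code):</span><br />
<br />
<pre>
awk -F: '/:/ { gsub(/ /, "", $1); if ($1 == "lo") next
               split($2, c, " "); rx += c[1]; tx += c[9] }
         END { printf "rx=%d tx=%d bytes\n", rx, tx }' /proc/net/dev
</pre>
<br />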
<h3 style='display: inline' id='all-hotkeys'>All hotkeys</h3><br />
<br />
<pre>
Key     Action
─────   ──────────────────────────────────────────────────
1       Toggle CPU (aggregate / per-core / off)
2 / m   Toggle memory bars
3 / n   Toggle network bars
4 / l   Toggle load average bars
5       Toggle disk I/O (aggregate / per-device / off)
r       Reset load and disk auto-scale peaks
e       Toggle extended (peak line on CPU; disk util line)
g       Toggle global average CPU line
i       Toggle global I/O average line
s       Toggle host separator lines
h       Print hotkey list to stdout
q       Quit
w       Write current settings to ~/.loadbarsrc
a / y   CPU average samples up / down
d / c   Net average samples up / down
b / x   Disk average samples up / down
f / v   Link scale up / down
Arrows  Resize window
</pre>
<br />
<h3 style='display: inline' id='ssh-and-config'>SSH and config</h3><br />
<br />
<span>Connect with public key auth; hosts need bash and <span class='inlinecode'>/proc</span> (Linux). No agent needed on the remote side.</span><br />
<br />
<pre>
loadbars --hosts server1,server2,server3
loadbars --hosts root@server1,root@server2
loadbars servername{01..50}.example.com --showcores 1
loadbars --cluster production
</pre>
<br />
<span>Config: <span class='inlinecode'>~/.loadbarsrc</span> (key=value, no <span class='inlinecode'>--</span>; use <span class='inlinecode'>#</span> for comments). Any <span class='inlinecode'>--help</span> option. Press <span class='inlinecode'>w</span> to save current settings.</span><br />
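<br />
<span>A minimal <span class='inlinecode'>~/.loadbarsrc</span> could look like this (option names are taken from the flags mentioned in this post; the values are illustrative, and the exact boolean syntax may differ):</span><br />
<br />
<pre>
# same names as in --help, without the leading --
hosts=server1,server2,server3
showmem=1
shownet=1
showload=1
maxbarsperrow=25
maxwidth=1900
sshopts=-o ConnectTimeout=5
title=production
</pre>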
<br />
<h3 style='display: inline' id='building-and-platforms'>Building and platforms</h3><br />
<br />
<span>Go 1.25+ and SDL2. Install SDL2 (e.g. <span class='inlinecode'>sudo dnf install SDL2-devel</span> on Fedora, <span class='inlinecode'>brew install sdl2</span> on macOS), then:</span><br />
<br />
<pre>
mage build
./loadbars --hosts localhost
mage install   # to ~/go/bin
mage test
</pre>
<br />
<span>Tested on Fedora Linux 43 and common distros; macOS as client to remote Linux only (no local macOS monitoring — no <span class='inlinecode'>/proc</span>).</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>My desk rack: DeskPi RackMate T0</title>
        <link href="https://foo.zone/gemfeed/2026-02-22-my-desk-rack.html" />
        <id>https://foo.zone/gemfeed/2026-02-22-my-desk-rack.html</id>
        <updated>2026-02-21T11:17:15+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>On my desk sits a small rack that keeps audio gear, power, and network in one place: the DeskPi RackMate T0. Here's what lives in it and how it's wired.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='my-desk-rack-deskpi-rackmate-t0'>My desk rack: DeskPi RackMate T0</h1><br />
<br />
<span class='quote'>Published at 2026-02-21T11:17:15+02:00</span><br />
<br />
<pre>
    ┌─────────────────┐
    │   ●  ●  AIR     │  ← air-quality monitor
    ├─────────────────┤
    │  ╔═╗  CD        │  ← CD transport
    │  ║ ◉║  S/PDIF   │
    │  ╚═╝            │
    ├─────────────────┤
    │  ▓▓▓  USB PWR   │  ← PinePower
    ├─────────────────┤
    │  ░░░  (phones)  │  ← 1U "empty" shelf
    ├─────────────────┤
    │  ◉◉◉◉◉  LAN     │  ← 5-port switch
    ├─────────────────┤
    │  [E50] [L50]    │  ← DAC + AMP
    │   DAC   AMP     │
    └─────────────────┘
         RackMate T0
</pre>
<br />
<span>On my desk sits a small rack that keeps audio gear, power, and network in one place: the DeskPi RackMate T0. Here&#39;s what lives in it and how it&#39;s wired.</span><br />
<br />
<a class='textlink' href='https://deskpi.com/products/deskpi-rackmate-t1-rackmount-10-inch-4u-server-cabinet-for-network-servers-audio-and-video-equipment'>DeskPi RackMate T0</a><br />
<br />
<a href='./my-deskrack/deskrack.jpg'><img alt='DeskPi RackMate T0 on the desk' title='DeskPi RackMate T0 on the desk' src='./my-deskrack/deskrack.jpg' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#my-desk-rack-deskpi-rackmate-t0'>My desk rack: DeskPi RackMate T0</a></li>
<li>⇢ <a href='#what-s-in-the-rack-top-to-bottom'>What&#39;s in the rack (top to bottom)</a></li>
<li>⇢ ⇢ <a href='#top-cd-transport-and-air-quality-monitor'>Top: CD transport and air-quality monitor</a></li>
<li>⇢ ⇢ <a href='#power-and-charging-pinepower-desktop--1u-shelf'>Power and charging: PinePower Desktop + 1U shelf</a></li>
<li>⇢ ⇢ <a href='#network-5-port-mini-switch'>Network: 5-port mini switch</a></li>
<li>⇢ ⇢ <a href='#bottom-dac-and-headphone-amp'>Bottom: DAC and headphone amp</a></li>
<li>⇢ ⇢ <a href='#music-sources'>Music sources</a></li>
<li>⇢ ⇢ <a href='#left-side-cable-management'>Left side: cable management</a></li>
<li>⇢ <a href='#next-to-the-rack'>Next to the rack</a></li>
<li>⇢ <a href='#bedside-another-hifi-setup'>Bedside: another HiFi setup</a></li>
</ul><br />
<h2 style='display: inline' id='what-s-in-the-rack-top-to-bottom'>What&#39;s in the rack (top to bottom)</h2><br />
<br />
<h3 style='display: inline' id='top-cd-transport-and-air-quality-monitor'>Top: CD transport and air-quality monitor</h3><br />
<br />
<span>At the top is the S.M.S.L PL200T, a CD transport with anti-vibration design. It outputs digital audio over coaxial S/PDIF into the DAC in the rack. On top of the transport sits a small air-quality monitor so I can keep an eye on the room.</span><br />
<br />
<a class='textlink' href='https://www.smsl-audio.com/portal/product/detail/id/908.html'>S.M.S.L PL200T CD Transport</a><br />
<br />
<a href='./my-deskrack/deskrack-cdtransport.jpg'><img alt='CD transport and air-quality monitor on top' title='CD transport and air-quality monitor on top' src='./my-deskrack/deskrack-cdtransport.jpg' /></a><br />
<br />
<span>A CD transport is not the same as a CD player. A CD player has a built-in DAC (digital-to-analog converter) and outputs analogue audio—you plug it into an amp or active speakers and you&#39;re done. A CD transport only reads the disc and outputs a digital signal (e.g. coaxial or optical S/PDIF). It has no DAC. You feed that digital stream into an external DAC, which then does the conversion. The idea is to separate the mechanical part (spinning the disc, reading the pits) from the conversion stage, so you can use one DAC for CDs, streaming, and other sources, and upgrade or swap the transport and the DAC independently.</span><br />
<br />
<span>In the age of streaming and files, putting on a real CD is still a pleasure. You own the disc and the sound isn&#39;t at the mercy of a subscription or a server. You pick an album, put it in, and listen from start to finish—no endless scrolling, no algorithm. The format is fixed (16-bit/44.1 kHz), so what you hear is consistent and often better than heavily compressed streams. And there&#39;s something satisfying about the ritual: handling the case, the disc, and the artwork instead of tapping a screen.</span><br />
<br />
<h3 style='display: inline' id='power-and-charging-pinepower-desktop--1u-shelf'>Power and charging: PinePower Desktop + 1U shelf</h3><br />
<br />
<span>Below that is the PinePower Desktop from Pine64, used as a desktop power and USB charging station for phones and other devices. The rack has one free 1U space under the PinePower where I put the devices that are charging, so cables and gadgets stay in one spot.</span><br />
<br />
<a class='textlink' href='https://www.pine64.org'>PinePower Desktop (Pine64)</a><br />
<br />
<h3 style='display: inline' id='network-5-port-mini-switch'>Network: 5-port mini switch</h3><br />
<br />
<span>Next is a compact 5-port Ethernet switch. The uplink goes to a wall socket behind the desk; the other ports feed the computer, laptop, and anything else that needs wired LAN on the desk. Next to the switch you can see my Nothing ear buds.</span><br />
<br />
<a class='textlink' href='https://nothing.tech/products/ear'>Nothing ear buds</a><br />
<br />
<h3 style='display: inline' id='bottom-dac-and-headphone-amp'>Bottom: DAC and headphone amp</h3><br />
<br />
<span>At the bottom of the rack are the Topping E50 (DAC) and Topping L50 (headphone amplifier). The E50 converts digital to analogue, and the L50 drives my Hifiman Sundara headphones.</span><br />
<br />
<a class='textlink' href='https://www.tpdz.net'>Topping E50 DAC</a><br />
<a class='textlink' href='https://www.tpdz.net'>Topping L50 Headphone Amplifier</a><br />
<a class='textlink' href='https://hifiman.com/products/detail/sundara'>Hifiman Sundara</a><br />
<br />
<h3 style='display: inline' id='music-sources'>Music sources</h3><br />
<br />
<ul>
<li>CD transport: coaxial (S/PDIF) from the S.M.S.L PL200T into the Topping E50.</li>
<li>Streaming: USB from the desktop computer and/or laptop on the desk into the E50, so I can play from either machine.</li>
</ul><br />
<h3 style='display: inline' id='left-side-cable-management'>Left side: cable management</h3><br />
<br />
<span>On the left of the rack are two cable holders to keep power and signal cables tidy.</span><br />
<br />
<h2 style='display: inline' id='next-to-the-rack'>Next to the rack</h2><br />
<br />
<span>Right beside the rack is my Supernote Nomad, which I use for notes and reading and have written about elsewhere on this blog. It&#39;s the small tablet-shaped device on the right side of the rack.</span><br />
<br />
<a href='./my-deskrack/deskrack-supernote.jpg'><img alt='Supernote Nomad (small tablet on the right of the rack)' title='Supernote Nomad (small tablet on the right of the rack)' src='./my-deskrack/deskrack-supernote.jpg' /></a><br />
<a class='textlink' href='https://supernote.com/pages/supernote-nomad'>Supernote Nomad (product page)</a><br />
<br />
<a href='./my-deskrack/deskrack-frontview.jpg'><img alt='Front view of the rack' title='Front view of the rack' src='./my-deskrack/deskrack-frontview.jpg' /></a><br />
<a href='./my-deskrack/deskrack-backside.jpg'><img alt='Back of the rack' title='Back of the rack' src='./my-deskrack/deskrack-backside.jpg' /></a><br />
<br />
<h2 style='display: inline' id='bedside-another-hifi-setup'>Bedside: another HiFi setup</h2><br />
<br />
<span>I have a second setup for high-res listening next to my bed. On the nightstand sit my FiiO K13 R2R (an R2R DAC/amp) and my Denon AH-D9200 headphones. I connect the K13 to my laptop via USB and use it for high-resolution files and streaming when I&#39;m not at the desk.</span><br />
<br />
<a class='textlink' href='https://www.fiio.com'>Fiio K13 R2R</a><br />
<a class='textlink' href='https://www.denon.com'>Denon AH-D9200</a><br />
<br />
<span>That&#39;s the full desk rack: CD transport and air monitor on top, PinePower and charging shelf, switch, then Topping E50 and L50 at the bottom, with the Hifiman Sundara as the main output and the Supernote Nomad sitting next to it. I hope that you found this interesting.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>A tmux popup editor for Cursor Agent CLI prompts</title>
        <link href="https://foo.zone/gemfeed/2026-02-02-tmux-popup-editor-for-cursor-agent-prompts.html" />
        <id>https://foo.zone/gemfeed/2026-02-02-tmux-popup-editor-for-cursor-agent-prompts.html</id>
        <updated>2026-02-01T20:24:16+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>I spend some time in Cursor Agent (the CLI version of the Cursor IDE; I don't really like the IDE itself), and I also jump between Claude Code CLI, Ampcode, Gemini CLI, OpenAI Codex CLI, OpenCode, and Aider just to see how things are evolving. But for the next month I'll be with Cursor Agent.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='a-tmux-popup-editor-for-cursor-agent-cli-prompts'>A tmux popup editor for Cursor Agent CLI prompts</h1><br />
<br />
<span class='quote'>Published at 2026-02-01T20:24:16+02:00</span><br />
<br />
<span>...and any other TUI-based application</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#a-tmux-popup-editor-for-cursor-agent-cli-prompts'>A tmux popup editor for Cursor Agent CLI prompts</a></li>
<li>⇢ <a href='#why-i-built-this'>Why I built this</a></li>
<li>⇢ <a href='#what-it-is'>What it is</a></li>
<li>⇢ <a href='#how-it-works-overview'>How it works (overview)</a></li>
<li>⇢ ⇢ <a href='#workflow-diagram'>Workflow diagram</a></li>
<li>⇢ <a href='#challenges-and-small-discoveries'>Challenges and small discoveries</a></li>
<li>⇢ <a href='#test-cases-for-a-future-rewrite'>Test cases (for a future rewrite)</a></li>
<li>⇢ <a href='#almost-works-with-any-editor-or-any-tui'>(Almost) works with any editor (or any TUI)</a></li>
</ul><br />
<h2 style='display: inline' id='why-i-built-this'>Why I built this</h2><br />
<br />
<span>I spend some time in Cursor Agent (the CLI version of the Cursor IDE; I don&#39;t really like the IDE itself), and I also jump between Claude Code CLI, Ampcode, Gemini CLI, OpenAI Codex CLI, OpenCode, and Aider just to see how things are evolving. But for the next month I&#39;ll be with Cursor Agent.</span><br />
<br />
<a class='textlink' href='https://cursor.com/cli'>https://cursor.com/cli</a><br />
<br />
<span>Short prompts are fine in the inline input, but for longer prompts I want a real editor: spellcheck, search/replace, multiple cursors, and all the Helix muscle memory I already have.</span><br />
<br />
<span>Cursor Agent has a Vim editing mode, but not Helix. And even in Vim mode I can&#39;t use my full editor setup. I want the real thing, not a partial emulation.</span><br />
<br />
<a class='textlink' href='https://helix-editor.com'>https://helix-editor.com</a><br />
<a class='textlink' href='https://www.vim.org'>https://www.vim.org</a><br />
<a class='textlink' href='https://neovim.io'>https://neovim.io</a><br />
<br />
<span>So I built a tiny tmux popup editor. It opens <span class='inlinecode'>$EDITOR</span> (Helix for me), and when I close it, the buffer is sent back into the prompt. It sounds simple, but it feels surprisingly native.</span><br />
<br />
<span>This is how it looks:</span><br />
<br />
<a href='./tmux-popup-editor-for-cursor-agent-prompts/demo1.png'><img alt='Popup editor in action' title='Popup editor in action' src='./tmux-popup-editor-for-cursor-agent-prompts/demo1.png' /></a><br />
<br />
<h2 style='display: inline' id='what-it-is'>What it is</h2><br />
<br />
<span>The idea is straightforward:</span><br />
<br />
<ul>
<li>A tmux key binding <span class='inlinecode'>prefix-e</span> opens a popup overlay near the bottom of the screen.</li>
<li>The popup starts <span class='inlinecode'>$EDITOR</span> on a temp file.</li>
<li>When I exit the editor, the script sends the contents back to the original pane with <span class='inlinecode'>tmux send-keys</span>.</li>
</ul><br />
<span>It also pre-fills the temp file with whatever is already typed after Cursor Agent&#39;s <span class='inlinecode'>→</span> prompt, so I can continue where I left off.</span><br />
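<br />
<span>At its core, the send-back step is a single <span class='inlinecode'>tmux send-keys</span> call with the literal (<span class='inlinecode'>-l</span>) flag. A stripped-down sketch with hypothetical variable names (the full script below handles quoting and edge cases):</span><br />
<br />
<pre>
pane_id=$(cat "$target_file")   # pane id written earlier by the key binding
"${EDITOR:-vi}" "$tmp_file"     # popup blocks here until the editor exits
tmux send-keys -t "$pane_id" -l "$(cat "$tmp_file")"
</pre>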
<br />
<h2 style='display: inline' id='how-it-works-overview'>How it works (overview)</h2><br />
<br />
<span>This is the tmux binding I use (trimmed to the essentials):</span><br />
<br />
<pre>
bind-key e run-shell -b "tmux display-message -p &#39;#{pane_id}&#39;
  &gt; /tmp/tmux-edit-target-#{client_pid} \;
  tmux popup -E -w 90% -h 35% -x 5% -y 65% -d &#39;#{pane_current_path}&#39;
  \"~/scripts/tmux-edit-send /tmp/tmux-edit-target-#{client_pid}\""
</pre>
<br />
<h3 style='display: inline' id='workflow-diagram'>Workflow diagram</h3><br />
<br />
<span>This is the whole workflow:</span><br />
<br />
<pre>
┌────────────────────┐   ┌───────────────┐   ┌─────────────────────┐   ┌─────────────────────┐
│ Cursor input box   │--&gt;| tmux keybind  │--&gt;| popup runs script   │--&gt;| capture + prefill   │
│ (prompt pane)      │   │ prefix + e    │   │ tmux-edit-send      │   │ temp file           │
└────────────────────┘   └───────────────┘   └─────────────────────┘   └─────────────────────┘
                                                                                 |
                                                                                 v
┌────────────────────┐   ┌────────────────────┐   ┌────────────────────┐   ┌────────────────────┐
│ Cursor input box   │&lt;--| send-keys back     |&lt;--| close editor+popup |&lt;--| edit temp file     |
│ (prompt pane)      │   │ to original pane   │   │ (exit $EDITOR)     │   │ in $EDITOR         │
└────────────────────┘   └────────────────────┘   └────────────────────┘   └────────────────────┘
</pre>
<br />
<span>And this is how it looks after sending the text back to Cursor Agent&#39;s input:</span><br />
<br />
<a href='./tmux-popup-editor-for-cursor-agent-prompts/demo2.png'><img alt='Prefilled prompt text' title='Prefilled prompt text' src='./tmux-popup-editor-for-cursor-agent-prompts/demo2.png' /></a><br />
<br />
<span>And here is the full script. It is a bit ugly, as it&#39;s shell (written with Cursor Agent and GPT-5.2-Codex). I might rewrite it (or have it rewritten) in Go with proper unit tests, a config file, and multi-agent support, and release it once I have time. But it works well enough for now.</span><br />
<br />
<span class='quote'>Update 2026-02-08: This functionality has been integrated into the hexai project (https://codeberg.org/snonux/hexai) with proper multi-agent support for Cursor Agent, Claude Code CLI, and Ampcode. The hexai version includes unit tests, configuration files, and better agent detection. While still experimental, it&#39;s more robust than this shell script. See the hexai-tmux-edit command for details.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/hexai'>https://codeberg.org/snonux/hexai</a><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver">#!/usr/bin/env bash</font></i>
<b><u><font color="#000000">set</font></u></b> -u -o pipefail

LOG_ENABLED=<font color="#000000">0</font>
log_file=<font color="#808080">"${TMPDIR:-/tmp}/tmux-edit-send.log"</font>
log() {
  <b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$LOG_ENABLED"</font> -eq <font color="#000000">1</font> ]; <b><u><font color="#000000">then</font></u></b>
    <b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s</font>\n<font color="#808080">'</font> <font color="#808080">"$*"</font> &gt;&gt; <font color="#808080">"$log_file"</font>
  <b><u><font color="#000000">fi</font></u></b>
}

<i><font color="silver"># Read the target pane id from a temp file created by tmux binding.</font></i>
read_target_from_file() {
  <b><u><font color="#000000">local</font></u></b> file_path=<font color="#808080">"$1"</font>
  <b><u><font color="#000000">local</font></u></b> pane_id
  <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$file_path"</font> ] &amp;&amp; [ -f <font color="#808080">"$file_path"</font> ]; <b><u><font color="#000000">then</font></u></b>
    pane_id=<font color="#808080">"$(sed -n '1p' "</font>$file_path<font color="#808080">" | tr -d '[:space:]')"</font>
    <i><font color="silver"># Ensure pane ID has % prefix</font></i>
    <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$pane_id"</font> ] &amp;&amp; [[ <font color="#808080">"$pane_id"</font> != %* ]]; <b><u><font color="#000000">then</font></u></b>
      pane_id=<font color="#808080">"%${pane_id}"</font>
    <b><u><font color="#000000">fi</font></u></b>
    <b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s'</font> <font color="#808080">"$pane_id"</font>
  <b><u><font color="#000000">fi</font></u></b>
}

<i><font color="silver"># Read the target pane id from tmux environment if present.</font></i>
read_target_from_env() {
  <b><u><font color="#000000">local</font></u></b> env_line pane_id
  env_line=<font color="#808080">"$(tmux show-environment -g TMUX_EDIT_TARGET 2&gt;/dev/null || true)"</font>
  <b><u><font color="#000000">case</font></u></b> <font color="#808080">"$env_line"</font> <b><u><font color="#000000">in</font></u></b>
    TMUX_EDIT_TARGET=*)
      pane_id=<font color="#808080">"${env_line#TMUX_EDIT_TARGET=}"</font>
      <i><font color="silver"># Ensure pane ID has % prefix</font></i>
      <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$pane_id"</font> ] &amp;&amp; [[ <font color="#808080">"$pane_id"</font> != %* ]] &amp;&amp; [[ <font color="#808080">"$pane_id"</font> =~ ^[<font color="#000000">0</font>-<font color="#000000">9</font>]+$ ]]; <b><u><font color="#000000">then</font></u></b>
        pane_id=<font color="#808080">"%${pane_id}"</font>
      <b><u><font color="#000000">fi</font></u></b>
      <b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s'</font> <font color="#808080">"$pane_id"</font>
      ;;
  <b><u><font color="#000000">esac</font></u></b>
}

<i><font color="silver"># Resolve the target pane id, falling back to the last pane.</font></i>
resolve_target_pane() {
  <b><u><font color="#000000">local</font></u></b> candidate=<font color="#808080">"$1"</font>
  <b><u><font color="#000000">local</font></u></b> current_pane last_pane

  current_pane=<font color="#808080">"$(tmux display-message -p "</font><i><font color="silver">#{pane_id}" 2&gt;/dev/null || true)"</font></i>
  log <font color="#808080">"current pane=${current_pane:-&lt;empty&gt;}"</font>
  
  <i><font color="silver"># Ensure candidate has % prefix if it's a pane ID</font></i>
  <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$candidate"</font> ] &amp;&amp; [[ <font color="#808080">"$candidate"</font> =~ ^[<font color="#000000">0</font>-<font color="#000000">9</font>]+$ ]]; <b><u><font color="#000000">then</font></u></b>
    candidate=<font color="#808080">"%${candidate}"</font>
    log <font color="#808080">"normalized candidate to $candidate"</font>
  <b><u><font color="#000000">fi</font></u></b>
  
  <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$candidate"</font> ] &amp;&amp; [[ <font color="#808080">"$candidate"</font> == *<font color="#808080">"#{"</font>* ]]; <b><u><font color="#000000">then</font></u></b>
    log <font color="#808080">"format target detected, clearing"</font>
    candidate=<font color="#808080">""</font>
  <b><u><font color="#000000">fi</font></u></b>
  <b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$candidate"</font> ]; <b><u><font color="#000000">then</font></u></b>
    candidate=<font color="#808080">"$(tmux display-message -p "</font><i><font color="silver">#{last_pane}" 2&gt;/dev/null || true)"</font></i>
    log <font color="#808080">"using last pane as fallback: $candidate"</font>
  <b><u><font color="#000000">elif</font></u></b> [ <font color="#808080">"$candidate"</font> = <font color="#808080">"$current_pane"</font> ]; <b><u><font color="#000000">then</font></u></b>
    last_pane=<font color="#808080">"$(tmux display-message -p "</font><i><font color="silver">#{last_pane}" 2&gt;/dev/null || true)"</font></i>
    <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$last_pane"</font> ]; <b><u><font color="#000000">then</font></u></b>
      candidate=<font color="#808080">"$last_pane"</font>
      log <font color="#808080">"candidate was current, using last pane: $candidate"</font>
    <b><u><font color="#000000">fi</font></u></b>
  <b><u><font color="#000000">fi</font></u></b>
  <b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s'</font> <font color="#808080">"$candidate"</font>
}

<i><font color="silver"># Capture the latest multi-line prompt content from the pane.</font></i>
capture_prompt_text() {
  <b><u><font color="#000000">local</font></u></b> target=<font color="#808080">"$1"</font>
  tmux capture-pane -p -t <font color="#808080">"$target"</font> -S -<font color="#000000">2000</font> <font color="#000000">2</font>&gt;/dev/null | awk <font color="#808080">'</font>
<font color="#808080">    function trim_box(line) {</font>
<font color="#808080">      sub(/^ *│ ?/, "", line)</font>
<font color="#808080">      sub(/ *│ *$/, "", line)</font>
<font color="#808080">      sub(/[[:space:]]+$/, "", line)</font>
<font color="#808080">      return line</font>
<font color="#808080">    }</font>
<font color="#808080">    /^ *│ *→/ &amp;&amp; index($0,"INSERT")==0 &amp;&amp; index($0,"Add a follow-up")==0 {</font>
<font color="#808080">      if (text != "") last = text</font>
<font color="#808080">      text = ""</font>
<font color="#808080">      capture = 1</font>
<font color="#808080">      line = $0</font>
<font color="#808080">      sub(/^.*→ ?/, "", line)</font>
<font color="#808080">      line = trim_box(line)</font>
<font color="#808080">      if (line != "") text = line</font>
<font color="#808080">      next</font>
<font color="#808080">    }</font>
<font color="#808080">    capture {</font>
<font color="#808080">      if ($0 ~ /^ *└/) {</font>
<font color="#808080">        capture = 0</font>
<font color="#808080">        if (text != "") last = text</font>
<font color="#808080">        next</font>
<font color="#808080">      }</font>
<font color="#808080">      if ($0 ~ /^ *│/ &amp;&amp; index($0,"INSERT")==0 &amp;&amp; index($0,"Add a follow-up")==0) {</font>
<font color="#808080">        line = trim_box($0)</font>
<font color="#808080">        if (line != "") {</font>
<font color="#808080">          if (text != "") text = text " " line</font>
<font color="#808080">          else text = line</font>
<font color="#808080">        }</font>
<font color="#808080">      }</font>
<font color="#808080">    }</font>
<font color="#808080">    END {</font>
<font color="#808080">      if (text != "") last = text</font>
<font color="#808080">      if (last != "") print last</font>
<font color="#808080">    }</font>
<font color="#808080">  '</font>
}

<i><font color="silver"># Write captured prompt text into the temp file if available.</font></i>
prefill_tmpfile() {
  <b><u><font color="#000000">local</font></u></b> tmpfile=<font color="#808080">"$1"</font>
  <b><u><font color="#000000">local</font></u></b> prompt_text=<font color="#808080">"$2"</font>
  <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$prompt_text"</font> ]; <b><u><font color="#000000">then</font></u></b>
    <b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s</font>\n<font color="#808080">'</font> <font color="#808080">"$prompt_text"</font> &gt; <font color="#808080">"$tmpfile"</font>
  <b><u><font color="#000000">fi</font></u></b>
}

<i><font color="silver"># Ensure the target pane exists before sending keys.</font></i>
validate_target_pane() {
  <b><u><font color="#000000">local</font></u></b> target=<font color="#808080">"$1"</font>
  <b><u><font color="#000000">local</font></u></b> pane target_found
  <b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$target"</font> ]; <b><u><font color="#000000">then</font></u></b>
    log <font color="#808080">"error: no target pane determined"</font>
    echo <font color="#808080">"Could not determine target pane."</font> &gt;&amp;<font color="#000000">2</font>
    <b><u><font color="#000000">return</font></u></b> <font color="#000000">1</font>
  <b><u><font color="#000000">fi</font></u></b>
  target_found=<font color="#000000">0</font>
  log <font color="#808080">"validate: looking for target='$target' in all panes:"</font>
  <b><u><font color="#000000">for</font></u></b> pane <b><u><font color="#000000">in</font></u></b> $(tmux list-panes -a -F <font color="#808080">"#{pane_id}"</font> <font color="#000000">2</font>&gt;/dev/null || <b><u><font color="#000000">true</font></u></b>); <b><u><font color="#000000">do</font></u></b>
    log <font color="#808080">"validate: checking pane='$pane'"</font>
    <b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$pane"</font> = <font color="#808080">"$target"</font> ]; <b><u><font color="#000000">then</font></u></b>
      target_found=<font color="#000000">1</font>
      log <font color="#808080">"validate: MATCH FOUND!"</font>
      <b><u><font color="#000000">break</font></u></b>
    <b><u><font color="#000000">fi</font></u></b>
  <b><u><font color="#000000">done</font></u></b>
  <b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$target_found"</font> -ne <font color="#000000">1</font> ]; <b><u><font color="#000000">then</font></u></b>
    log <font color="#808080">"error: target pane not found: $target"</font>
    echo <font color="#808080">"Target pane not found: $target"</font> &gt;&amp;<font color="#000000">2</font>
    <b><u><font color="#000000">return</font></u></b> <font color="#000000">1</font>
  <b><u><font color="#000000">fi</font></u></b>
  log <font color="#808080">"validate: target pane validated successfully"</font>
}

<i><font color="silver"># Send temp file contents to the target pane line by line.</font></i>
send_content() {
  <b><u><font color="#000000">local</font></u></b> target=<font color="#808080">"$1"</font>
  <b><u><font color="#000000">local</font></u></b> tmpfile=<font color="#808080">"$2"</font>
  <b><u><font color="#000000">local</font></u></b> prompt_text=<font color="#808080">"$3"</font>
  <b><u><font color="#000000">local</font></u></b> first_line=<font color="#000000">1</font>
  <b><u><font color="#000000">local</font></u></b> line
  log <font color="#808080">"send_content: target=$target, prompt_text='$prompt_text'"</font>
  <b><u><font color="#000000">while</font></u></b> IFS= <b><u><font color="#000000">read</font></u></b> -r line || [ -n <font color="#808080">"$line"</font> ]; <b><u><font color="#000000">do</font></u></b>
    log <font color="#808080">"send_content: read line='$line'"</font>
    <b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$first_line"</font> -eq <font color="#000000">1</font> ] &amp;&amp; [ -n <font color="#808080">"$prompt_text"</font> ]; <b><u><font color="#000000">then</font></u></b>
      <b><u><font color="#000000">if</font></u></b> [[ <font color="#808080">"$line"</font> == <font color="#808080">"$prompt_text"</font>* ]]; <b><u><font color="#000000">then</font></u></b>
        <b><u><font color="#000000">local</font></u></b> old_line=<font color="#808080">"$line"</font>
        line=<font color="#808080">"${line#"</font>$prompt_text<font color="#808080">"}"</font>
        log <font color="#808080">"send_content: stripped prompt, was='$old_line' now='$line'"</font>
      <b><u><font color="#000000">fi</font></u></b>
    <b><u><font color="#000000">fi</font></u></b>
    first_line=<font color="#000000">0</font>
    log <font color="#808080">"send_content: sending line='$line'"</font>
    tmux send-keys -t <font color="#808080">"$target"</font> -l <font color="#808080">"$line"</font>
    tmux send-keys -t <font color="#808080">"$target"</font> Enter
  <b><u><font color="#000000">done</font></u></b> &lt; <font color="#808080">"$tmpfile"</font>
  log <font color="#808080">"sent content to $target"</font>
}

<i><font color="silver"># Main entry point.</font></i>
main() {
  <b><u><font color="#000000">local</font></u></b> target_file=<font color="#808080">"${1:-}"</font>
  <b><u><font color="#000000">local</font></u></b> target
  <b><u><font color="#000000">local</font></u></b> editor=<font color="#808080">"${EDITOR:-vi}"</font>
  <b><u><font color="#000000">local</font></u></b> tmpfile
  <b><u><font color="#000000">local</font></u></b> prompt_text

  log <font color="#808080">"=== tmux-edit-send starting ==="</font>
  log <font color="#808080">"target_file=$target_file"</font>
  log <font color="#808080">"EDITOR=$editor"</font>
  
  target=<font color="#808080">"$(read_target_from_file "</font>$target_file<font color="#808080">" || true)"</font>
  <b><u><font color="#000000">if</font></u></b> [ -n <font color="#808080">"$target"</font> ]; <b><u><font color="#000000">then</font></u></b>
    log <font color="#808080">"file target=${target:-&lt;empty&gt;}"</font>
    rm -f <font color="#808080">"$target_file"</font>
  <b><u><font color="#000000">fi</font></u></b>
  <b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$target"</font> ]; <b><u><font color="#000000">then</font></u></b>
    target=<font color="#808080">"${TMUX_EDIT_TARGET:-}"</font>
  <b><u><font color="#000000">fi</font></u></b>
  log <font color="#808080">"env target=${target:-&lt;empty&gt;}"</font>
  <b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$target"</font> ]; <b><u><font color="#000000">then</font></u></b>
    target=<font color="#808080">"$(read_target_from_env || true)"</font>
  <b><u><font color="#000000">fi</font></u></b>
  log <font color="#808080">"tmux env target=${target:-&lt;empty&gt;}"</font>
  target=<font color="#808080">"$(resolve_target_pane "</font>$target<font color="#808080">")"</font>
  log <font color="#808080">"fallback target=${target:-&lt;empty&gt;}"</font>

  tmpfile=<font color="#808080">"$(mktemp)"</font>
  log <font color="#808080">"created tmpfile=$tmpfile"</font>
  <b><u><font color="#000000">if</font></u></b> [ ! -f <font color="#808080">"$tmpfile"</font> ]; <b><u><font color="#000000">then</font></u></b>
    log <font color="#808080">"ERROR: mktemp failed to create file"</font>
    echo <font color="#808080">"ERROR: mktemp failed"</font> &gt;&amp;<font color="#000000">2</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
  <b><u><font color="#000000">fi</font></u></b>
  mv <font color="#808080">"$tmpfile"</font> <font color="#808080">"${tmpfile}.md"</font> <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font> | <b><u><font color="#000000">while</font></u></b> <b><u><font color="#000000">read</font></u></b> -r line; <b><u><font color="#000000">do</font></u></b> log <font color="#808080">"mv output: $line"</font>; <b><u><font color="#000000">done</font></u></b>
  tmpfile=<font color="#808080">"${tmpfile}.md"</font>
  log <font color="#808080">"renamed to tmpfile=$tmpfile"</font>
  <b><u><font color="#000000">if</font></u></b> [ ! -f <font color="#808080">"$tmpfile"</font> ]; <b><u><font color="#000000">then</font></u></b>
    log <font color="#808080">"ERROR: tmpfile does not exist after rename"</font>
    echo <font color="#808080">"ERROR: tmpfile rename failed"</font> &gt;&amp;<font color="#000000">2</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
  <b><u><font color="#000000">fi</font></u></b>
  <b><u><font color="#000000">trap</font></u></b> <font color="#808080">'rm -f "$tmpfile"'</font> EXIT

  log <font color="#808080">"capturing prompt text from target=$target"</font>
  prompt_text=<font color="#808080">"$(capture_prompt_text "</font>$target<font color="#808080">")"</font>
  log <font color="#808080">"captured prompt_text='$prompt_text'"</font>
  prefill_tmpfile <font color="#808080">"$tmpfile"</font> <font color="#808080">"$prompt_text"</font>
  log <font color="#808080">"prefilled tmpfile"</font>

  log <font color="#808080">"launching editor: $editor $tmpfile"</font>
  <font color="#808080">"$editor"</font> <font color="#808080">"$tmpfile"</font>
  <b><u><font color="#000000">local</font></u></b> editor_exit=$?
  log <font color="#808080">"editor exited with status $editor_exit"</font>

  <b><u><font color="#000000">if</font></u></b> [ ! -s <font color="#808080">"$tmpfile"</font> ]; <b><u><font color="#000000">then</font></u></b>
    log <font color="#808080">"empty file, nothing sent"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">0</font>
  <b><u><font color="#000000">fi</font></u></b>
  
  log <font color="#808080">"tmpfile contents:"</font>
  log <font color="#808080">"$(cat "</font>$tmpfile<font color="#808080">")"</font>

  log <font color="#808080">"validating target pane"</font>
  validate_target_pane <font color="#808080">"$target"</font>
  log <font color="#808080">"sending content to target=$target"</font>
  send_content <font color="#808080">"$target"</font> <font color="#808080">"$tmpfile"</font> <font color="#808080">"$prompt_text"</font>
  log <font color="#808080">"=== tmux-edit-send completed ==="</font>
}

main <font color="#808080">"$@"</font>
</pre>
<br />
<h2 style='display: inline' id='challenges-and-small-discoveries'>Challenges and small discoveries</h2><br />
<br />
<span>The problems were mostly small but annoying:</span><br />
<br />
<ul>
<li>Getting the right target pane was the first hurdle. I ended up storing the pane id in a file because of tmux format expansion quirks.</li>
<li>The Cursor UI draws a nice box around the prompt, so the prompt line contains a <span class='inlinecode'>│</span> and other markers. I had to filter those out and strip the box-drawing characters.</li>
<li>When I prefilled text and then sent it back, I sometimes duplicated the prompt. Stripping the prefilled prompt text from the submitted text fixed that.</li>
</ul><br />
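<span>The last point boils down to a single parameter expansion. Here is a minimal, hypothetical sketch of the de-duplication (the helper name and variables are mine, not from the script above):</span><br />

```shell
#!/usr/bin/env bash
# If the first edited line still starts with the prefilled prompt text,
# strip that prefix exactly once before sending the line back.
strip_prefill_once() {
  line="$1"
  prompt_text="$2"
  case "$line" in
    "$prompt_text"*) line="${line#"$prompt_text"}" ;;
  esac
  printf '%s' "$line"
}

# Appended text survives, including its leading space:
strip_prefill_once "fix the bug juju" "fix the bug"   # prints " juju"
```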
<h2 style='display: inline' id='test-cases-for-a-future-rewrite'>Test cases (for a future rewrite)</h2><br />
<br />
<span>These are the cases I test whenever I touch the script:</span><br />
<br />
<ul>
<li>Single-line prompt: capture everything after <span class='inlinecode'>→</span> and prefill the editor.</li>
<li>Multi-line boxed prompt: capture the wrapped lines inside the <span class='inlinecode'>│ ... │</span> box and join them with spaces (no newline in the editor).</li>
<li>Ignore UI noise: do not capture lines containing <span class='inlinecode'>INSERT</span> or <span class='inlinecode'>Add a follow-up</span>.</li>
<li>Preserve appended text: if I add <span class='inlinecode'> juju</span> to an existing line, the space before <span class='inlinecode'>juju</span> must survive.</li>
<li>No duplicate send: if the prefilled text is still at the start of the first line, it must be stripped once before sending back.</li>
</ul><br />
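<span>Most of these cases can be re-checked without tmux at all, by piping a fake Cursor-style box through an awk filter. The sketch below is a trimmed-down, hypothetical re-implementation of the capture logic (the <span class='inlinecode'>INSERT</span>/follow-up noise filtering is omitted, and the function name is mine), not the exact code from the script:</span><br />

```shell
#!/usr/bin/env bash
# Trimmed-down sketch of the boxed-prompt capture: read a Cursor-style
# box from stdin and print the joined prompt text.
capture() {
  awk '
    # A line like "| -> some text |" (with box-drawing chars) starts a capture.
    /^ *│ *→/ {
      text = ""; capturing = 1
      line = $0
      sub(/^.*→ ?/, "", line)         # drop everything up to the arrow
      sub(/ *│ *$/, "", line)         # drop the right box border
      sub(/[[:space:]]+$/, "", line)
      if (line != "") text = line
      next
    }
    capturing {
      if ($0 ~ /^ *└/) { capturing = 0; next }   # bottom border ends the box
      if ($0 ~ /^ *│/) {
        line = $0
        sub(/^ *│ ?/, "", line)
        sub(/ *│ *$/, "", line)
        sub(/[[:space:]]+$/, "", line)
        # Wrapped lines are joined with a single space, not a newline.
        if (line != "") {
          if (text != "") text = text " " line
          else text = line
        }
      }
    }
    END { print text }
  '
}

# Single-line prompt:
printf '│ → fix the flaky test │\n└──────────────────────┘\n' | capture
# Multi-line boxed prompt, wrapped lines joined with spaces:
printf '│ → refactor the │\n│ capture logic  │\n└────────────────┘\n' | capture
```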
<h2 style='display: inline' id='almost-works-with-any-editor-or-any-tui'>(Almost) works with any editor (or any TUI)</h2><br />
<br />
<span>Although I use Helix, this is just <span class='inlinecode'>$EDITOR</span>. If you prefer Vim, Neovim, or something more exotic, it should work. The same mechanism can be used to feed text into any TUI that reads from a terminal pane, not just Cursor Agent.</span><br />
<br />
<span>One caveat: different agents draw different prompt UIs, so the capture logic depends on the prompt shape. A future version of this script should be more modular in that respect; for now this is just a PoC tailored to Cursor Agent.</span><br />
<br />
<span>Another caveat: if Cursor ever changes the design of its TUI, I will need to update my script as well.</span><br />
<br />
<span>If I get a chance, I&#39;ll clean it up and rewrite it in Go (and release it properly or fold it into Hexai, another AI-related tool of mine that I haven&#39;t blogged about yet). For now, I am happy with this little hack. It already feels like a native editing workflow for Cursor Agent prompts.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/hexai'>https://codeberg.org/snonux/hexai</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2026-02-02-tmux-popup-editor-for-cursor-agent-prompts.html'>2026-02-02 A tmux popup editor for Cursor Agent CLI prompts (You are currently reading this)</a><br />
<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS</a><br />
<a class='textlink' href='./2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html'>2025-05-02 Terminal multiplexing with <span class='inlinecode'>tmux</span> - Fish edition</a><br />
<a class='textlink' href='./2024-06-23-terminal-multiplexing-with-tmux.html'>2024-06-23 Terminal multiplexing with <span class='inlinecode'>tmux</span> - Z-Shell edition</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Using Supernote Nomad offline</title>
        <link href="https://foo.zone/gemfeed/2026-01-01-using-supernote-nomad-offline.html" />
        <id>https://foo.zone/gemfeed/2026-01-01-using-supernote-nomad-offline.html</id>
        <updated>2025-12-31T16:25:30+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>I am a note taker. For years, I've been searching for a good digital device that could complement my paper notebooks. I've finally found it in the Supernote Nomad. I use it completely offline without cloud-sync, and in this post, I'll explain why this is a benefit.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='using-supernote-nomad-offline'>Using Supernote Nomad offline</h1><br />
<br />
<span class='quote'>Published at 2025-12-31T16:25:30+02:00</span><br />
<br />
<span>I am a note taker. For years, I&#39;ve been searching for a good digital device that could complement my paper notebooks. I&#39;ve finally found it in the Supernote Nomad. I use it completely offline without cloud-sync, and in this post, I&#39;ll explain why this is a benefit.</span><br />
<br />
<a class='textlink' href='https://supernote.com/pages/supernote-nomad'>Supernote Nomad</a><br />
<br />
<span>I initially bought it because Ratta (the manufacturer of the Supernote) stated on their website that an open-source Linux firmware would be released soon. However, after over a year, there still hasn&#39;t been any progress (hopefully there will be someday). So I looked into alternative ways to use this device.</span><br />
<br />
<pre>
⣿⣿⣿⣿⣿⣿⡿⠿⠿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣏⠀⢶⣆⡘⠉⠙⠛⠿⠿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⠋⣤⣄⠘⠃⢠⣀⣀⠀⠀⠀⠀⠀⠉⠉⠛⠛⠿⢿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⡿⠀⡉⠻⡟⠀⠈⠉⠙⠛⠷⠶⣦⣤⣄⣀⠀⠀⠀⠀⠀⣾⣿⣿⣿⣿
⣿⣿⣿⣿⡄⠸⢿⣤⠀⢠⣤⣀⡀⠀⠀⠀⠀⠀⠉⠙⠛⠻⠶⠀⢰⣿⣿⠻⣿⣿
⣿⣿⣿⣿⠠⣶⣆⡉⠀⠀⠈⠉⠙⠛⠳⠶⠦⣤⣤⣄⣀⡀⢀⣴⠟⠋⠙⢷⣬⣿
⣿⣿⣿⠏⣠⡄⠹⠁⠰⢶⣤⣤⣀⡀⠀⠀⠀⠀⠀⠉⢉⣿⠟⠁⠀⠀⣠⣾⣿⣿
⣿⣿⡿⠂⠙⠻⡆⠀⠀⠀⠀⠈⠉⠛⠛⠷⠶⣦⣤⣴⠟⠁⠀⠀⣠⣾⣿⣿⣿⣿
⣿⣿⡇⠸⣿⣄⠀⠰⠶⢶⣤⣄⣀⡀⠀⠀⠀⣴⣟⠁⠀⠀⣠⣾⣿⣿⣿⣿⣿⣿
⣿⡟⠀⣶⣀⠃⠀⠀⠀⠀⠀⠈⠉⠙⠛⠓⢾⡟⢙⣷⣤⢾⣿⣿⣿⣿⣿⣿⣿⣿
⣿⠋⣀⡉⠻⠀⠘⠛⠻⠶⢶⣤⣤⣀⡀⢠⠿⠟⠛⠉⠁⣸⣿⣿⣿⣿⣿⣿⣿⣿
⣿⡀⠛⠳⠆⠀⠀⠀⠀⠀⠀⠀⠉⠉⠛⠛⠷⠶⣦⠄⢀⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣶⣦⣀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣶⣶⣤⣤⣀⣀⠀⠀⠀⢠⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣶⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#using-supernote-nomad-offline'>Using Supernote Nomad offline</a></li>
<li>⇢ <a href='#the-joy-of-being-offline'>The Joy of Being Offline</a></li>
<li>⇢ <a href='#my-offline-workflow'>My Offline Workflow</a></li>
<li>⇢ ⇢ <a href='#converting-notes-to-pdf'>Converting Notes to PDF</a></li>
<li>⇢ ⇢ <a href='#syncing-to-my-phone'>Syncing to my Phone</a></li>
<li>⇢ ⇢ <a href='#firmware-updates'>Firmware updates</a></li>
<li>⇢ <a href='#the-writing-experience'>The Writing Experience</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='the-joy-of-being-offline'>The Joy of Being Offline</h2><br />
<br />
<span>I keep my Supernote Nomad offline at all times. No Wi-Fi, no cloud sync, just me and my notes. And honestly, it&#39;s great.</span><br />
<br />
<span>With Wi-Fi off, the battery lasts about a week on a single charge (how convenient :-)).</span><br />
<br />
<span>Privacy was my main concern, though. I don&#39;t sync anything to Retta&#39;s cloud, so my notes stay mine. No one&#39;s reading or mining my stuff. Simple as that.</span><br />
<br />
<a href='./using-supernote-nomad-offline/nomad2.jpg'><img alt='A picture of the Supernote Nomad' title='A picture of the Supernote Nomad' src='./using-supernote-nomad-offline/nomad2.jpg' /></a><br />
<br />
<h2 style='display: inline' id='my-offline-workflow'>My Offline Workflow</h2><br />
<br />
<span>My workflow is simple and relies only on a direct USB connection to my Linux laptop.</span><br />
<br />
<span>I connect my Supernote Nomad to my Linux laptop via a USB-C cable. The device is automatically recognized as a storage device, and I can directly access the <span class='inlinecode'>Note</span> folder, which contains all my notes as <span class='inlinecode'>.note</span> files. I then copy these files to a dedicated archive folder on my laptop.</span><br />
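<span>For reference, the archive step looks roughly like this. The mount point is an assumption (it depends on your distribution and desktop environment), and the helper name is mine:</span><br />

```shell
#!/usr/bin/env bash
# Sketch of the archive step: copy all .note files from the mounted
# Supernote storage into a local archive folder, preserving the folder
# layout. Adjust the mount point to wherever your system mounts the Nomad.
archive_notes() {
  src="$1"   # e.g. "/run/media/$USER/Supernote/Note" (assumed mount point)
  dest="$2"  # e.g. "$HOME/supernote-archive"
  mkdir -p "$dest"
  rsync -av --include='*/' --include='*.note' --exclude='*' "$src/" "$dest/"
}

# archive_notes "/run/media/$USER/Supernote/Note" "$HOME/supernote-archive"
```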
<br />
<h3 style='display: inline' id='converting-notes-to-pdf'>Converting Notes to PDF</h3><br />
<br />
<span>To make my notes accessible and shareable, I convert them from the proprietary <span class='inlinecode'>.note</span> format to PDF. For this, I use a fantastic open-source tool called <span class='inlinecode'>supernote-tool</span>. It&#39;s not an official tool from Ratta, but it works flawlessly.</span><br />
<br />
<a class='textlink' href='https://github.com/jya-dev/supernote-tool'>https://github.com/jya-dev/supernote-tool</a><br />
<br />
<span>I&#39;ve created a small shell script to automate the conversion process using this tool. This script, <span class='inlinecode'>convert-notes-to-pdfs.sh</span>, resides in my notes archive folder:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver">#!/usr/bin/env bash</font></i>

convert () {
  find . -name \*.note \
    | <b><u><font color="#000000">while</font></u></b> <b><u><font color="#000000">read</font></u></b> -r note; <b><u><font color="#000000">do</font></u></b>
        echo supernote-tool convert -a -t pdf <font color="#808080">"$note"</font> <font color="#808080">"${note/.note/.pdf}"</font>
        supernote-tool convert -a -t pdf <font color="#808080">"$note"</font> <font color="#808080">"${note/.note/.pdf}.tmp"</font>
        mv <font color="#808080">"${note/.note/.pdf}.tmp"</font> <font color="#808080">"${note/.note/.pdf}"</font>
        du -hs <font color="#808080">"$note"</font> <font color="#808080">"${note/.note/.pdf}"</font>
        echo
      <b><u><font color="#000000">done</font></u></b>
}

<i><font color="silver"># Make the PDFs available on my Phone as well</font></i>
copy () {
  <b><u><font color="#000000">if</font></u></b> [ ! -d ~/Documents/Supernote ]; <b><u><font color="#000000">then</font></u></b>
    echo <font color="#808080">"Directory ~/Documents/Supernote does not exist, skipping"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
  <b><u><font color="#000000">fi</font></u></b>

  rsync --delete -av --include=<font color="#808080">'*/'</font> --include=<font color="#808080">'*.pdf'</font> --exclude=<font color="#808080">'*'</font> . ~/Documents/Supernote/
  echo <font color="#808080">"This was copied from $(pwd), so don't edit manually"</font> &gt;~/Documents/Supernote/README.txt
}

convert
copy
</pre>
<br />
<span>This script does two things:</span><br />
<br />
<ul>
<li>It finds all <span class='inlinecode'>.note</span> files in the current directory and converts them to PDF using <span class='inlinecode'>supernote-tool</span>.</li>
<li>It copies the generated PDFs to my <span class='inlinecode'>~/Documents/Supernote</span> folder.</li>
</ul><br />
<h3 style='display: inline' id='syncing-to-my-phone'>Syncing to my Phone</h3><br />
<br />
<span>The <span class='inlinecode'>~/Documents/Supernote</span> folder on my laptop is synchronized with my phone using Syncthing. This way, I have access to all my notes in PDF format on my phone, wherever I go, without relying on any cloud service.</span><br />
<br />
<a class='textlink' href='https://syncthing.net/'>https://syncthing.net/</a><br />
<br />
<h3 style='display: inline' id='firmware-updates'>Firmware updates</h3><br />
<br />
<span>One usually updates the software or firmware of the Supernote Nomad via Wi-Fi. However, it is also possible to update it completely offline. To install the firmware update, follow the steps below (the following instructions were copied from the Supernote website):</span><br />
<br />
<ul>
<li>Connect your Supernote to your PC with a USB-C cable. For macOS, an MTP software (e.g. OpenMTP or Android File Transfer) is required for your Supernote to show up on your Mac. </li>
<li>For Manta, Nomad, A5 X and A6 X devices, copy the firmware (DO NOT UNZIP) to the "Export" folder of Supernote; for A5 and A6 devices, copy the firmware (DO NOT UNZIP) to the root directory of Supernote.</li>
<li>Unplug the USB connection, tap “OK” on your Supernote to continue, and if no prompt pops up, please restart your device directly to proceed to update.</li>
</ul><br />
<h2 style='display: inline' id='the-writing-experience'>The Writing Experience</h2><br />
<br />
<span>The writing feel of the Supernote Nomad is simply great. The combination of the screen&#39;s texture and the ceramic nib of the pen creates a feeling that is remarkably close to writing on real paper. The latency is almost non-existent, and the pressure sensitivity allows for a natural and expressive writing experience. It&#39;s great to write on, and it makes me want to take more notes.</span><br />
<br />
<a href='./using-supernote-nomad-offline/nomad1.jpg'><img alt='Another picture of the Supernote Nomad' title='Another picture of the Supernote Nomad' src='./using-supernote-nomad-offline/nomad1.jpg' /></a><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>The Supernote Nomad has become an additional tool for me. By using it offline, I&#39;ve created a distraction-free and private note-taking environment. The simple, manual workflow for transferring and converting notes gives me full control over my data, and the writing experience is second to none. If you&#39;re looking for a digital notebook that respects your privacy and helps you focus, I highly recommend giving the Supernote Nomad a try with an offline-first approach.</span><br />
<br />
<span>The Supernote didn&#39;t fully replace my traditional paper journals, though. Each of them has its own use case. However, that is outside the scope of this blog post.</span><br />
<br />
<span>Other related posts:</span><br />
<br />
<a class='textlink' href='./2026-01-01-using-supernote-nomad-offline.html'>2026-01-01 Using Supernote Nomad offline (You are currently reading this)</a><br />
<a class='textlink' href='./2026-01-01-cloudless-kobo-forma-with-koreader.html'>2026-01-01 Cloudless Kobo Forma with KOReader</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Posts from July to December 2025</title>
        <link href="https://foo.zone/gemfeed/2026-01-01-posts-from-july-to-december-2025.html" />
        <id>https://foo.zone/gemfeed/2026-01-01-posts-from-july-to-december-2025.html</id>
        <updated>2025-12-31T15:49:06+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Hello there, I wish you all a happy new year! These are my social media posts from the last six months. I keep them here to reflect on them and also to not lose them. Social media networks come and go and are not under my control, but my domain is here to stay. </summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='posts-from-july-to-december-2025'>Posts from July to December 2025</h1><br />
<br />
<span class='quote'>Published at 2025-12-31T15:49:06+02:00</span><br />
<br />
<span>Hello there, I wish you all a happy new year! These are my social media posts from the last six months. I keep them here to reflect on them and also to not lose them. Social media networks come and go and are not under my control, but my domain is here to stay. </span><br />
<br />
<span>These are from Mastodon and LinkedIn. Have a look at my about page for my social media profiles. This list is generated with Gos, my social media platform sharing tool.</span><br />
<br />
<a class='textlink' href='../about/index.html'>My about page</a><br />
<a class='textlink' href='https://codeberg.org/snonux/gos'>https://codeberg.org/snonux/gos</a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#posts-from-july-to-december-2025'>Posts from July to December 2025</a></li>
<li>⇢ <a href='#july-2025'>July 2025</a></li>
<li>⇢ ⇢ <a href='#in-golang-values-are-actually-copied-when-'>In <span class='inlinecode'>#Golang</span>, values are actually copied when ...</a></li>
<li>⇢ ⇢ <a href='#same-experiences-i-had-but-it-s-a-time-saver-'>Same experiences I had, but it&#39;s a time saver. ...</a></li>
<li>⇢ ⇢ <a href='#we-programmers-all-use-them-i-hope-'>We (programmers) all use them (I hope): ...</a></li>
<li>⇢ ⇢ <a href='#shells-of-the-early-unices-didnt-understand-'>Shells of the early Unices didn&#39;t understand ...</a></li>
<li>⇢ ⇢ <a href='#i-ve-picked-up-a-few-techniques-from-this-blog-'>I&#39;ve picked up a few techniques from this blog ...</a></li>
<li>⇢ ⇢ <a href='#i-ve-published-the-sixth-part-of-my-kubernetes-'>I&#39;ve published the sixth part of my "Kubernetes ...</a></li>
<li>⇢ ⇢ <a href='#the-book-coders-at-work-offers-a-fascinating-'>The book "Coders at Work" offers a fascinating ...</a></li>
<li>⇢ ⇢ <a href='#for-me-that-s-all-normal-couldn-t-imagine-a-'>For me, that&#39;s all normal. Couldn&#39;t imagine a ...</a></li>
<li>⇢ ⇢ <a href='#this-is-similar-to-my-dtail-project-it-got-'>This is similar to my <span class='inlinecode'>#dtail</span> project. It got ...</a></li>
<li>⇢ ⇢ <a href='#i-also-feel-the-most-comfortable-in-the-'>I also feel the most comfortable in the ...</a></li>
<li>⇢ ⇢ <a href='#i-have-been-enjoying-lately-as-an-alternative-'>I have been enjoying lately as an alternative ...</a></li>
<li>⇢ ⇢ <a href='#jonathan-s-reflection-of-10-years-of-'>Jonathan&#39;s reflection of 10 years of ...</a></li>
<li>⇢ ⇢ <a href='#some-neat-zero-copy-golang-tricks-here-'>Some neat zero-copy <span class='inlinecode'>#Golang</span> tricks here ...</a></li>
<li>⇢ ⇢ <a href='#what-was-it-like-working-at-gitlab-a-scary-'>What was it like working at GitLab? A scary ...</a></li>
<li>⇢ ⇢ <a href='#i-have-learned-a-lot-from-the-practical-ai-'>I have learned a lot from the Practical <span class='inlinecode'>#AI</span> ...</a></li>
<li>⇢ <a href='#august-2025'>August 2025</a></li>
<li>⇢ ⇢ <a href='#at-the-end-of-the-article-it-s-mentione-that-'>At the end of the article it&#39;s mentioned that ...</a></li>
<li>⇢ ⇢ <a href='#great-blog-post-a-out-openbsdamsterdam-of-'>Great blog post about <span class='inlinecode'>#OpenBSDAmsterdam</span>, of ...</a></li>
<li>⇢ ⇢ <a href='#interesting-llm-ai-slowdown-'>Interesting. <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#slowdown</span> ...</a></li>
<li>⇢ ⇢ <a href='#with-the-help-of-genai-i-could-generate-this-'>With the help of genai, I could generate this ...</a></li>
<li>⇢ ⇢ <a href='#i-tinkered-a-bit-with-local-llms-for-coding-'>I tinkered a bit with local LLMs for coding: ...</a></li>
<li>⇢ ⇢ <a href='#good-stuff-10-years-of-functional-options-and-'>Good stuff: 10 years of functional options and ...</a></li>
<li>⇢ ⇢ <a href='#top-5-performance-boosters-golang-'>Top 5 performance boosters <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#this-person-found-the-balance-although-i-'>This person found the balance.. although I ...</a></li>
<li>⇢ ⇢ <a href='#let-s-rewrite-all-slow-in-assembly-surely-'>Let&#39;s rewrite all slow in <span class='inlinecode'>#assembly</span>, surely ...</a></li>
<li>⇢ ⇢ <a href='#how-to-store-data-forever-storage-'>How to store data forever? <span class='inlinecode'>#storage</span> ...</a></li>
<li>⇢ ⇢ <a href='#no-wonder-that-almost-everyone-doing-something-'>No wonder that almost everyone doing something ...</a></li>
<li>⇢ ⇢ <a href='#another-drawback-of-running-load-tests-in-a-'>Another drawback of running load tests in a ...</a></li>
<li>⇢ ⇢ <a href='#interesting-read-learnings-from-two-years-of-'>Interesting read: Learnings from two years of ...</a></li>
<li>⇢ ⇢ <a href='#neat-little-story-a-school-girl-writing-her-'>Neat little story: a school girl writing her ...</a></li>
<li>⇢ ⇢ <a href='#happy-that-i-am-not-yet-obsolete-llm-'>Happy, that I am not yet obsolete! <span class='inlinecode'>#llm</span> ...</a></li>
<li>⇢ <a href='#september-2025'>September 2025</a></li>
<li>⇢ ⇢ <a href='#loving-this-as-well-slackware-linux-'>Loving this as well: <span class='inlinecode'>#slackware</span> <span class='inlinecode'>#linux</span> ...</a></li>
<li>⇢ ⇢ <a href='#some-fun-random-weird-things-part-iii-blog-'>Some <span class='inlinecode'>#fun</span>: Random Weird Things Part III blog ...</a></li>
<li>⇢ ⇢ <a href='#yes-write-more-useless-software-i-agree-that-'>Yes, write more useless software. I agree that ...</a></li>
<li>⇢ ⇢ <a href='#i-learned-a-lot-from-this-openbsd-relayd-'>I learned a lot from this <span class='inlinecode'>#OpenBSD</span> <span class='inlinecode'>#relayd</span> ...</a></li>
<li>⇢ ⇢ <a href='#six-weeks-of-claude-code'>Six weeks of claude code</a></li>
<li>⇢ ⇢ <a href='#it-s-good-that-there-is-now-a-truly-open-source-'>It&#39;s good that there is now a truly open-source ...</a></li>
<li>⇢ ⇢ <a href='#have-to-try-this-at-some-point-'>Have to try this at some point ...</a></li>
<li>⇢ ⇢ <a href='#i-could-not-agree-more-for-me-a-personal-'>I could not agree more. For me, a personal ...</a></li>
<li>⇢ ⇢ <a href='#the-true-enterprise-developer-can-write-java-in-'>The true enterprise developer can write Java in ...</a></li>
<li>⇢ ⇢ <a href='#fx-is-a-neat-little-tool-for-viewing-json-'><span class='inlinecode'>#fx</span> is a neat little tool for viewing JSON ...</a></li>
<li>⇢ ⇢ <a href='#i-wish-i-had-as-much-time-as-this-guy-he-'>I wish I had as much time as this guy. He ...</a></li>
<li>⇢ ⇢ <a href='#what-exactly-was-the-point-of--xvar--'>What exactly was the point of [ “x$var” = ...</a></li>
<li>⇢ ⇢ <a href='#neat-zfs-feature-here-freebsd-which-i-'>Neat <span class='inlinecode'>#ZFS</span> feature (here <span class='inlinecode'>#FreeBSD</span>) which I ...</a></li>
<li>⇢ ⇢ <a href='#longer-hours-help-only-short-term-about-40-'>Longer hours help only short term. About 40 ...</a></li>
<li>⇢ ⇢ <a href='#you-could-also-use-bpf-instead-of-strace-'>You could also use <span class='inlinecode'>#bpf</span> instead of <span class='inlinecode'>#strace</span>, ...</a></li>
<li>⇢ ⇢ <a href='#some-great-things-are-approaching-bhyve-on-'>Some great things are approaching <span class='inlinecode'>#bhyve</span> on ...</a></li>
<li>⇢ ⇢ <a href='#another-synchronization-tool-part-of-the-'>Another synchronization tool part of the ...</a></li>
<li>⇢ ⇢ <a href='#too-many-open-files-linux-'>Too many open files <span class='inlinecode'>#linux</span> ...</a></li>
<li>⇢ ⇢ <a href='#just-posted-part-4-of-my-bash-golf-'>Just posted Part 4 of my <span class='inlinecode'>#Bash</span> <span class='inlinecode'>#Golf</span> ...</a></li>
<li>⇢ ⇢ <a href='#perl-is-like-a-swiss-army-knife-as-one-of-'><span class='inlinecode'>#Perl</span> is like a swiss army knife, as one of ...</a></li>
<li>⇢ ⇢ <a href='#personally-mainly-working-with-colorless-'>Personally, mainly working with colorless ...</a></li>
<li>⇢ ⇢ <a href='#how-do-gpus-work-usually-people-only-know-'>How do GPUs work? Usually, people only know ...</a></li>
<li>⇢ ⇢ <a href='#for-unattended-upgrades-you-must-have-a-good-'>For unattended upgrades you must have a good ...</a></li>
<li>⇢ ⇢ <a href='#surely-in-the-age-of-ai-and-llm-people-'>Surely, in the age of <span class='inlinecode'>#AI</span> and <span class='inlinecode'>#LLM</span>, people ...</a></li>
<li>⇢ ⇢ <a href='#on-ai-changes-everything-'>On <span class='inlinecode'>#AI</span> changes everything... ...</a></li>
<li>⇢ ⇢ <a href='#maps-in-go-under-the-hood-golang-'>Maps in Go under the hood <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#a-project-that-looks-complex-might-just-be-'>"A project that looks complex might just be ...</a></li>
<li>⇢ ⇢ <a href='#i-must-admit-that-partly-i-see-myself-there-'>I must admit that partly I see myself there ...</a></li>
<li>⇢ ⇢ <a href='#makes-me-think-of-good-old-times-where-i-'>Makes me think of good old times, where I ...</a></li>
<li>⇢ ⇢ <a href='#neat-little-blog-post-showcasing-various-'>Neat little blog post, showcasing various ...</a></li>
<li>⇢ ⇢ <a href='#share-didn-t-know-that-on-macos-besides-of-'>share Didn&#39;t know, that on MacOS, besides of ...</a></li>
<li>⇢ ⇢ <a href='#i-think-this-is-the-way-use-llms-for-code-you-'>I think this is the way: use LLMs for code you ...</a></li>
<li>⇢ ⇢ <a href='#always-enable-keepalive-i-d-say-most-of-the-'>Always enable keepalive? I&#39;d say most of the ...</a></li>
<li>⇢ ⇢ <a href='#i-just-finished-reading-chaos-engineering-by-'>I just finished reading "Chaos Engineering" by ...</a></li>
<li>⇢ ⇢ <a href='#fx-is-a-neat-and-tidy-command-line-tool-for-'>fx is a neat and tidy command-line tool for ...</a></li>
<li>⇢ ⇢ <a href='#some-nice-golang-tricks-there-'>Some nice <span class='inlinecode'>#Golang</span> tricks there ...</a></li>
<li>⇢ <a href='#october-2025'>October 2025</a></li>
<li>⇢ ⇢ <a href='#word-what-are-we-losing-with-ai-llm-ai-'>Word! What Are We Losing With AI? <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> ...</a></li>
<li>⇢ ⇢ <a href='#it-s-not-yet-time-for-the-friday-fun-but-'>It&#39;s not yet time for the friday <span class='inlinecode'>#fun</span>, but: ...</a></li>
<li>⇢ ⇢ <a href='#finally-i-retired-my-awsecs-setup-for-my-'>Finally, I retired my AWS/ECS setup for my ...</a></li>
<li>⇢ ⇢ <a href='#a-great-blog-post-about-my-favourite-text-'>A great blog post about my favourite text ...</a></li>
<li>⇢ ⇢ <a href='#one-of-the-more-confusing-parts-in-go-nil-'>One of the more confusing parts in Go, nil ...</a></li>
<li>⇢ ⇢ <a href='#strong-engineers-are-pragmatic-work-fast-have-'>Strong engineers are pragmatic, work fast, have ...</a></li>
<li>⇢ ⇢ <a href='#i-am-currently-binge-listening-to-the-google-'>I am currently binge-listening to the Google ...</a></li>
<li>⇢ ⇢ <a href='#looks-like-a-neat-library-for-writing-'>Looks like a neat library for writing ...</a></li>
<li>⇢ ⇢ <a href='#where-gen-ai-shines-is-the-generation-and-'>Where Gen AI shines is the generation and ...</a></li>
<li>⇢ ⇢ <a href='#at-work-everybody-is-replacable-some-with-a-'>At work, everybody is replacable. Some with a ...</a></li>
<li>⇢ ⇢ <a href='#i-actually-would-switch-back-to-freebsd-as-'>I actually would switch back to <span class='inlinecode'>#FreeBSD</span> as ...</a></li>
<li>⇢ ⇢ <a href='#amazing-print-is-amazing-'>Amazing Print is amazing ...</a></li>
<li>⇢ ⇢ <a href='#always-worth-a-reminde-what-are-bloom-filters-'>Always worth a reminder, what are bloom filters ...</a></li>
<li>⇢ ⇢ <a href='#some-ruby-book-notes-of-mine-'>Some <span class='inlinecode'>#Ruby</span> book notes of mine: ...</a></li>
<li>⇢ ⇢ <a href='#sad-story-work-scrum-jira-'>Sad story. <span class='inlinecode'>#work</span> <span class='inlinecode'>#scrum</span> <span class='inlinecode'>#jira</span> ...</a></li>
<li>⇢ ⇢ <a href='#one-of-my-favorite-books-some-thoughts-on-'>One of my favorite books: "Some Thoughts on ...</a></li>
<li>⇢ ⇢ <a href='#ltex-ls-is-great-for-integrating-'>ltex-ls is great for integrating ...</a></li>
<li>⇢ ⇢ <a href='#supernote-tool-is-awesome-as-i-can-now-'>supernote-tool is awesome, as I can now ...</a></li>
<li>⇢ ⇢ <a href='#fun-story---the-case-of-the-500-mile-email-'>Fun story! :-) The case of the 500-mile email ...</a></li>
<li>⇢ ⇢ <a href='#operating-myself-some-software-over-10-years-of-'>Operating myself some software over 10 years of ...</a></li>
<li>⇢ ⇢ <a href='#git-worktrees-are-awesome-'><span class='inlinecode'>#git</span> worktrees are awesome! ...</a></li>
<li>⇢ ⇢ <a href='#llms-for-anomaly-detection-while-some-'>LLMs for anomaly detection? "While some ...</a></li>
<li>⇢ ⇢ <a href='#after-having-heavily-vibe-coded-personal-pet-'>After having heavily vibe-coded (personal pet ...</a></li>
<li>⇢ ⇢ <a href='#slowly-one-after-another-i-am-switching-all-'>Slowly, one after another, I am switching all ...</a></li>
<li>⇢ ⇢ <a href='#some-neat-slice-tricks-for-go-golang-'>Some neat slice tricks for Go: <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#i-spent-way-too-much-time-on-this-site-it-s-'>I spent way too much time on this site. It&#39;s ...</a></li>
<li>⇢ ⇢ <a href='#i-share-similar-experiences-with-rust-but-i-'>I share similar experiences with <span class='inlinecode'>#rust</span>, but I ...</a></li>
<li>⇢ ⇢ <a href='#pipelines-in-go-using-channels-golang-'>Pipelines in Go using channels. <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#some-nifty-ruby-tricks-in-my-opinion-ruby-'>Some nifty <span class='inlinecode'>#Ruby</span> tricks: In my opinion, Ruby ...</a></li>
<li>⇢ ⇢ <a href='#reflects-my-experience-'>Reflects my experience ...</a></li>
<li>⇢ ⇢ <a href='#i-like-the-fact-that-markdown-fikes-a-rcs-an-'>I like the fact that Markdown fikes, a RCS. an ...</a></li>
<li>⇢ ⇢ <a href='#rich-interactive-widgets-for-terminal-uis-it-'>Rich Interactive Widgets for Terminal UIs, it ...</a></li>
<li>⇢ ⇢ <a href='#always-fun-to-dig-in-the-perl-perl-woods-'>Always fun to dig in the <span class='inlinecode'>#Perl</span> @Perl woods. ...</a></li>
<li>⇢ ⇢ <a href='#how-does-virtual-memory-work-ram-'>How does <span class='inlinecode'>#virtual</span> <span class='inlinecode'>#memory</span> work? <span class='inlinecode'>#ram</span> ...</a></li>
<li>⇢ ⇢ <a href='#flamelens---an-interactive-flamegraph-viewer-in-'>flamelens - An interactive flamegraph viewer in ...</a></li>
<li>⇢ ⇢ <a href='#you-can-now-run-ansible-playbooks-and-shell-'>You can now run Ansible Playbooks and shell ...</a></li>
<li>⇢ ⇢ <a href='#for-people-working-with-k8s-this-tool-is-'>For people working with <span class='inlinecode'>#k8s</span>, this tool is ...</a></li>
<li>⇢ <a href='#november-2025'>November 2025</a></li>
<li>⇢ ⇢ <a href='#yes-using-the-right-tool-for-the-job-and-'>Yes, using the right <span class='inlinecode'>#tool</span> for the job and ...</a></li>
<li>⇢ ⇢ <a href='#some-neat-go-tricks-golang-'>Some neat Go tricks: <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#there-are-some-truths-in-this-sre-article-'>There are some truths in this <span class='inlinecode'>#SRE</span> article: ...</a></li>
<li>⇢ ⇢ <a href='#the-go-flight-recorder-is-a-tool-that-allows-'>The Go flight recorder is a tool that allows ...</a></li>
<li>⇢ ⇢ <a href='#this-is-useful-golang-'>This is useful <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#great-visually-animated-guide-how-raft-'>Great visually animated guide how <span class='inlinecode'>#raft</span> ...</a></li>
<li>⇢ ⇢ <a href='#todays-junior-devs-who-skip-the-hard-'>"Today’s junior devs who skip the “hard ...</a></li>
<li>⇢ ⇢ <a href='#i-actually-enjoyed-readong-through-the-fish-'>I actually enjoyed reading through the <span class='inlinecode'>#Fish</span> ...</a></li>
<li>⇢ ⇢ <a href='#there-can-be-many-things-which-can-go-wrong-'>There can be many things which can go wrong, ...</a></li>
<li>⇢ ⇢ <a href='#imho-motivation-is-not-always-enough-there-'>IMHO, motivation is not always enough. There ...</a></li>
<li>⇢ ⇢ <a href='#have-been-generating-those-cpu-flame-graphs-on-'>Have been generating those CPU flame graphs on ...</a></li>
<li>⇢ ⇢ <a href='#i-personally-don-t-like-the-typical-whiteboard-'>I personally don&#39;t like the typical whiteboard ...</a></li>
<li>⇢ ⇢ <a href='#if-you-ve-wondered-how-cpus-and-operating-'>If you&#39;ve wondered how CPUs and operating ...</a></li>
<li>⇢ ⇢ <a href='#and-there-s-an-unexpected-winner---erlang-'>And there&#39;s an unexpected winner :-) <span class='inlinecode'>#erlang</span> ...</a></li>
<li>⇢ ⇢ <a href='#is-it-it-this-is-it-what-is-it-in-ruby-34-'>Is it it? This is it. What Is It (in Ruby 3.4)? ...</a></li>
<li>⇢ ⇢ <a href='#from-my-recent-london-trip-i-ve-uploaded-'>From my recent <span class='inlinecode'>#London</span> trip, I&#39;ve uploaded ...</a></li>
<li>⇢ ⇢ <a href='#agreed-you-should-make-your-own-programming-'>Agreed, you should make your own programming ...</a></li>
<li>⇢ ⇢ <a href='#principles-for-c-programming-c-'>Principles for C programming <span class='inlinecode'>#C</span> ...</a></li>
<li>⇢ ⇢ <a href='#typst-appears-to-be-a-great-modern-'><span class='inlinecode'>#Typst</span> appears to be a great modern ...</a></li>
<li>⇢ ⇢ <a href='#things-you-can-do-with-a-debugger-but-not-with-'>Things you can do with a debugger but not with ...</a></li>
<li>⇢ ⇢ <a href='#neat-tutorial-i-think-i-ve-to-try-jujutsu-'>Neat tutorial, I think I&#39;ve to try <span class='inlinecode'>#jujutsu</span> ...</a></li>
<li>⇢ ⇢ <a href='#wise-words-best-practices-are-not-rules-they-'>Wise words: Best practices are not rules. They ...</a></li>
<li>⇢ ⇢ <a href='#how-to-build-a-linux-container-from-'>How to build a <span class='inlinecode'>#Linux</span> <span class='inlinecode'>#Container</span> from ...</a></li>
<li>⇢ ⇢ <a href='#when-i-reach-the-point-where-i-am-trying-to-'>When I reach the point where I am trying to ...</a></li>
<li>⇢ ⇢ <a href='#personally-one-of-the-main-benefits-of-using-'>Personally one of the main benefits of using ...</a></li>
<li>⇢ <a href='#december-2025'>December 2025</a></li>
<li>⇢ ⇢ <a href='#rhese-are-some-nice-ruby-tricks-ruby-is-onw-'>These are some nice <span class='inlinecode'>#Ruby</span> tricks (Ruby is onw ...</a></li>
<li>⇢ ⇢ <a href='#that-s-fun-use-the-c-preprocessor-as-a-html-'>That&#39;s fun, use the C preprocessor as a HTML ...</a></li>
<li>⇢ ⇢ <a href='#jq-but-for-markdown-thats-interesting-'><span class='inlinecode'>#jq</span> but for <span class='inlinecode'>#Markdown</span>? Thats interesting, ...</a></li>
<li>⇢ ⇢ <a href='#elvish-seems-to-be-a-neat-little-shell-it-s-'>Elvish seems to be a neat little shell. It&#39;s ...</a></li>
<li>⇢ ⇢ <a href='#google-sre-required-better-wifi-on-the-'>Google <span class='inlinecode'>#SRE</span> required better Wifi on the ...</a></li>
<li>⇢ ⇢ <a href='#indeed-'>Indeed ...</a></li>
<li>⇢ ⇢ <a href='#very-interesting-post-how-pods-are-scheduled-'>Very interesting post how pods are scheduled ...</a></li>
<li>⇢ ⇢ <a href='#i-have-added-observability-to-the-kubernetes-'>I have added observability to the <span class='inlinecode'>#Kubernetes</span> ...</a></li>
<li>⇢ ⇢ <a href='#wondering-where-i-could-make-use-of-it-'>Wondering where I could make use of it ...</a></li>
<li>⇢ ⇢ <a href='#trying-out-cosmic-desktop-seems-'>Trying out <span class='inlinecode'>#COSMIC</span> <span class='inlinecode'>#Desktop</span>... seems ...</a></li>
<li>⇢ ⇢ <a href='#best-thing-i-ve-ever-read-about-container-'>Best thing I&#39;ve ever read about <span class='inlinecode'>#container</span> ...</a></li>
<li>⇢ ⇢ <a href='#while-acknowledging-luck-in-finding-the-right-'>While acknowledging luck in finding the right ...</a></li>
<li>⇢ ⇢ <a href='#great-explanation-slo-sla-sli-sre-'>Great explanation <span class='inlinecode'>#slo</span> <span class='inlinecode'>#sla</span> <span class='inlinecode'>#sli</span> <span class='inlinecode'>#sre</span> ...</a></li>
<li>⇢ ⇢ <a href='#nice-service-you-send-a-drive-they-host-'>Nice service, you send a drive, they host ...</a></li>
</ul><br />
<h2 style='display: inline' id='july-2025'>July 2025</h2><br />
<br />
<h3 style='display: inline' id='in-golang-values-are-actually-copied-when-'>In <span class='inlinecode'>#Golang</span>, values are actually copied when ...</h3><br />
<br />
<span>In <span class='inlinecode'>#Golang</span>, values are actually copied when assigned (boxed) into an interface. That can have a performance impact.</span><br />
<br />
<a class='textlink' href='https://goperf.dev/01-common-patterns/interface-boxing/'>goperf.dev/01-common-patterns/interface-boxing/</a><br />
<br />
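<span>To illustrate the copy-on-box behaviour (a minimal sketch of my own, not taken from the linked article):</span><br />

```go
package main

import "fmt"

type point struct{ x, y int }

func main() {
	p := point{1, 2}
	var i interface{} = p // p is copied (boxed) into the interface value here
	p.x = 99              // mutating the original struct afterwards...
	// ...does not affect the boxed copy stored in the interface:
	fmt.Println(i.(point).x) // prints 1, not 99
}
```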
<h3 style='display: inline' id='same-experiences-i-had-but-it-s-a-time-saver-'>Same experiences I had, but it&#39;s a time saver. ...</h3><br />
<br />
<span>Same experiences I had, but it&#39;s a time saver, and when done correctly, those tools are amazing: <span class='inlinecode'>#llm</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://lucumr.pocoo.org/2025/06/21/my-first-ai-library/'>lucumr.pocoo.org/2025/06/21/my-first-ai-library/</a><br />
<br />
<h3 style='display: inline' id='we-programmers-all-use-them-i-hope-'>We (programmers) all use them (I hope): ...</h3><br />
<br />
<span>We (programmers) all use them (I hope): language servers. LSP stands for Language Server Protocol, which standardizes communication between code editors or IDEs and language servers, facilitating features like autocompletion, refactoring, linting, and error-checking. It&#39;s interesting to look under the hood a little to see how your code editor actually communicates with a language server. <span class='inlinecode'>#LSP</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://packagemain.tech/p/understanding-the-language-server-protocol'>packagemain.tech/p/understanding-the-language-server-protocol</a><br />
<br />
<h3 style='display: inline' id='shells-of-the-early-unices-didnt-understand-'>Shells of the early Unices didn&#39;t understand ...</h3><br />
<br />
<span>Shells of the early Unices didn&#39;t understand file globbing; that was done by the external glob command! <span class='inlinecode'>#unix</span> <span class='inlinecode'>#history</span> <span class='inlinecode'>#shell</span></span><br />
<br />
<a class='textlink' href='https://utcc.utoronto.ca/%7Ecks/space/blog/unix/EtcGlobHistory'>utcc.utoronto.ca/%7Ecks/space/blog/unix/EtcGlobHistory</a><br />
<br />
<h3 style='display: inline' id='i-ve-picked-up-a-few-techniques-from-this-blog-'>I&#39;ve picked up a few techniques from this blog ...</h3><br />
<br />
<span>I&#39;ve picked up a few techniques from this blog post and found them worth sharing here: <span class='inlinecode'>#ai</span> <span class='inlinecode'>#llm</span> <span class='inlinecode'>#prompting</span> <span class='inlinecode'>#techniques</span></span><br />
<br />
<a class='textlink' href='https://cracking-ai-engineering.com/writing/2025/07/07/four-prompting-paradigms/'>cracking-ai-engineering.com/writing/2025/07/07/four-prompting-paradigms/</a><br />
<br />
<h3 style='display: inline' id='i-ve-published-the-sixth-part-of-my-kubernetes-'>I&#39;ve published the sixth part of my "Kubernetes ...</h3><br />
<br />
<span>I&#39;ve published the sixth part of my "Kubernetes with FreeBSD" blog series. This time, I set up the storage, which will be used with persistent volume claims later on in the Kubernetes cluster. Have a lot of fun! <span class='inlinecode'>#freebsd</span> <span class='inlinecode'>#nfs</span> <span class='inlinecode'>#ha</span> <span class='inlinecode'>#zfs</span> <span class='inlinecode'>#zrepl</span> <span class='inlinecode'>#carp</span> <span class='inlinecode'>#kubernetes</span> <span class='inlinecode'>#k8s</span> <span class='inlinecode'>#k3s</span> <span class='inlinecode'>#homelab</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.gmi'>foo.zone/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>foo.zone/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html</a><br />
<br />
<h3 style='display: inline' id='the-book-coders-at-work-offers-a-fascinating-'>The book "Coders at Work" offers a fascinating ...</h3><br />
<br />
<span>The book "Coders at Work" offers a fascinating glimpse into how programming legends emerged in the early days of computing. I especially enjoyed the personal stories and insights. It would be great to see a new edition reflecting today’s AI and LLM revolution; so much has changed since then!</span><br />
<br />
<a class='textlink' href='https://www.goodreads.com/book/show/6713575-coders-at-work'>www.goodreads.com/book/show/6713575-coders-at-work</a><br />
<br />
<h3 style='display: inline' id='for-me-that-s-all-normal-couldn-t-imagine-a-'>For me, that&#39;s all normal. Couldn&#39;t imagine a ...</h3><br />
<br />
<span>For me, that&#39;s all normal. Couldn&#39;t imagine a simpler job. <span class='inlinecode'>#software</span></span><br />
<br />
<a class='textlink' href='https://0x1.pt/2025/04/06/the-insanity-of-being-a-software-engineer/'>0x1.pt/2025/04/06/the-insanity-of-being-a-software-engineer/</a><br />
<br />
<h3 style='display: inline' id='this-is-similar-to-my-dtail-project-it-got-'>This is similar to my <span class='inlinecode'>#dtail</span> project. It got ...</h3><br />
<br />
<span>This is similar to my <span class='inlinecode'>#dtail</span> project. It got some features which dtail doesn&#39;t have, and dtail has some features which <span class='inlinecode'>#nerdlog</span> doesn&#39;t. But the principle is the same: both tools have no centralised log store, and both use SSH to connect directly to the servers (the sources of the logs).</span><br />
<br />
<a class='textlink' href='https://github.com/dimonomid/nerdlog'>github.com/dimonomid/nerdlog</a><br />
<br />
<h3 style='display: inline' id='i-also-feel-the-most-comfortable-in-the-'>I also feel the most comfortable in the ...</h3><br />
<br />
<span>I also feel the most comfortable in the <span class='inlinecode'>#terminal</span>. There are a few high-level tasks where it doesn&#39;t always make a lot of sense, like browsing most of the web, but for most of the things I do, I prefer the terminal. I think it&#39;s a good idea to have a terminal-based interface for most of your work: it makes it easier to automate things and to combine tools.</span><br />
<br />
<a class='textlink' href='https://lambdaland.org/posts/2025-05-13_real_programmers/'>lambdaland.org/posts/2025-05-13_real_programmers/</a><br />
<br />
<h3 style='display: inline' id='i-have-been-enjoying-lately-as-an-alternative-'>I have been enjoying lately as an alternative ...</h3><br />
<br />
<span>I have been enjoying opencode lately as an alternative TUI to the Claude Code CLI. It is a 100% open-source agentic coding tool which supports many models (including local ones, e.g. DeepSeek) and has some nice tweaks like side-by-side diffs; you can even use your favourite text $EDITOR for prompt editing! Highly recommended! <span class='inlinecode'>#llm</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span> <span class='inlinecode'>#agentic</span> <span class='inlinecode'>#ai</span></span><br />
<br />
<a class='textlink' href='https://opencode.ai'>opencode.ai</a><br />
<a class='textlink' href='https://models.dev'>models.dev</a><br />
<br />
<h3 style='display: inline' id='jonathan-s-reflection-of-10-years-of-'>Jonathan&#39;s reflection of 10 years of ...</h3><br />
<br />
<span>Jonathan&#39;s reflections on 10 years of programming!</span><br />
<br />
<a class='textlink' href='https://jonathan-frere.com/posts/10-years-of-programming/'>jonathan-frere.com/posts/10-years-of-programming/</a><br />
<br />
<h3 style='display: inline' id='some-neat-zero-copy-golang-tricks-here-'>Some neat zero-copy <span class='inlinecode'>#Golang</span> tricks here ...</h3><br />
<br />
<span>Some neat zero-copy <span class='inlinecode'>#Golang</span> tricks here</span><br />
<br />
<a class='textlink' href='https://goperf.dev/01-common-patterns/zero-copy/'>goperf.dev/01-common-patterns/zero-copy/</a><br />
<br />
<h3 style='display: inline' id='what-was-it-like-working-at-gitlab-a-scary-'>What was it like working at GitLab? A scary ...</h3><br />
<br />
<span>What was it like working at GitLab? A scary moment was the deletion of the gitlab.com database, though fortunately, there was a six-hour-old copy on the staging server. More people don&#39;t necessarily produce better results. Additionally, Ruby&#39;s metaprogramming isn&#39;t ideal for large projects. A burnout. And many more insights....</span><br />
<br />
<a class='textlink' href='https://yorickpeterse.com/articles/what-it-was-like-working-for-gitlab/'>yorickpeterse.com/articles/what-it-was-like-working-for-gitlab/</a><br />
<br />
<h3 style='display: inline' id='i-have-learned-a-lot-from-the-practical-ai-'>I have learned a lot from the Practical <span class='inlinecode'>#AI</span> ...</h3><br />
<br />
<span>I have learned a lot from the Practical <span class='inlinecode'>#AI</span> <span class='inlinecode'>#podcast</span>, especially from episode 312, which discusses the <span class='inlinecode'>#MCP</span> (model context protocol). Are there any MCP servers you plan to use or to build?</span><br />
<br />
<a class='textlink' href='https://practicalai.fm/312'>practicalai.fm/312</a><br />
<br />
<h2 style='display: inline' id='august-2025'>August 2025</h2><br />
<br />
<h3 style='display: inline' id='at-the-end-of-the-article-it-s-mentione-that-'>At the end of the article it&#39;s mentioned that ...</h3><br />
<br />
<span>At the end of the article it&#39;s mentioned that it&#39;s difficult to stay in the zone when AI does the coding for you. I think it&#39;s possible to stay in the zone, but only when you use AI surgically. <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://newsletter.pragmaticengineer.com/p/cursor-makes-developers-less-effective?publication_id=458709&amp;post_id=169160664&amp;isFreemail=true&amp;r=4ijqut&amp;triedRedirect=true'>newsletter.pragmaticengineer.com/p/cur..-..email=true&amp;r=4ijqut&amp;triedRedirect=true</a><br />
<br />
<h3 style='display: inline' id='great-blog-post-a-out-openbsdamsterdam-of-'>Great blog post about <span class='inlinecode'>#OpenBSDAmsterdam</span>, of ...</h3><br />
<br />
<span>Great blog post about <span class='inlinecode'>#OpenBSDAmsterdam</span>, of which I have been a customer for some years now. <span class='inlinecode'>#OpenBSD</span></span><br />
<br />
<a class='textlink' href='https://www.tumfatig.net/2025/cruising-a-vps-at-openbsd-amsterdam/'>www.tumfatig.net/2025/cruising-a-vps-at-openbsd-amsterdam/</a><br />
<br />
<h3 style='display: inline' id='interesting-llm-ai-slowdown-'>Interesting. <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#slowdown</span> ...</h3><br />
<br />
<span>Interesting. <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#slowdown</span></span><br />
<br />
<a class='textlink' href='https://m.slashdot.org/story/444304'>m.slashdot.org/story/444304</a><br />
<br />
<h3 style='display: inline' id='with-the-help-of-genai-i-could-generate-this-'>With the help of genai, I could generate this ...</h3><br />
<br />
<span>With the help of genai, I could generate this neat small showcase site of many of my small to medium-sized side projects. The project descriptions were generated by Claude Code CLI with Sonnet 4 based on the git repo contents. The page itself was assembled by <span class='inlinecode'>gitsyncer</span>, a tool I created (listed on the showcase page as well), and <span class='inlinecode'>gemtexter</span>, which did the HTML generation (another tool I wrote, also listed on the showcase page). The stats seem neat; over time a lot of stuff starts to pile up! In the age of AI (so far, only 8 projects were created AI-assisted), I think more projects will spin up faster (not just for me, but for everyone working on side projects). I have more (older) side projects archived on my local NAS, but they are not worth digging out...<br />
📦 Total Projects: 55<br />
📊 Total Commits: 10,379<br />
📈 Total Lines of Code: 252,969<br />
📄 Total Lines of Documentation: 24,167<br />
💻 Languages: Java (22.4%), Go (17.6%), HTML (14.0%), C++ (8.9%), C (7.3%), Perl (6.3%), Shell (6.3%), C/C++ (5.8%), XML (4.6%), Config (1.5%), Ruby (1.1%), HCL (1.1%), Make (0.7%), Python (0.6%), CSS (0.6%), JSON (0.3%), Raku (0.3%), Haskell (0.2%), YAML (0.2%), TOML (0.1%)<br />
📚 Documentation: Text (47.4%), Markdown (38.4%), LaTeX (14.2%)<br />
🤖 AI-Assisted Projects: 8 out of 55 (14.5% AI-assisted, 85.5% human-only)<br />
🚀 Release Status: 31 released, 24 experimental (56.4% with releases, 43.6% experimental)<br />
<span class='inlinecode'>#llm</span> <span class='inlinecode'>#genai</span> <span class='inlinecode'>#showcase</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/about/showcase.gmi'>foo.zone/about/showcase.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/about/showcase.html'>foo.zone/about/showcase.html</a><br />
<br />
<h3 style='display: inline' id='i-tinkered-a-bit-with-local-llms-for-coding-'>I tinkered a bit with local LLMs for coding: ...</h3><br />
<br />
<span>I tinkered a bit with local LLMs for coding: <span class='inlinecode'>#llm</span> <span class='inlinecode'>#local</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#ollama</span> <span class='inlinecode'>#qwen</span> <span class='inlinecode'>#deepseek</span> <span class='inlinecode'>#HelixEditor</span> <span class='inlinecode'>#LSP</span> <span class='inlinecode'>#codecompletion</span> <span class='inlinecode'>#aider</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-08-05-local-coding-llm-with-ollama.gmi'>foo.zone/gemfeed/2025-08-05-local-coding-llm-with-ollama.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-08-05-local-coding-llm-with-ollama.html'>foo.zone/gemfeed/2025-08-05-local-coding-llm-with-ollama.html</a><br />
<br />
<h3 style='display: inline' id='good-stuff-10-years-of-functional-options-and-'>Good stuff: 10 years of functional options and ...</h3><br />
<br />
<span>Good stuff: 10 years of functional options and key lessons learned along the way <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://www.bytesizego.com/blog/10-years-functional-options-golang'>www.bytesizego.com/blog/10-years-functional-options-golang</a><br />
<br />
<h3 style='display: inline' id='top-5-performance-boosters-golang-'>Top 5 performance boosters <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>Top 5 performance boosters <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://blog.devtrovert.com/p/go-performance-boosters-the-top-5'>blog.devtrovert.com/p/go-performance-boosters-the-top-5</a><br />
<br />
<h3 style='display: inline' id='this-person-found-the-balance-although-i-'>This person found the balance.. although I ...</h3><br />
<br />
<span>This person found the balance.. although I would use a different code editor: Why Open Source Maintainers Thrive in the LLM Era via @wallabagapp <span class='inlinecode'>#ai</span> <span class='inlinecode'>#llm</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://mikemcquaid.com/why-open-source-maintainers-thrive-in-the-llm-era/'>mikemcquaid.com/why-open-source-maintainers-thrive-in-the-llm-era/</a><br />
<br />
<h3 style='display: inline' id='let-s-rewrite-all-slow-in-assembly-surely-'>Let&#39;s rewrite all slow code in <span class='inlinecode'>#assembly</span>! Surely ...</h3><br />
<br />
<span>Let&#39;s rewrite all slow code in <span class='inlinecode'>#assembly</span>! Surely it&#39;s not just about the language but also about the architecture and the algorithms used. Still, impressive.</span><br />
<br />
<a class='textlink' href='https://x.com/FFmpeg/status/1945478331077374335'>x.com/FFmpeg/status/1945478331077374335</a><br />
<br />
<h3 style='display: inline' id='how-to-store-data-forever-storage-'>How to store data forever? <span class='inlinecode'>#storage</span> ...</h3><br />
<br />
<span>How to store data forever? <span class='inlinecode'>#storage</span> <span class='inlinecode'>#archiving</span></span><br />
<br />
<a class='textlink' href='https://drewdevault.com/2020/04/22/How-to-store-data-forever.html'>drewdevault.com/2020/04/22/How-to-store-data-forever.html</a><br />
<br />
<h3 style='display: inline' id='no-wonder-that-almost-everyone-doing-something-'>No wonder that almost everyone doing something ...</h3><br />
<br />
<span>No wonder that almost everyone doing something with AI is releasing their own agentic coding tool now, as it&#39;s so dead simple to write one. <span class='inlinecode'>#ai</span> <span class='inlinecode'>#llm</span> <span class='inlinecode'>#agenticcoding</span></span><br />
<br />
<a class='textlink' href='https://ampcode.com/how-to-build-an-agent'>ampcode.com/how-to-build-an-agent</a><br />
<br />
<h3 style='display: inline' id='another-drawback-of-running-load-tests-in-a-'>Another drawback of running load tests in a ...</h3><br />
<br />
<span>Another drawback of running load tests in a pre-prod environment is that it is not always possible to reproduce production load, especially in a complex environment. I personally prefer a combination of pre-prod load testing, production canaries, and gradual production deployment. What are your thoughts? <span class='inlinecode'>#sre</span> <span class='inlinecode'>#loadtesting</span> <span class='inlinecode'>#lt</span></span><br />
<br />
<a class='textlink' href='https://thefridaydeploy.substack.com/p/load-testing-prepare-for-the-growth'>thefridaydeploy.substack.com/p/load-testing-prepare-for-the-growth</a><br />
<br />
<h3 style='display: inline' id='interesting-read-learnings-from-two-years-of-'>Interesting read: Learnings from two years of ...</h3><br />
<br />
<span>Interesting read: Learnings from two years of using AI tools for software engineering <span class='inlinecode'>#ai</span> <span class='inlinecode'>#llm</span> <span class='inlinecode'>#genai</span></span><br />
<br />
<a class='textlink' href='https://newsletter.pragmaticengineer.com/p/two-years-of-using-ai'>newsletter.pragmaticengineer.com/p/two-years-of-using-ai</a><br />
<br />
<h3 style='display: inline' id='neat-little-story-a-school-girl-writing-her-'>Neat little story about a schoolgirl writing her ...</h3><br />
<br />
<span>Neat little story about a schoolgirl writing her first (and only) malware and infecting her school with it.</span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/that-time-i-wrote-malware/'>ntietz.com/blog/that-time-i-wrote-malware/</a><br />
<br />
<h3 style='display: inline' id='happy-that-i-am-not-yet-obsolete-llm-'>Happy that I am not yet obsolete! <span class='inlinecode'>#llm</span> ...</h3><br />
<br />
<span>Happy that I am not yet obsolete! <span class='inlinecode'>#llm</span> <span class='inlinecode'>#sre</span></span><br />
<br />
<a class='textlink' href='https://clickhouse.com/blog/llm-observability-challenge'>clickhouse.com/blog/llm-observability-challenge</a><br />
<br />
<h2 style='display: inline' id='september-2025'>September 2025</h2><br />
<br />
<h3 style='display: inline' id='loving-this-as-well-slackware-linux-'>Loving this as well: <span class='inlinecode'>#slackware</span> <span class='inlinecode'>#linux</span> ...</h3><br />
<br />
<span>Loving this as well: <span class='inlinecode'>#slackware</span> <span class='inlinecode'>#linux</span></span><br />
<br />
<a class='textlink' href='https://www.osnews.com/story/142145/what-makes-slackware-different/'>www.osnews.com/story/142145/what-makes-slackware-different/</a><br />
<br />
<h3 style='display: inline' id='some-fun-random-weird-things-part-iii-blog-'>Some <span class='inlinecode'>#fun</span>: Random Weird Things Part III blog ...</h3><br />
<br />
<span>Some <span class='inlinecode'>#fun</span>: Random Weird Things Part III blog post</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-08-15-random-weird-things-iii.gmi'>foo.zone/gemfeed/2025-08-15-random-weird-things-iii.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-08-15-random-weird-things-iii.html'>foo.zone/gemfeed/2025-08-15-random-weird-things-iii.html</a><br />
<br />
<h3 style='display: inline' id='yes-write-more-useless-software-i-agree-that-'>Yes, write more useless software. I agree that ...</h3><br />
<br />
<span>Yes, write more useless software. I agree that play has a vital role in learning and experimentation. Also, programming is a lot of fun this way. I&#39;ve learned programming mostly by writing useless software or almost useful tools for myself, but I can now apply all that knowledge to real work as well. <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/write-more-useless-software/'>ntietz.com/blog/write-more-useless-software/</a><br />
<br />
<h3 style='display: inline' id='i-learned-a-lot-from-this-openbsd-relayd-'>I learned a lot from this <span class='inlinecode'>#OpenBSD</span> <span class='inlinecode'>#relayd</span> ...</h3><br />
<br />
<span>I learned a lot from this <span class='inlinecode'>#OpenBSD</span> <span class='inlinecode'>#relayd</span> talk, and I already put the information into production! I know the excellent OpenBSD manual pages document everything, but it is a bit different when you see it presented in a talk.</span><br />
<br />
<a class='textlink' href='https://www.youtube.com/watch?v=yW8QSZyEs6E'>www.youtube.com/watch?v=yW8QSZyEs6E</a><br />
<br />
<h3 style='display: inline' id='six-weeks-of-claude-code'>Six weeks of Claude Code</h3><br />
<br />
<a class='textlink' href='https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/'>blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/</a><br />
<br />
<h3 style='display: inline' id='it-s-good-that-there-is-now-a-truly-open-source-'>It&#39;s good that there is now a truly open-source ...</h3><br />
<br />
<span>It&#39;s good that there is now a truly open-source LLM model; I&#39;m just wondering how it will perform. The difference compared to other open models is that the others only provide open weights, but you can&#39;t reproduce the training! That issue would be solved with this Swiss model. I will definitely have a look! <span class='inlinecode'>#llm</span> <span class='inlinecode'>#opensource</span> <span class='inlinecode'>#privacy</span></span><br />
<br />
<a class='textlink' href='https://m.slashdot.org/story/446310'>m.slashdot.org/story/446310</a><br />
<br />
<h3 style='display: inline' id='have-to-try-this-at-some-point-'>Have to try this at some point ...</h3><br />
<br />
<span>Have to try this at some point: troubleshooting <span class='inlinecode'>#k8s</span> with the help of <span class='inlinecode'>#genai</span></span><br />
<br />
<a class='textlink' href='https://blog.palark.com/k8sgpt-ai-troubleshooting-kubernetes/'>blog.palark.com/k8sgpt-ai-troubleshooting-kubernetes/</a><br />
<br />
<h3 style='display: inline' id='i-could-not-agree-more-for-me-a-personal-'>I could not agree more. For me, a personal ...</h3><br />
<br />
<span>I could not agree more. For me, a personal (tech-oriented) website is not a business contact card, but a playground to experiment with and learn about technologies. The Value of a Personal Site <span class='inlinecode'>#website</span> <span class='inlinecode'>#personal</span> <span class='inlinecode'>#tech</span></span><br />
<br />
<a class='textlink' href='https://atthis.link/blog/2021/personalsite.html'>atthis.link/blog/2021/personalsite.html</a><br />
<br />
<h3 style='display: inline' id='the-true-enterprise-developer-can-write-java-in-'>The true enterprise developer can write Java in ...</h3><br />
<br />
<span>The true enterprise developer can write Java in any language. <span class='inlinecode'>#java</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<h3 style='display: inline' id='fx-is-a-neat-little-tool-for-viewing-json-'><span class='inlinecode'>#fx</span> is a neat little tool for viewing JSON ...</h3><br />
<br />
<span><span class='inlinecode'>#fx</span> is a neat little tool for viewing JSON files!</span><br />
<br />
<a class='textlink' href='https://fx.wtf'>fx.wtf</a><br />
<br />
<h3 style='display: inline' id='i-wish-i-had-as-much-time-as-this-guy-he-'>I wish I had as much time as this guy. He ...</h3><br />
<br />
<span>I wish I had as much time as this guy. He writes entire operating systems, including a Unix clone called "Bunnix" written in a month. He is also the inventor of the Hare programming language (if I am not mistaken). Now, he is also creating a new shell, primarily for the other operating systems and kernels he is working on. <span class='inlinecode'>#shell</span> <span class='inlinecode'>#unix</span> <span class='inlinecode'>#programming</span> <span class='inlinecode'>#operatingsystem</span> <span class='inlinecode'>#bunnix</span> <span class='inlinecode'>#hare</span></span><br />
<br />
<a class='textlink' href='https://drewdevault.com/2023/04/18/2023-04-18-A-new-shell-for-Unix.html'>drewdevault.com/2023/04/18/2023-04-18-A-new-shell-for-Unix.html</a><br />
<br />
<h3 style='display: inline' id='what-exactly-was-the-point-of--xvar--'>What exactly was the point of [ “x$var” = ...</h3><br />
<br />
<span>What exactly was the point of [ “x$var” = “xval” ]? <span class='inlinecode'>#bash</span> <span class='inlinecode'>#shell</span> <span class='inlinecode'>#posix</span> <span class='inlinecode'>#sh</span> <span class='inlinecode'>#history</span></span><br />
<br />
<a class='textlink' href='https://www.vidarholen.net/contents/blog/?p=1035'>www.vidarholen.net/contents/blog/?p=1035</a><br />
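The historic rationale can be shown in a couple of lines; a minimal sketch, assuming a POSIX sh:

```shell
#!/bin/sh
# Historic test(1)/[ implementations parsed their operands like options,
# so a value such as '-f' could be mistaken for a file-test flag.
# Prefixing both sides with a literal 'x' forced a plain string comparison:
var='-f'
[ "x$var" = "x-f" ] && echo "old-school comparison matches"

# Modern POSIX test(1) decides how to parse by argument count,
# so plain quoting is already safe:
[ "$var" = "-f" ] && echo "plain quoted comparison matches too"
```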
<br />
<h3 style='display: inline' id='neat-zfs-feature-here-freebsd-which-i-'>Neat <span class='inlinecode'>#ZFS</span> feature (here <span class='inlinecode'>#FreeBSD</span>) which I ...</h3><br />
<br />
<span>Neat <span class='inlinecode'>#ZFS</span> feature (here on <span class='inlinecode'>#FreeBSD</span>) which I didn&#39;t know of before: Pool checkpoints, which are different from snapshots of individual datasets:</span><br />
<br />
<a class='textlink' href='https://it-notes.dragas.net/2024/07/01/enhancing-freebsd-stability-with-zfs-pool-checkpoints/'>it-notes.dragas.net/2024/07/01/enhanci..-..d-stability-with-zfs-pool-checkpoints/</a><br />
<br />
<h3 style='display: inline' id='longer-hours-help-only-short-term-about-40-'>Longer hours help only short term. About 40 ...</h3><br />
<br />
<span>Longer hours only help in the short term: About 40 hours <span class='inlinecode'>#productivity</span></span><br />
<br />
<a class='textlink' href='https://thesquareplanet.com/blog/about-40-hours/'>thesquareplanet.com/blog/about-40-hours/</a><br />
<br />
<h3 style='display: inline' id='you-could-also-use-bpf-instead-of-strace-'>You could also use <span class='inlinecode'>#bpf</span> instead of <span class='inlinecode'>#strace</span>, ...</h3><br />
<br />
<span>You could also use <span class='inlinecode'>#bpf</span> instead of <span class='inlinecode'>#strace</span>, although modern strace can use bpf if told to: How to use the new Docker Seccomp profiles</span><br />
<br />
<a class='textlink' href='https://blog.jessfraz.com/post/how-to-use-new-docker-seccomp-profiles/'>blog.jessfraz.com/post/how-to-use-new-docker-seccomp-profiles/</a><br />
<br />
<h3 style='display: inline' id='some-great-things-are-approaching-bhyve-on-'>Some great things are approaching <span class='inlinecode'>#bhyve</span> on ...</h3><br />
<br />
<span>Some great things are approaching for <span class='inlinecode'>#bhyve</span> on <span class='inlinecode'>#FreeBSD</span>: VM Live Migration – Quo vadis? <span class='inlinecode'>#freebsd</span> <span class='inlinecode'>#virtualization</span> <span class='inlinecode'>#bhyve</span></span><br />
<br />
<a class='textlink' href='https://gyptazy.com/bhyve-on-freebsd-and-vm-live-migration-quo-vadis/'>gyptazy.com/bhyve-on-freebsd-and-vm-live-migration-quo-vadis/</a><br />
<br />
<h3 style='display: inline' id='another-synchronization-tool-part-of-the-'>Another synchronization tool part of the ...</h3><br />
<br />
<span>Another synchronization tool that is part of the extended <span class='inlinecode'>#golang</span> standard library (golang.org/x/sync): singleflight! It is used to avoid overloading external resources (like DBs) with N concurrent identical requests. Useful!</span><br />
<br />
<a class='textlink' href='https://victoriametrics.com/blog/go-singleflight/index.html'>victoriametrics.com/blog/go-singleflight/index.html</a><br />
<br />
<h3 style='display: inline' id='too-many-open-files-linux-'>Too many open files <span class='inlinecode'>#linux</span> ...</h3><br />
<br />
<span>Too many open files <span class='inlinecode'>#linux</span></span><br />
<br />
<a class='textlink' href='https://mattrighetti.com/2025/06/04/too-many-files-open.html'>mattrighetti.com/2025/06/04/too-many-files-open.html</a><br />
<br />
<h3 style='display: inline' id='just-posted-part-4-of-my-bash-golf-'>Just posted Part 4 of my <span class='inlinecode'>#Bash</span> <span class='inlinecode'>#Golf</span> ...</h3><br />
<br />
<span>Just posted Part 4 of my <span class='inlinecode'>#Bash</span> <span class='inlinecode'>#Golf</span> series:</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-09-14-bash-golf-part-4.gmi'>foo.zone/gemfeed/2025-09-14-bash-golf-part-4.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-09-14-bash-golf-part-4.html'>foo.zone/gemfeed/2025-09-14-bash-golf-part-4.html</a><br />
<br />
<h3 style='display: inline' id='perl-is-like-a-swiss-army-knife-as-one-of-'><span class='inlinecode'>#Perl</span> is like a swiss army knife, as one of ...</h3><br />
<br />
<span><span class='inlinecode'>#Perl</span> is like a swiss army knife, as one of the comments states:</span><br />
<br />
<a class='textlink' href='https://developers.slashdot.org/story/25/09/14/0134239/is-perl-the-worlds-10th-most-popular-programming-language'>developers.slashdot.org/story/25/09/14..-..10th-most-popular-programming-language</a><br />
<br />
<h3 style='display: inline' id='personally-mainly-working-with-colorless-'>Personally, mainly working with colorless ...</h3><br />
<br />
<span>Personally, mainly working with colorless languages like <span class='inlinecode'>#ruby</span> and <span class='inlinecode'>#golang</span>, I now slowly understand the pain people have with Rust or JS. So it wasn&#39;t just me when I got confused writing that Grafana data source plugin in TypeScript...</span><br />
<br />
<a class='textlink' href='https://jpcamara.com/2024/07/15/ruby-methods-are.html'>jpcamara.com/2024/07/15/ruby-methods-are.html</a><br />
<br />
<h3 style='display: inline' id='how-do-gpus-work-usually-people-only-know-'>How do GPUs work? Usually, people only know ...</h3><br />
<br />
<span>How do GPUs work? Usually, people only know about CPUs... I got the gist of it, at least. <span class='inlinecode'>#gpu</span> <span class='inlinecode'>#cpu</span></span><br />
<br />
<a class='textlink' href='https://blog.codingconfessions.com/p/gpu-computing'>blog.codingconfessions.com/p/gpu-computing</a><br />
<br />
<h3 style='display: inline' id='for-unattended-upgrades-you-must-have-a-good-'>For unattended upgrades you must have a good ...</h3><br />
<br />
<span>For unattended upgrades you must have a good testing (or canary) strategy. <span class='inlinecode'>#sre</span> <span class='inlinecode'>#reliability</span> <span class='inlinecode'>#downtime</span> <span class='inlinecode'>#ubuntu</span> <span class='inlinecode'>#systemd</span> <span class='inlinecode'>#kubernetes</span></span><br />
<br />
<a class='textlink' href='https://newsletter.pragmaticengineer.com/p/why-reliability-is-hard-at-scale'>newsletter.pragmaticengineer.com/p/why-reliability-is-hard-at-scale</a><br />
<br />
<h3 style='display: inline' id='surely-in-the-age-of-ai-and-llm-people-'>Surely, in the age of <span class='inlinecode'>#AI</span> and <span class='inlinecode'>#LLM</span>, people ...</h3><br />
<br />
<span>Surely, in the age of <span class='inlinecode'>#AI</span> and <span class='inlinecode'>#LLM</span>, people are not writing as much code manually as before, but I don&#39;t think skills like using <span class='inlinecode'>#Vim</span> (or <span class='inlinecode'>#HelixEditor</span>) are obsolete just yet. You still need to understand what&#39;s happening under the hood, and being comfortable with these tools can make you much more efficient when you do need to edit or review code.</span><br />
<br />
<a class='textlink' href='https://www.youtube.com/watch?v=tW0BSgzr2AM'>www.youtube.com/watch?v=tW0BSgzr2AM</a><br />
<br />
<h3 style='display: inline' id='on-ai-changes-everything-'>On <span class='inlinecode'>#AI</span> changes everything... ...</h3><br />
<br />
<span>On <span class='inlinecode'>#AI</span> changes everything...</span><br />
<br />
<a class='textlink' href='https://lucumr.pocoo.org/2025/6/4/changes/'>lucumr.pocoo.org/2025/6/4/changes/</a><br />
<br />
<h3 style='display: inline' id='maps-in-go-under-the-hood-golang-'>Maps in Go under the hood <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>Maps in Go under the hood <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://victoriametrics.com/blog/go-map/'>victoriametrics.com/blog/go-map/</a><br />
<br />
<h3 style='display: inline' id='a-project-that-looks-complex-might-just-be-'>"A project that looks complex might just be ...</h3><br />
<br />
<span>"A project that looks complex might just be unfamiliar" - Quote from the Applied Go Weekly Newsletter</span><br />
<br />
<h3 style='display: inline' id='i-must-admit-that-partly-i-see-myself-there-'>I must admit that partly I see myself there ...</h3><br />
<br />
<span>I must admit that I partly see myself there (sometimes). But it is fun :-) <span class='inlinecode'>#tools</span> <span class='inlinecode'>#happy</span></span><br />
<br />
<a class='textlink' href='https://borretti.me/article/you-can-choose-tools-that-make-you-happy'>borretti.me/article/you-can-choose-tools-that-make-you-happy</a><br />
<br />
<h3 style='display: inline' id='makes-me-think-of-good-old-times-where-i-'>Makes me think of good old times, where I ...</h3><br />
<br />
<span>Makes me think of the good old times, when I shipped 5 times as fast: What happens when code reviews aren&#39;t mandatory? via @wallabagapp <span class='inlinecode'>#productivity</span> <span class='inlinecode'>#code</span></span><br />
<br />
<a class='textlink' href='https://testdouble.com/insights/when-code-reviews-arent-mandatory'>testdouble.com/insights/when-code-reviews-arent-mandatory</a><br />
<br />
<h3 style='display: inline' id='neat-little-blog-post-showcasing-various-'>Neat little blog post, showcasing various ...</h3><br />
<br />
<span>Neat little blog post, showcasing various methods used for generic programming before the introduction of generics. Only reflection wasn&#39;t listed. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://bitfieldconsulting.com/posts/generics'>bitfieldconsulting.com/posts/generics</a><br />
<br />
<h3 style='display: inline' id='share-didn-t-know-that-on-macos-besides-of-'>Didn&#39;t know that on macOS, besides ...</h3><br />
<br />
<span>Didn&#39;t know that on macOS, besides .so shared object files (which can be dynamically loaded as well), there is also macOS&#39; native .dylib format, which serves a similar purpose! <span class='inlinecode'>#macos</span> <span class='inlinecode'>#dylib</span> <span class='inlinecode'>#so</span></span><br />
<br />
<a class='textlink' href='https://cpu.land/becoming-an-elf-lord'>cpu.land/becoming-an-elf-lord</a><br />
<br />
<h3 style='display: inline' id='i-think-this-is-the-way-use-llms-for-code-you-'>I think this is the way: use LLMs for code you ...</h3><br />
<br />
<span>I think this is the way: use LLMs for code you don&#39;t care much about and write code manually for what matters most to you. This way, most boring and boilerplate stuff can be auto-generated.</span><br />
<br />
<a class='textlink' href='https://registerspill.thorstenball.com/p/surely-not-all-codes-worth-it'>registerspill.thorstenball.com/p/surely-not-all-codes-worth-it</a><br />
<br />
<h3 style='display: inline' id='always-enable-keepalive-i-d-say-most-of-the-'>Always enable keepalive? I&#39;d say most of the ...</h3><br />
<br />
<span>Always enable keepalive? I&#39;d say most of the time. I&#39;ve seen cases where connections weren&#39;t reused and additional new ones were established instead, causing the servers to run out of worker threads. <span class='inlinecode'>#sre</span> Always. Enable. Keepalives.</span><br />
<br />
<a class='textlink' href='https://www.honeycomb.io/blog/always-enable-keepalives'>www.honeycomb.io/blog/always-enable-keepalives</a><br />
<br />
<h3 style='display: inline' id='i-just-finished-reading-chaos-engineering-by-'>I just finished reading "Chaos Engineering" by ...</h3><br />
<br />
<span>I just finished reading "Chaos Engineering" by Casey Rosenthal, an absolute must-read for anyone passionate about building resilient systems! Chaos Engineering is not about breaking things randomly; it&#39;s a disciplined approach to uncovering weaknesses before they become outages. SREs, this book is packed with practical insights and real-world strategies to strengthen your systems against failure. Highly recommended! <span class='inlinecode'>#ChaosEngineering</span> <span class='inlinecode'>#Resilience</span></span><br />
<br />
<a class='textlink' href='https://www.oreilly.com/library/view/chaos-engineering/9781492043850/'>www.oreilly.com/library/view/chaos-engineering/9781492043850/</a><br />
<br />
<h3 style='display: inline' id='fx-is-a-neat-and-tidy-command-line-tool-for-'>fx is a neat and tidy command-line tool for ...</h3><br />
<br />
<span>fx is a neat and tidy command-line tool for interactively viewing JSON files! What I like about it is that it is not too complex (open the help with ? and it is only about one page long) but still very useful. <span class='inlinecode'>#json</span> <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://github.com/antonmedv/fx'>github.com/antonmedv/fx</a><br />
<br />
<h3 style='display: inline' id='some-nice-golang-tricks-there-'>Some nice <span class='inlinecode'>#Golang</span> tricks there ...</h3><br />
<br />
<span>Some nice <span class='inlinecode'>#Golang</span> tricks there</span><br />
<br />
<a class='textlink' href='https://blog.devtrovert.com/p/12-personal-go-tricks-that-transformed'>blog.devtrovert.com/p/12-personal-go-tricks-that-transformed</a><br />
<br />
<h2 style='display: inline' id='october-2025'>October 2025</h2><br />
<br />
<h3 style='display: inline' id='word-what-are-we-losing-with-ai-llm-ai-'>Word! What Are We Losing With AI? <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> ...</h3><br />
<br />
<span>Word! What Are We Losing With AI? <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span></span><br />
<br />
<a class='textlink' href='https://josem.co/what-are-we-losing-with-ai/'>josem.co/what-are-we-losing-with-ai/</a><br />
<br />
<h3 style='display: inline' id='it-s-not-yet-time-for-the-friday-fun-but-'>It&#39;s not yet time for the Friday <span class='inlinecode'>#fun</span>, but: ...</h3><br />
<br />
<span>It&#39;s not yet time for the Friday <span class='inlinecode'>#fun</span>, but: OpenOffice does not print on Tuesdays ― Andreas Zwinkau :-)</span><br />
<br />
<a class='textlink' href='https://beza1e1.tuxen.de/lore/print_on_tuesday.html'>beza1e1.tuxen.de/lore/print_on_tuesday.html</a><br />
<br />
<h3 style='display: inline' id='finally-i-retired-my-awsecs-setup-for-my-'>Finally, I retired my AWS/ECS setup for my ...</h3><br />
<br />
<span>Finally, I retired my AWS/ECS setup for my self-hosted apps, as it was too expensive to operate: I had to pay $20 monthly just to run pods for only a day or so each month, so I rarely used them. Now, everything has been migrated to my FreeBSD-powered Kubernetes home cluster! Part 7 of this blog series covers the initial pod deployments. <span class='inlinecode'>#freebsd</span> <span class='inlinecode'>#k8s</span> <span class='inlinecode'>#selfhosting</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.gmi'>foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html</a><br />
<br />
<h3 style='display: inline' id='a-great-blog-post-about-my-favourite-text-'>A great blog post about my favourite text ...</h3><br />
<br />
<span>A great blog post about my favourite text editor: why even Helix? <span class='inlinecode'>#HelixEditor</span> Now I am considering forking it myself as well :-)</span><br />
<br />
<a class='textlink' href='https://axlefublr.github.io/why-even-helix/'>axlefublr.github.io/why-even-helix/</a><br />
<br />
<h3 style='display: inline' id='one-of-the-more-confusing-parts-in-go-nil-'>One of the more confusing parts in Go, nil ...</h3><br />
<br />
<span>One of the more confusing parts in Go, nil values vs nil errors: <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://unexpected-go.com/nil-errors-that-are-non-nil-errors.html'>unexpected-go.com/nil-errors-that-are-non-nil-errors.html</a><br />
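This trap bites almost everyone once. A minimal self-contained sketch (type and function names are my own, not from the linked article) of how a nil pointer wrapped in an error interface compares as non-nil:

```go
package main

import "fmt"

// MyErr is a hypothetical custom error type.
type MyErr struct{}

func (e *MyErr) Error() string { return "boom" }

// fail returns a typed nil pointer as an error interface value.
// The interface now carries a type (*MyErr) with a nil value,
// which makes the interface itself non-nil.
func fail() error {
	var e *MyErr // nil pointer
	return e
}

func main() {
	err := fail()
	fmt.Println(err == nil) // prints "false", not "true"!
}
```

The usual fix is to return a literal `nil` (or declare the variable as `error`) instead of a typed nil pointer.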
<br />
<h3 style='display: inline' id='strong-engineers-are-pragmatic-work-fast-have-'>Strong engineers are pragmatic, work fast, have ...</h3><br />
<br />
<span>Strong engineers are pragmatic, work fast, have technical ability, don&#39;t need to be technical geniuses, and believe in their ability to solve almost any problem. <span class='inlinecode'>#productivity</span></span><br />
<br />
<a class='textlink' href='https://www.seangoedecke.com/what-makes-strong-engineers-strong/'>www.seangoedecke.com/what-makes-strong-engineers-strong/</a><br />
<br />
<h3 style='display: inline' id='i-am-currently-binge-listening-to-the-google-'>I am currently binge-listening to the Google ...</h3><br />
<br />
<span>I am currently binge-listening to the Google <span class='inlinecode'>#SRE</span> ProdCast. It&#39;s really great to learn about the stories of individual SREs and their journeys. It is not just about SREs at Google; there are also external guests.</span><br />
<br />
<a class='textlink' href='https://sre.google/prodcast/'>sre.google/prodcast/</a><br />
<br />
<h3 style='display: inline' id='looks-like-a-neat-library-for-writing-'>Looks like a neat library for writing ...</h3><br />
<br />
<span>Looks like a neat library for writing script-a-like programs in <span class='inlinecode'>#Golang</span>. But honestly, why not directly use a scripting language like <span class='inlinecode'>#RakuLang</span> or <span class='inlinecode'>#Ruby</span></span><br />
<br />
<a class='textlink' href='https://github.com/bitfield/script'>github.com/bitfield/script</a><br />
<br />
<h3 style='display: inline' id='where-gen-ai-shines-is-the-generation-and-'>Where Gen AI shines is the generation and ...</h3><br />
<br />
<span>Where Gen AI shines is the generation and management of YAML files... e.g. Kubernetes manifests. Who likes to write YAML files by hand? <span class='inlinecode'>#genai</span> <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#yaml</span> <span class='inlinecode'>#kubernetes</span> <span class='inlinecode'>#k8s</span></span><br />
<br />
<h3 style='display: inline' id='at-work-everybody-is-replacable-some-with-a-'>At work, everybody is replaceable. Some with a ...</h3><br />
<br />
<span>At work, everybody is replaceable. Some with a hiccup, others with none. There will always be someone to step up after you leave.</span><br />
<br />
<a class='textlink' href='https://adamstacoviak.com/im-a-cog/'>adamstacoviak.com/im-a-cog/</a><br />
<br />
<h3 style='display: inline' id='i-actually-would-switch-back-to-freebsd-as-'>I actually would switch back to <span class='inlinecode'>#FreeBSD</span> as ...</h3><br />
<br />
<span>I actually would switch back to <span class='inlinecode'>#FreeBSD</span> as my main operating system for personal use on my laptop. FreeBSD used to be my daily driver a couple of years ago when I still used "normal" PCs.</span><br />
<br />
<a class='textlink' href='https://www.osnews.com/story/140841/freebsd-to-invest-in-laptop-support/'>www.osnews.com/story/140841/freebsd-to-invest-in-laptop-support/</a><br />
<br />
<h3 style='display: inline' id='amazing-print-is-amazing-'>Amazing Print is amazing ...</h3><br />
<br />
<span>Amazing Print is amazing</span><br />
<br />
<a class='textlink' href='https://github.com/amazing-print/amazing_print'>github.com/amazing-print/amazing_print</a><br />
<br />
<h3 style='display: inline' id='always-worth-a-reminde-what-are-bloom-filters-'>Always worth a reminder: what are bloom filters ...</h3><br />
<br />
<span>Always worth a reminder: what are bloom filters and how do they work? <span class='inlinecode'>#bloom</span> <span class='inlinecode'>#bloomfilter</span> <span class='inlinecode'>#datastructure</span></span><br />
<br />
<a class='textlink' href='https://micahkepe.com/blog/bloom-filters/'>micahkepe.com/blog/bloom-filters/</a><br />
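As a refresher, the core idea fits in a few lines of Go. This is my own minimal sketch (the hash scheme and sizes are arbitrary illustrative choices, not from the linked article): Add sets k bits derived from k hashes, and a lookup may return false positives but never false negatives.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Bloom is a toy bloom filter over a boolean slice.
type Bloom struct {
	bits []bool
	k    int // number of hash functions
}

func NewBloom(m, k int) *Bloom { return &Bloom{bits: make([]bool, m), k: k} }

// idx derives the i-th bit position for s by salting an FNV-1a hash.
func (b *Bloom) idx(s string, i int) int {
	h := fnv.New64a()
	fmt.Fprintf(h, "%d:%s", i, s)
	return int(h.Sum64() % uint64(len(b.bits)))
}

func (b *Bloom) Add(s string) {
	for i := 0; i < b.k; i++ {
		b.bits[b.idx(s, i)] = true
	}
}

// MayContain reports false only when s is definitely absent.
func (b *Bloom) MayContain(s string) bool {
	for i := 0; i < b.k; i++ {
		if !b.bits[b.idx(s, i)] {
			return false
		}
	}
	return true
}

func main() {
	bf := NewBloom(1024, 3)
	fmt.Println(bf.MayContain("hello")) // false: nothing added yet
	bf.Add("hello")
	fmt.Println(bf.MayContain("hello")) // true
}
```

Real implementations use a packed bitset and derive m and k from the expected element count and target false-positive rate.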
<br />
<h3 style='display: inline' id='some-ruby-book-notes-of-mine-'>Some <span class='inlinecode'>#Ruby</span> book notes of mine: ...</h3><br />
<br />
<span>Some <span class='inlinecode'>#Ruby</span> book notes of mine:</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-10-11-key-takeaways-from-the-well-grounded-rubyist.gmi'>foo.zone/gemfeed/2025-10-11-key-takeaways-from-the-well-grounded-rubyist.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-10-11-key-takeaways-from-the-well-grounded-rubyist.html'>foo.zone/gemfeed/2025-10-11-key-takeaways-from-the-well-grounded-rubyist.html</a><br />
<br />
<h3 style='display: inline' id='sad-story-work-scrum-jira-'>Sad story. <span class='inlinecode'>#work</span> <span class='inlinecode'>#scrum</span> <span class='inlinecode'>#jira</span> ...</h3><br />
<br />
<span>Sad story. <span class='inlinecode'>#work</span> <span class='inlinecode'>#scrum</span> <span class='inlinecode'>#jira</span></span><br />
<br />
<a class='textlink' href='https://lambdaland.org/posts/2023-02-21_metric_worship/'>lambdaland.org/posts/2023-02-21_metric_worship/</a><br />
<br />
<h3 style='display: inline' id='one-of-my-favorite-books-some-thoughts-on-'>One of my favorite books: "Some Thoughts on ...</h3><br />
<br />
<span>One of my favorite books: "Some Thoughts on Deep Work"</span><br />
<br />
<a class='textlink' href='https://atthis.link/blog/2020/deepwork.html'>atthis.link/blog/2020/deepwork.html</a><br />
<br />
<h3 style='display: inline' id='ltex-ls-is-great-for-integrating-'>ltex-ls is great for integrating ...</h3><br />
<br />
<span>ltex-ls is great for integrating <span class='inlinecode'>#LanguageTool</span> prose checking via <span class='inlinecode'>#LSP</span> into your <span class='inlinecode'>#HelixEditor</span>! ... There is also vale-ls, which I have enabled as well. Just download ltex-ls and configure it as an LSP for your .txt and .md docs... that&#39;s it!</span><br />
<br />
<a class='textlink' href='https://valentjn.github.io/ltex/'>valentjn.github.io/ltex/</a><br />
<br />
<h3 style='display: inline' id='supernote-tool-is-awesome-as-i-can-now-'>supernote-tool is awesome, as I can now ...</h3><br />
<br />
<span>supernote-tool is awesome, as I can now download my Supernote notes to my <span class='inlinecode'>#Linux</span> desktop and convert them into PDFs. This lets me use the Supernote Nomad device completely offline!</span><br />
<br />
<h3 style='display: inline' id='fun-story---the-case-of-the-500-mile-email-'>Fun story! :-) The case of the 500-mile email ...</h3><br />
<br />
<span>Fun story! :-) The case of the 500-mile email ― Andreas Zwinkau via @wallabagapp <span class='inlinecode'>#unix</span> <span class='inlinecode'>#sunos</span> <span class='inlinecode'>#sendmail</span></span><br />
<br />
<a class='textlink' href='https://beza1e1.tuxen.de/lore/500mile_email.html'>beza1e1.tuxen.de/lore/500mile_email.html</a><br />
<br />
<h3 style='display: inline' id='operating-myself-some-software-over-10-years-of-'>Having operated software that is more than 10 ...</h3><br />
<br />
<span>Having operated software that is more than 10 years old for over a decade now, this podcast really resonated with me: <span class='inlinecode'>#podcast</span> <span class='inlinecode'>#software</span> <span class='inlinecode'>#maintainability</span> <span class='inlinecode'>#maintenance</span></span><br />
<br />
<a class='textlink' href='https://changelog.com/podcast/627'>changelog.com/podcast/627</a><br />
<br />
<h3 style='display: inline' id='git-worktrees-are-awesome-'><span class='inlinecode'>#git</span> worktrees are awesome! ...</h3><br />
<br />
<span><span class='inlinecode'>#git</span> worktrees are awesome!</span><br />
<br />
<h3 style='display: inline' id='llms-for-anomaly-detection-while-some-'>LLMs for anomaly detection? "While some ...</h3><br />
<br />
<span>LLMs for anomaly detection? "While some ML-powered monitoring features have their place, good old-fashioned standard statistics remain hard to beat" Lessons from the pre-LLM AI in Observability: Anomaly Detection and AI-Ops vs. P99 | <span class='inlinecode'>#llm</span> <span class='inlinecode'>#monitoring</span></span><br />
<br />
<a class='textlink' href='https://quesma.com/blog-detail/aiops-observability'>quesma.com/blog-detail/aiops-observability</a><br />
<br />
<h3 style='display: inline' id='after-having-heavily-vibe-coded-personal-pet-'>After having heavily vibe-coded (personal pet ...</h3><br />
<br />
<span>After having heavily vibe-coded (personal pet projects) for 2 months over the summer, I&#39;ve come back to more structured and intentional AI coding practices. Surely, it was a great learning experiment: <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#risk</span> <span class='inlinecode'>#code</span> <span class='inlinecode'>#sre</span> <span class='inlinecode'>#development</span> <span class='inlinecode'>#genai</span></span><br />
<br />
<a class='textlink' href='https://www.okoone.com/spark/technology-innovation/how-ai-generated-code-is-quietly-increasing-system-risk/'>www.okoone.com/spark/technology-innova..-..ode-is-quietly-increasing-system-risk/</a><br />
<br />
<h3 style='display: inline' id='slowly-one-after-another-i-am-switching-all-'>Slowly, one after another, I am switching all ...</h3><br />
<br />
<span>Slowly, one after another, I am switching all my Go projects to Mage. Having a Makefile or Taskfile in a native Go format is so much better.</span><br />
<br />
<a class='textlink' href='https://magefile.org/'>magefile.org/</a><br />
<br />
<h3 style='display: inline' id='some-neat-slice-tricks-for-go-golang-'>Some neat slice tricks for Go: <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>Some neat slice tricks for Go: <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://blog.devtrovert.com/p/12-slice-tricks-to-enhance-your-go'>blog.devtrovert.com/p/12-slice-tricks-to-enhance-your-go</a><br />
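One trick of this kind, as my own illustration (not necessarily one of the twelve from the post): filtering a slice in place by reusing its backing array, which avoids a second allocation.

```go
package main

import "fmt"

func main() {
	// In-place filtering: kept shares s's backing array via s[:0],
	// so survivors are written over the elements already visited.
	s := []int{1, 2, 3, 4, 5, 6}
	kept := s[:0]
	for _, v := range s {
		if v%2 == 0 {
			kept = append(kept, v)
		}
	}
	fmt.Println(kept) // [2 4 6]
}
```

Note that this clobbers the original contents of s, so it is only safe when the input slice is no longer needed afterwards.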
<br />
<h3 style='display: inline' id='i-spent-way-too-much-time-on-this-site-it-s-'>I spent way too much time on this site. It&#39;s ...</h3><br />
<br />
<span>I spent way too much time on this site. It&#39;s full of tools for the <span class='inlinecode'>#terminal</span>! Terminal Trove - The $HOME of all things in the terminal. <span class='inlinecode'>#linux</span> <span class='inlinecode'>#bsd</span> <span class='inlinecode'>#unix</span> <span class='inlinecode'>#terminal</span> <span class='inlinecode'>#cli</span> <span class='inlinecode'>#tools</span></span><br />
<br />
<a class='textlink' href='https://terminaltrove.com/'>terminaltrove.com/</a><br />
<br />
<h3 style='display: inline' id='i-share-similar-experiences-with-rust-but-i-'>I share similar experiences with <span class='inlinecode'>#rust</span>, but I ...</h3><br />
<br />
<span>I share similar experiences with <span class='inlinecode'>#rust</span>, but I am sure one just needs a bit more time to feel productive in it. Trying Rust out once is not enough to become fluent in it.</span><br />
<br />
<a class='textlink' href='https://m.slashdot.org/story/446164'>m.slashdot.org/story/446164</a><br />
<br />
<h3 style='display: inline' id='pipelines-in-go-using-channels-golang-'>Pipelines in Go using channels. <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>Pipelines in Go using channels. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://go.dev/blog/pipelines'>go.dev/blog/pipelines</a><br />
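A minimal sketch of the pattern along the lines of the linked post: each stage owns its output channel, closes it when done, and the next stage simply ranges over it.

```go
package main

import "fmt"

// gen emits the given numbers on a channel and closes it when done.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square reads from in, squares each value, and forwards it downstream.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	// Stages compose like a shell pipeline; range ends when the
	// final channel is closed.
	for v := range square(gen(2, 3, 4)) {
		fmt.Println(v) // 4, 9, 16
	}
}
```

The full post builds fan-in/fan-out and cancellation via a done channel on top of this basic shape.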
<br />
<h3 style='display: inline' id='some-nifty-ruby-tricks-in-my-opinion-ruby-'>Some nifty <span class='inlinecode'>#Ruby</span> tricks: In my opinion, Ruby ...</h3><br />
<br />
<span>Some nifty <span class='inlinecode'>#Ruby</span> tricks: In my opinion, Ruby is underrated. It&#39;s a great language even without Rails.</span><br />
<br />
<a class='textlink' href='http://www.rubyinside.com/21-ruby-tricks-902.html'>www.rubyinside.com/21-ruby-tricks-902.html</a><br />
<br />
<h3 style='display: inline' id='reflects-my-experience-'>Reflects my experience ...</h3><br />
<br />
<span>Reflects my experience</span><br />
<br />
<a class='textlink' href='https://simonwillison.net/2025/Sep/12/matt-webb/#atom-everything'>simonwillison.net/2025/Sep/12/matt-webb/#atom-everything</a><br />
<br />
<h3 style='display: inline' id='i-like-the-fact-that-markdown-fikes-a-rcs-an-'>I like the fact that Markdown files, an RCS, a ...</h3><br />
<br />
<span>I like the fact that Markdown files, an RCS, a text editor, and standard Unix tools like <span class='inlinecode'>#grep</span> and <span class='inlinecode'>#find</span> are all you need for taking notes digitally. I am the same :-) My favorite note-taking method:</span><br />
<br />
<a class='textlink' href='https://unixdigest.com/articles/my-favorite-note-taking-method.html'>unixdigest.com/articles/my-favorite-note-taking-method.html</a><br />
<br />
<h3 style='display: inline' id='rich-interactive-widgets-for-terminal-uis-it-'>Rich Interactive Widgets for Terminal UIs; it ...</h3><br />
<br />
<span>Rich Interactive Widgets for Terminal UIs; it doesn&#39;t always have to be BubbleTea. <span class='inlinecode'>#golang</span> <span class='inlinecode'>#terminal</span> <span class='inlinecode'>#widgets</span></span><br />
<br />
<a class='textlink' href='https://github.com/rivo/tview'>github.com/rivo/tview</a><br />
<br />
<h3 style='display: inline' id='always-fun-to-dig-in-the-perl-perl-woods-'>Always fun to dig in the <span class='inlinecode'>#Perl</span> @Perl woods. ...</h3><br />
<br />
<span>Always fun to dig in the <span class='inlinecode'>#Perl</span> @Perl woods. Now, no more Perl 4 pseudo multi-dimensional hashes in Perl 5 (well, they are still there when you require an older version for compatibility via use flag, though)! :-)</span><br />
<br />
<a class='textlink' href='https://www.effectiveperlprogramming.com/2024/11/goodbye-fake-multidimensional-data-structures/'>www.effectiveperlprogramming.com/2024/..-..fake-multidimensional-data-structures/</a><br />
<br />
<h3 style='display: inline' id='how-does-virtual-memory-work-ram-'>How does <span class='inlinecode'>#virtual</span> <span class='inlinecode'>#memory</span> work? <span class='inlinecode'>#ram</span> ...</h3><br />
<br />
<span>How does <span class='inlinecode'>#virtual</span> <span class='inlinecode'>#memory</span> work? <span class='inlinecode'>#ram</span></span><br />
<br />
<a class='textlink' href='https://drewdevault.com/2018/10/29/How-does-virtual-memory-work.html'>drewdevault.com/2018/10/29/How-does-virtual-memory-work.html</a><br />
<br />
<h3 style='display: inline' id='flamelens---an-interactive-flamegraph-viewer-in-'>flamelens - An interactive flamegraph viewer in ...</h3><br />
<br />
<span>flamelens - An interactive flamegraph viewer in the terminal. - Terminal Trove</span><br />
<br />
<a class='textlink' href='https://terminaltrove.com/flamelens/'>terminaltrove.com/flamelens/</a><br />
<br />
<h3 style='display: inline' id='you-can-now-run-ansible-playbooks-and-shell-'>You can now run Ansible Playbooks and shell ...</h3><br />
<br />
<span>You can now run Ansible Playbooks and shell scripts from your Terraform more easily <span class='inlinecode'>#ansible</span> <span class='inlinecode'>#terraform</span> <span class='inlinecode'>#iac</span></span><br />
<br />
<a class='textlink' href='https://danielmschmidt.de/posts/2025-09-26-terraform-actions-introduction/'>danielmschmidt.de/posts/2025-09-26-terraform-actions-introduction/</a><br />
<br />
<h3 style='display: inline' id='for-people-working-with-k8s-this-tool-is-'>For people working with <span class='inlinecode'>#k8s</span>, this tool is ...</h3><br />
<br />
<span>For people working with <span class='inlinecode'>#k8s</span>, this tool is useful. It lets you fuzzy find different k8s resource types and read a description about them: <span class='inlinecode'>#kubernetes</span> <span class='inlinecode'>#fuzzy</span> <span class='inlinecode'>#cli</span> <span class='inlinecode'>#tools</span> <span class='inlinecode'>#devops</span></span><br />
<br />
<a class='textlink' href='https://github.com/keisku/kubectl-explore'>github.com/keisku/kubectl-explore</a><br />
<br />
<h2 style='display: inline' id='november-2025'>November 2025</h2><br />
<br />
<h3 style='display: inline' id='yes-using-the-right-tool-for-the-job-and-'>Yes, using the right <span class='inlinecode'>#tool</span> for the job and ...</h3><br />
<br />
<span>Yes, using the right <span class='inlinecode'>#tool</span> for the job and also learning along the way!</span><br />
<br />
<a class='textlink' href='https://drewdevault.com/2016/09/17/Use-the-right-tool.html'>drewdevault.com/2016/09/17/Use-the-right-tool.html</a><br />
<br />
<h3 style='display: inline' id='some-neat-go-tricks-golang-'>Some neat Go tricks: <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>Some neat Go tricks: <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://harrisoncramer.me/15-go-sublteties-you-may-not-already-know/'>harrisoncramer.me/15-go-sublteties-you-may-not-already-know/</a><br />
<br />
<h3 style='display: inline' id='there-are-some-truths-in-this-sre-article-'>There are some truths in this <span class='inlinecode'>#SRE</span> article: ...</h3><br />
<br />
<span>There are some truths in this <span class='inlinecode'>#SRE</span> article. However, in my opinion, the more experience you have, the more you are expected to be able to resolve issues, so you can&#39;t always fall back on others. New starters are treated differently, of course. <span class='inlinecode'>#oncall</span></span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/what-i-tell-people-new-to-oncall/'>ntietz.com/blog/what-i-tell-people-new-to-oncall/</a><br />
<br />
<h3 style='display: inline' id='the-go-flight-recorder-is-a-tool-that-allows-'>The Go flight recorder is a tool that allows ...</h3><br />
<br />
<span>The Go flight recorder is a tool that allows developers to capture and analyze the execution of Go programs. It provides insights into performance, memory usage, and other runtime characteristics by recording events and metrics during the program&#39;s execution. Yet another tool why Go is awesome! <span class='inlinecode'>#go</span> <span class='inlinecode'>#golang</span> <span class='inlinecode'>#tools</span></span><br />
<br />
<a class='textlink' href='https://go.dev/blog/flight-recorder'>go.dev/blog/flight-recorder</a><br />
<br />
<h3 style='display: inline' id='this-is-useful-golang-'>This is useful <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>This is useful <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://antonz.org/chans/'>antonz.org/chans/</a><br />
<br />
<h3 style='display: inline' id='great-visually-animated-guide-how-raft-'>Great visually animated guide to how <span class='inlinecode'>#raft</span> ...</h3><br />
<br />
<span>Great visually animated guide to how <span class='inlinecode'>#raft</span> <span class='inlinecode'>#consensus</span> works</span><br />
<br />
<a class='textlink' href='http://thesecretlivesofdata.com/raft/'>thesecretlivesofdata.com/raft/</a><br />
<br />
<h3 style='display: inline' id='todays-junior-devs-who-skip-the-hard-'>"Today’s junior devs who skip the “hard ...</h3><br />
<br />
<span>"Today’s junior devs who skip the “hard way” may plateau early, lacking the depth to grow into senior engineers tomorrow." ... Avoiding Skill Atrophy in the Age of AI</span><br />
<br />
<a class='textlink' href='https://addyo.substack.com/p/avoiding-skill-atrophy-in-the-age'>addyo.substack.com/p/avoiding-skill-atrophy-in-the-age</a><br />
<br />
<h3 style='display: inline' id='i-actually-enjoyed-readong-through-the-fish-'>I actually enjoyed reading through the <span class='inlinecode'>#Fish</span> ...</h3><br />
<br />
<span>I actually enjoyed reading through the <span class='inlinecode'>#Fish</span> <span class='inlinecode'>#shell</span> docs. It&#39;s much cleaner than POSIX shells.</span><br />
<br />
<a class='textlink' href='https://fishshell.com/docs/current/language.html'>fishshell.com/docs/current/language.html</a><br />
<br />
<h3 style='display: inline' id='there-can-be-many-things-which-can-go-wrong-'>There are many things that can go wrong, ...</h3><br />
<br />
<span>There are many things that can go wrong, more than are mentioned here: <span class='inlinecode'>#linux</span></span><br />
<br />
<a class='textlink' href='https://notes.eatonphil.com/2025-03-27-things-that-go-wrong-with-disk-io.html'>notes.eatonphil.com/2025-03-27-things-that-go-wrong-with-disk-io.html</a><br />
<br />
<h3 style='display: inline' id='imho-motivation-is-not-always-enough-there-'>IMHO, motivation is not always enough. There ...</h3><br />
<br />
<span>IMHO, motivation is not always enough. There must also be some discipline; that helps when there&#39;s only a little or no motivation.</span><br />
<br />
<a class='textlink' href='https://world.hey.com/jason/motivation-50ab8280'>world.hey.com/jason/motivation-50ab8280</a><br />
<br />
<h3 style='display: inline' id='have-been-generating-those-cpu-flame-graphs-on-'>Have been generating those CPU flame graphs on ...</h3><br />
<br />
<span>Have been generating those CPU flame graphs on bare metal, so being able to use them in k8s seems to be pretty useful to me. <span class='inlinecode'>#flamegraphs</span> <span class='inlinecode'>#k8s</span> <span class='inlinecode'>#kubernetes</span></span><br />
<br />
<a class='textlink' href='https://www.percona.com/blog/kubernetes-observability-code-profiling-with-flame-graphs/'>www.percona.com/blog/kubernetes-observability-code-profiling-with-flame-graphs/</a><br />
<br />
<h3 style='display: inline' id='i-personally-don-t-like-the-typical-whiteboard-'>I personally don&#39;t like the typical whiteboard ...</h3><br />
<br />
<span>I personally don&#39;t like the typical whiteboard coding exercises, nor do I think LeetCode is the answer. It&#39;s impossible to assess the skills of a candidate in a few interviews, but it is possible to filter out the bad ones. The aim is to get an idea about the candidate and be positive about their potential. <span class='inlinecode'>#interview</span> <span class='inlinecode'>#interviewing</span> <span class='inlinecode'>#hiring</span></span><br />
<br />
<a class='textlink' href='https://danielabaron.me/blog/reimagining-technical-interviews/'>danielabaron.me/blog/reimagining-technical-interviews/</a><br />
<br />
<h3 style='display: inline' id='if-you-ve-wondered-how-cpus-and-operating-'>If you&#39;ve wondered how CPUs and operating ...</h3><br />
<br />
<span>If you&#39;ve wondered how CPUs and operating systems generally work and want the basics explained in an easily digestible format without going to college, have a look at CPU.land. I had a lot of fun reading it! <span class='inlinecode'>#CPU</span></span><br />
<br />
<a class='textlink' href='https://cpu.land'>cpu.land</a><br />
<br />
<h3 style='display: inline' id='and-there-s-an-unexpected-winner---erlang-'>And there&#39;s an unexpected winner :-) <span class='inlinecode'>#erlang</span> ...</h3><br />
<br />
<span>And there&#39;s an unexpected winner :-) <span class='inlinecode'>#erlang</span> <span class='inlinecode'>#architecture</span></span><br />
<br />
<a class='textlink' href='https://freedium.cfd/https://medium.com/@codeperfect/we-tested-7-languages-under-extreme-load-and-only-one-didnt-crash-it-wasn-t-what-we-expected-67f84c79dc34'>freedium.cfd/https://medium.com/@codep..-..t-wasn-t-what-we-expected-67f84c79dc34</a><br />
<br />
<h3 style='display: inline' id='is-it-it-this-is-it-what-is-it-in-ruby-34-'>Is it it? This is it. What Is It (in Ruby 3.4)? ...</h3><br />
<br />
<span>Is it it? This is it. What Is It (in Ruby 3.4)? <span class='inlinecode'>#ruby</span></span><br />
<br />
<a class='textlink' href='https://kevinjmurphy.com/posts/what-is-it-in-ruby-34/'>kevinjmurphy.com/posts/what-is-it-in-ruby-34/</a><br />
<br />
<h3 style='display: inline' id='from-my-recent-london-trip-i-ve-uploaded-'>From my recent <span class='inlinecode'>#London</span> trip, I&#39;ve uploaded ...</h3><br />
<br />
<span>From my recent <span class='inlinecode'>#London</span> trip, I&#39;ve uploaded some new Street Photography photos to my photo site. All photos were post-processed using open-source software, including <span class='inlinecode'>#Darktable</span> and <span class='inlinecode'>#Shotwell</span>. The site itself was generated with a simple <span class='inlinecode'>#bash</span> script! Not all photos are from London; just the recent additions are.</span><br />
<br />
<a class='textlink' href='https://irregular.ninja'>irregular.ninja</a><br />
<br />
<h3 style='display: inline' id='agreed-you-should-make-your-own-programming-'>Agreed, you should make your own programming ...</h3><br />
<br />
<span>Agreed, you should make your own programming language, even if it&#39;s only for the sake of learning. I also did so over a decade ago. Mine was called Fype - "For Your Program Execution"</span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/you-should-make-a-new-terrible-programming-language/'>ntietz.com/blog/you-should-make-a-new-terrible-programming-language/</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2010-05-09-the-fype-programming-language.gmi'>foo.zone/gemfeed/2010-05-09-the-fype-programming-language.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2010-05-09-the-fype-programming-language.html'>foo.zone/gemfeed/2010-05-09-the-fype-programming-language.html</a><br />
<br />
<h3 style='display: inline' id='principles-for-c-programming-c-'>Principles for C programming <span class='inlinecode'>#C</span> ...</h3><br />
<br />
<span>Principles for C programming <span class='inlinecode'>#C</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://drewdevault.com/2017/03/15/How-I-learned-to-stop-worrying-and-love-C.html'>drewdevault.com/2017/03/15/How-I-learned-to-stop-worrying-and-love-C.html</a><br />
<br />
<h3 style='display: inline' id='typst-appears-to-be-a-great-modern-'><span class='inlinecode'>#Typst</span> appears to be a great modern ...</h3><br />
<br />
<span><span class='inlinecode'>#Typst</span> appears to be a great modern alternative to <span class='inlinecode'>#LaTeX</span></span><br />
<br />
<h3 style='display: inline' id='things-you-can-do-with-a-debugger-but-not-with-'>Things you can do with a debugger but not with ...</h3><br />
<br />
<span>Things you can do with a debugger but not with print debugging <span class='inlinecode'>#debugger</span> <span class='inlinecode'>#debugging</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://mahesh-hegde.github.io/posts/what_debugger_can/'>mahesh-hegde.github.io/posts/what_debugger_can/</a><br />
<br />
<h3 style='display: inline' id='neat-tutorial-i-think-i-ve-to-try-jujutsu-'>Neat tutorial, I think I&#39;ve to try <span class='inlinecode'>#jujutsu</span> ...</h3><br />
<br />
<span>Neat tutorial, I think I&#39;ve to try <span class='inlinecode'>#jujutsu</span> out now! <span class='inlinecode'>#git</span> <span class='inlinecode'>#vcs</span> <span class='inlinecode'>#jujutsu</span> <span class='inlinecode'>#jj</span></span><br />
<br />
<a class='textlink' href='https://www.stavros.io/posts/switch-to-jujutsu-already-a-tutorial/'>www.stavros.io/posts/switch-to-jujutsu-already-a-tutorial/</a><br />
<br />
<h3 style='display: inline' id='wise-words-best-practices-are-not-rules-they-'>Wise words: Best practices are not rules. They ...</h3><br />
<br />
<span>Wise words: Best practices are not rules. They are guidelines that help you make better decisions. They are not absolute truths, but rather suggestions based on experience and common sense. You should always use your own judgment and adapt them to your specific situation.</span><br />
<br />
<a class='textlink' href='https://www.arp242.net/best-practices.html'>www.arp242.net/best-practices.html</a><br />
<br />
<h3 style='display: inline' id='how-to-build-a-linux-container-from-'>How to build a <span class='inlinecode'>#Linux</span> <span class='inlinecode'>#Container</span> from ...</h3><br />
<br />
<span>How to build a <span class='inlinecode'>#Linux</span> <span class='inlinecode'>#Container</span> from scratch without <span class='inlinecode'>#Docker</span>, <span class='inlinecode'>#Podman</span>, etc.</span><br />
<br />
<a class='textlink' href='https://michalpitr.substack.com/p/linux-container-from-scratch?r=gt6tv&amp;triedRedirect=true'>michalpitr.substack.com/p/linux-contai..-..rom-scratch?r=gt6tv&amp;triedRedirect=true</a><br />
<br />
<h3 style='display: inline' id='when-i-reach-the-point-where-i-am-trying-to-'>When I reach the point where I am trying to ...</h3><br />
<br />
<span>When I reach the point where I am trying to recover from panics in Go, something else has already gone wrong with the design of the codebase, IMHO. However, I must admit that my viewpoint may be flawed, as I code small, self-contained tools and rely on as few dependencies as possible. So I rarely rely on 3rd party libs, which may panic (which wouldn’t be nice to begin with; it would be better if they returned errors). <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://blog.devtrovert.com/p/go-panic-and-recover-dont-make-these'>blog.devtrovert.com/p/go-panic-and-recover-dont-make-these</a><br />
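To make that concrete, a small sketch in my own words (function names are hypothetical): return errors for expected failures, and if you must shield yourself from a panicking third-party call, recover only at a boundary and convert the panic into an error.

```go
package main

import (
	"errors"
	"fmt"
)

// safeDiv returns an error instead of panicking on division by zero.
func safeDiv(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

// callRisky shows the boundary pattern: recover once, at the edge of
// your API or goroutine, and turn the panic into an ordinary error.
func callRisky(f func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	f()
	return nil
}

func main() {
	if _, err := safeDiv(1, 0); err != nil {
		fmt.Println("error:", err)
	}
	err := callRisky(func() { panic("library bug") })
	fmt.Println(err) // recovered: library bug
}
```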
<br />
<h3 style='display: inline' id='personally-one-of-the-main-benefits-of-using-'>Personally, one of the main benefits of using ...</h3><br />
<br />
<span>Personally, one of the main benefits of using <span class='inlinecode'>#tmux</span> over other solutions is that I can use the same setup on my personal devices (Linux and BSD) and for work (<span class='inlinecode'>#macOS</span>): you might not need tmux</span><br />
<br />
<a class='textlink' href='https://bower.sh/you-might-not-need-tmux'>bower.sh/you-might-not-need-tmux</a><br />
<br />
<h2 style='display: inline' id='december-2025'>December 2025</h2><br />
<br />
<h3 style='display: inline' id='rhese-are-some-nice-ruby-tricks-ruby-is-onw-'>These are some nice <span class='inlinecode'>#Ruby</span> tricks (Ruby is one ...</h3><br />
<br />
<span>These are some nice <span class='inlinecode'>#Ruby</span> tricks (Ruby is one of my favourite languages): 11 Ruby Tricks You Haven’t Seen Before via @wallabagapp</span><br />
<br />
<a class='textlink' href='https://www.rubyguides.com/2016/01/ruby-tricks/'>www.rubyguides.com/2016/01/ruby-tricks/</a><br />
<br />
<h3 style='display: inline' id='that-s-fun-use-the-c-preprocessor-as-a-html-'>That&#39;s fun: use the C preprocessor as an HTML ...</h3><br />
<br />
<span>That&#39;s fun: use the C preprocessor as an HTML template engine! <span class='inlinecode'>#c</span> <span class='inlinecode'>#cpp</span> <span class='inlinecode'>#fun</span></span><br />
<br />
<a class='textlink' href='https://wheybags.com/blog/macroblog.html'>wheybags.com/blog/macroblog.html</a><br />
<br />
<h3 style='display: inline' id='jq-but-for-markdown-thats-interesting-'><span class='inlinecode'>#jq</span> but for <span class='inlinecode'>#Markdown</span>? That&#39;s interesting, ...</h3><br />
<br />
<span><span class='inlinecode'>#jq</span> but for <span class='inlinecode'>#Markdown</span>? That&#39;s interesting, never thought of that. mdq: jq for Markdown via @wallabagapp</span><br />
<br />
<a class='textlink' href='https://github.com/yshavit/mdq'>github.com/yshavit/mdq</a><br />
<br />
<h3 style='display: inline' id='elvish-seems-to-be-a-neat-little-shell-it-s-'>Elvish seems to be a neat little shell. It&#39;s ...</h3><br />
<br />
<span>Elvish seems to be a neat little shell. It&#39;s implemented in <span class='inlinecode'>#Golang</span> and can make use of the great Go standard library. The language is more modern than other shells out there (e.g., supporting nested data structures) and eliminates backward compatibility issues (e.g., awkward string parsing with spaces that often causes problems in traditional shells). Elvish also comes with some neat interactive TUI elements. Furthermore, there will be a whole TUI framework built directly into the shell. If I weren&#39;t so deeply intertwined with <span class='inlinecode'>#bash</span> and <span class='inlinecode'>#zsh</span>, I would personally give <span class='inlinecode'>#Elvish</span> a try... Interesting, at least, it is.</span><br />
<br />
<a class='textlink' href='https://elv.sh/'>elv.sh/</a><br />
<br />
<h3 style='display: inline' id='google-sre-required-better-wifi-on-the-'>Google <span class='inlinecode'>#SRE</span> required better Wifi on the ...</h3><br />
<br />
<span>Google <span class='inlinecode'>#SRE</span> required better Wifi on the toilet, otherwise YouTube could go down :-)</span><br />
<br />
<a class='textlink' href='https://podcasts.apple.com/us/podcast/incident-response-with-sarah-butt-and-vrai-stacey/id1615778073?i=1000672365156'>podcasts.apple.com/us/podcast/incident..-..ai-stacey/id1615778073?i=1000672365156</a><br />
<br />
<h3 style='display: inline' id='indeed-'>Indeed ...</h3><br />
<br />
<span>Indeed</span><br />
<br />
<a class='textlink' href='https://aaronfrancis.com/2024/because-i-wanted-to-12c5137c'>aaronfrancis.com/2024/because-i-wanted-to-12c5137c</a><br />
<br />
<h3 style='display: inline' id='very-interesting-post-how-pods-are-scheduled-'>Very interesting post how pods are scheduled ...</h3><br />
<br />
<span>Very interesting post on how pods are scheduled and terminated, with some tips on how to improve reliability (pods may be terminated before ingress rules are updated, so some traffic may hit non-existent pods) <span class='inlinecode'>#k8s</span> <span class='inlinecode'>#kubernetes</span></span><br />
<br />
<a class='textlink' href='https://learnk8s.io/graceful-shutdown'>learnk8s.io/graceful-shutdown</a><br />
<br />
<h3 style='display: inline' id='i-have-added-observability-to-the-kubernetes-'>I have added observability to the <span class='inlinecode'>#Kubernetes</span> ...</h3><br />
<br />
<span>I have added observability to the <span class='inlinecode'>#Kubernetes</span> cluster in the eighth part of my <span class='inlinecode'>#Kubernetes</span> on <span class='inlinecode'>#FreeBSD</span> series. <span class='inlinecode'>#Grafana</span> <span class='inlinecode'>#Loki</span> <span class='inlinecode'>#Prometheus</span> <span class='inlinecode'>#Alloy</span> <span class='inlinecode'>#k3s</span> <span class='inlinecode'>#OpenBSD</span> <span class='inlinecode'>#RockyLinux</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-12-07-f3s-kubernetes-with-freebsd-part-8.gmi'>foo.zone/gemfeed/2025-12-07-f3s-kubernetes-with-freebsd-part-8.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>foo.zone/gemfeed/2025-12-07-f3s-kubernetes-with-freebsd-part-8.html</a><br />
<br />
<h3 style='display: inline' id='wondering-where-i-could-make-use-of-it-'>Wondering where I could make use of it ...</h3><br />
<br />
<span>Wondering where I could make use of it: "An SVG is all you need" <span class='inlinecode'>#SVG</span></span><br />
<br />
<a class='textlink' href='https://jon.recoil.org/blog/2025/12/an-svg-is-all-you-need.html'>jon.recoil.org/blog/2025/12/an-svg-is-all-you-need.html</a><br />
<br />
<h3 style='display: inline' id='trying-out-cosmic-desktop-seems-'>Trying out <span class='inlinecode'>#COSMIC</span> <span class='inlinecode'>#Desktop</span>... seems ...</h3><br />
<br />
<span>Trying out <span class='inlinecode'>#COSMIC</span> <span class='inlinecode'>#Desktop</span>... seems snappier than <span class='inlinecode'>#GNOME</span> and I like the tiling features...</span><br />
<br />
<h3 style='display: inline' id='best-thing-i-ve-ever-read-about-container-'>Best thing I&#39;ve ever read about <span class='inlinecode'>#container</span> ...</h3><br />
<br />
<span>Best thing I&#39;ve ever read about <span class='inlinecode'>#container</span> <span class='inlinecode'>#security</span> in <span class='inlinecode'>#kubernetes</span>:</span><br />
<br />
<a class='textlink' href='https://learnkube.com/security-contexts'>learnkube.com/security-contexts</a><br />
<br />
<h3 style='display: inline' id='while-acknowledging-luck-in-finding-the-right-'>While acknowledging luck in finding the right ...</h3><br />
<br />
<span>While acknowledging luck in finding the right team and company culture, the author stresses that staying and choosing long-term ownership is a deliberate choice for those valuing deep technical ownership over external validation: Why I Ignore The Spotlight as a Staff Engineer <span class='inlinecode'>#engineering</span></span><br />
<br />
<a class='textlink' href='https://lalitm.com/software-engineering-outside-the-spotlight/'>lalitm.com/software-engineering-outside-the-spotlight/</a><br />
<br />
<h3 style='display: inline' id='great-explanation-slo-sla-sli-sre-'>Great explanation <span class='inlinecode'>#slo</span> <span class='inlinecode'>#sla</span> <span class='inlinecode'>#sli</span> <span class='inlinecode'>#sre</span> ...</h3><br />
<br />
<span>Great explanation <span class='inlinecode'>#slo</span> <span class='inlinecode'>#sla</span> <span class='inlinecode'>#sli</span> <span class='inlinecode'>#sre</span></span><br />
<br />
<a class='textlink' href='https://blog.alexewerlof.com/p/sla-vs-slo'>blog.alexewerlof.com/p/sla-vs-slo</a><br />
<br />
<h3 style='display: inline' id='nice-service-you-send-a-drive-they-host-'>Nice service, you send a drive, they host ...</h3><br />
<br />
<span>Nice service, you send a drive, they host <span class='inlinecode'>#ZFS</span> for you!</span><br />
<br />
<a class='textlink' href='https://zfs.rent/'>zfs.rent/</a><br />
<br />
<span>Other related posts:</span><br />
<br />
<a class='textlink' href='./2025-01-01-posts-from-october-to-december-2024.html'>2025-01-01 Posts from October to December 2024</a><br />
<a class='textlink' href='./2025-07-01-posts-from-january-to-june-2025.html'>2025-07-01 Posts from January to June 2025</a><br />
<a class='textlink' href='./2026-01-01-posts-from-july-to-december-2025.html'>2026-01-01 Posts from July to December 2025 (You are currently reading this)</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Cloudless Kobo Forma with KOReader</title>
        <link href="https://foo.zone/gemfeed/2026-01-01-cloudless-kobo-forma-with-koreader.html" />
        <id>https://foo.zone/gemfeed/2026-01-01-cloudless-kobo-forma-with-koreader.html</id>
        <updated>2025-12-31T16:08:33+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>I am a reader, and for years I've been searching for a good digital e-reader to complement my paper books. I advocate for privacy first and prefer open-source or self-hosted solutions. If that is not possible, I opt for offline solutions. Even if I don't have anything to hide, the tinkerer in me wants those things anyway. I found my ideal device in the Kobo Forma 7 years ago. Now, I use it without Kobo's cloud sync, and in this post, I'll show you how.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='cloudless-kobo-forma-with-koreader'>Cloudless Kobo Forma with KOReader</h1><br />
<br />
<span class='quote'>Published at 2025-12-31T16:08:33+02:00</span><br />
<br />
<span>I am a reader, and for years I&#39;ve been searching for a good digital e-reader to complement my paper books. I advocate for privacy first and prefer open-source or self-hosted solutions. If that is not possible, I opt for offline solutions. Even if I don&#39;t have anything to hide, the tinkerer in me wants those things anyway. I found my ideal device in the Kobo Forma 7 years ago. Now, I use it without Kobo&#39;s cloud sync, and in this post, I&#39;ll show you how.</span><br />
<br />
<pre>
Art by Donovan Bake

      __...--~~~~~-._   _.-~~~~~--...__
    //               `V&#39;               \\ 
   //                 |                 \\ 
  //__...--~~~~~~-._  |  _.-~~~~~~--...__\\ 
 //__.....----~~~~._\ | /_.~~~~----.....__\\
====================\\|//====================
                dwb `---`
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#cloudless-kobo-forma-with-koreader'>Cloudless Kobo Forma with KOReader</a></li>
<li>⇢ <a href='#koreader-to-the-rescue'>KOReader to the Rescue</a></li>
<li>⇢ ⇢ <a href='#installation'>Installation</a></li>
<li>⇢ <a href='#sideloaded-mode'>Sideloaded Mode</a></li>
<li>⇢ <a href='#my-workflow'>My Workflow</a></li>
<li>⇢ ⇢ <a href='#sideloading-books'>Sideloading Books</a></li>
<li>⇢ ⇢ <a href='#koreader-sync-server'>KOReader Sync Server</a></li>
<li>⇢ ⇢ <a href='#exporting-book-notes-and-highlights'>Exporting Book Notes and Highlights</a></li>
<li>⇢ ⇢ <a href='#wallabag-integration'>Wallabag Integration</a></li>
<li>⇢ ⇢ <a href='#purchasing-e-books'>Purchasing e-books</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<br />
<span>I initially bought the Kobo Forma because I wanted a device with a large screen for reading PDFs and ePubs. However, as time went on, I became more concerned about the privacy implications of having all my reading data synced to the Kobo cloud. So, I looked into alternative ways to use this device.</span><br />
<br />
<a href='./cloudless-kobo-forma-with-koreader/forma.jpg'><img alt='KOReader running on Kobo Forma' title='KOReader running on Kobo Forma' src='./cloudless-kobo-forma-with-koreader/forma.jpg' /></a><br />
<br />
<span>The Kobo Forma is so old that it can&#39;t be purchased from Kobo directly anymore. But I love the form factor; it&#39;s much lighter than the Kobo Sage and still has a 7" screen. It&#39;s just that the stock firmware is becoming too slow and sluggish.</span><br />
<br />
<a class='textlink' href='https://gl.kobobooks.com/products/kobo-forma'>Kobo Forma</a><br />
<br />
<span>Note: Some of the screenshots in this post are taken from my Kobo Clara HD, which is another Kobo eReader I have. It&#39;s smaller and better for travel, and I use the same KOReader setup on both devices.</span><br />
<br />
<h2 style='display: inline' id='koreader-to-the-rescue'>KOReader to the Rescue</h2><br />
<br />
<span>I keep my Kobo Forma disconnected from the cloud entirely, and KOReader makes that possible. KOReader is a versatile, open-source document and image viewer which can also be installed on some E Ink reader devices like the Kobo Forma. No cloud sync, no tracking, just reading.</span><br />
<br />
<a class='textlink' href='https://koreader.rocks/'>KOReader</a><br />
<br />
<span>By not syncing my reading progress and library to Kobo&#39;s cloud service, I retain full ownership and control over my data. There&#39;s no risk of my personal reading habits being accessed or mined by third parties. </span><br />
<br />
<h3 style='display: inline' id='installation'>Installation</h3><br />
<br />
<span>Installing KOReader is straightforward. You can follow the official guide for that. I used the Linux one: </span><br />
<br />
<a class='textlink' href='https://github.com/koreader/koreader/wiki/Installation-on-desktop-linux'>https://github.com/koreader/koreader/wiki/Installation-on-desktop-linux</a><br />
<br />
<span>Basically, all I had to do was download a <span class='inlinecode'>.zip</span> file of the KOReader binary and an <span class='inlinecode'>install.sh</span> script. Then I plugged in the Kobo Forma via USB and ran the install script, which did the rest for me.</span><br />
<br />
<a href='./cloudless-kobo-forma-with-koreader/install.jpg'><img alt='KOReader installation via USB' title='KOReader installation via USB' src='./cloudless-kobo-forma-with-koreader/install.jpg' /></a><br />
<br />
<span>After the initial install, KOReader can update itself through its menus.</span><br />
<br />
<a href='./cloudless-kobo-forma-with-koreader/update.jpg'><img alt='KOReader self-update menu' title='KOReader self-update menu' src='./cloudless-kobo-forma-with-koreader/update.jpg' /></a><br />
<br />
<span>It is worth noting that after the KOReader install, the Kobo Forma still boots into the proprietary window manager. To start KOReader, you have to select it from the new "Nickel Menu". KOReader will then stay open until you reboot the device. It&#39;s a small annoyance, but it&#39;s well worth it!</span><br />
<br />
<a href='./cloudless-kobo-forma-with-koreader/nickel-menu.jpg'><img alt='Nickel Menu' title='Nickel Menu' src='./cloudless-kobo-forma-with-koreader/nickel-menu.jpg' /></a><br />
<br />
<h2 style='display: inline' id='sideloaded-mode'>Sideloaded Mode</h2><br />
<br />
<span>To use the Kobo Forma completely without a Kobo account, you can enable "Sideloaded Mode". This mode allows you to use the device without being signed in to a Kobo account. When enabled, the home screen will default to your library instead of showing Kobo recommendations, and the sync button will disappear. This prevents the device from trying to sync with the Kobo cloud.</span><br />
<br />
<span>To enable it, you need to edit the configuration file. Connect your Kobo device to your computer via USB. Open the file <span class='inlinecode'>.kobo/Kobo/Kobo eReader.conf</span> and add the following lines:</span><br />
<br />
<pre>
[ApplicationPreferences]
SideloadedMode=true
</pre>
<br />
<span>After saving the file, eject the device. You might need to restart it for the changes to take effect.</span><br />
<br />
<span>KOReader is much faster than the stock firmware; it feels about three times as fast. Before trying out KOReader, I was thinking about selling the Forma as it felt too sluggish. But now there is new life in this 7-year-old device! It also offers a night mode (inverted colors), a feature that the stock firmware on the Forma is lacking.</span><br />
<br />
<a href='./cloudless-kobo-forma-with-koreader/dark-mode.jpg'><img alt='KOReader dark mode (inverted colors)' title='KOReader dark mode (inverted colors)' src='./cloudless-kobo-forma-with-koreader/dark-mode.jpg' /></a><br />
<br />
<h2 style='display: inline' id='my-workflow'>My Workflow</h2><br />
<br />
<span>My workflow is simple and efficient, relying on a direct USB connection to my Linux laptop for sideloading books and a self-hosted sync server for progress synchronization.</span><br />
<br />
<h3 style='display: inline' id='sideloading-books'>Sideloading Books</h3><br />
<br />
<span>I connect my Kobo Forma to my Linux laptop via a USB-C cable. The device is automatically recognized as a storage device, and I can directly access its storage to copy over ePubs, PDFs, and other supported formats.</span><br />
<br />
<h3 style='display: inline' id='koreader-sync-server'>KOReader Sync Server</h3><br />
<br />
<span>To keep my reading progress synchronized across multiple devices (my Kobo, my phone, and my Linux laptop), I run a <span class='inlinecode'>koreader-sync-server</span> instance in my k3s cluster. This allows me to pick up reading where I left off, no matter which device I&#39;m using.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/kobo-sync-server'>https://codeberg.org/snonux/conf/src/branch/master/f3s/kobo-sync-server</a><br />
<br />
<a href='./cloudless-kobo-forma-with-koreader/sync-server.jpg'><img alt='Custom sync server configuration' title='Custom sync server configuration' src='./cloudless-kobo-forma-with-koreader/sync-server.jpg' /></a><br />
<br />
<span>To configure the sync server in KOReader, open a document, go to "Settings" -&gt; "Progress Sync", and select "Custom sync server". There you can enter the URL of your server and your credentials. The progress can then also be synced to and from KOReader running on other devices (e.g. a laptop or a smartphone!)</span><br />
<br />
<a href='./cloudless-kobo-forma-with-koreader/koreader-sync.jpg'><img alt='KOReader sync menu' title='KOReader sync menu' src='./cloudless-kobo-forma-with-koreader/koreader-sync.jpg' /></a><br />
<br />
<h3 style='display: inline' id='exporting-book-notes-and-highlights'>Exporting Book Notes and Highlights</h3><br />
<br />
<span>KOReader allows you to export book notes and highlights directly from the device in various formats, including plain text and Markdown. Unfortunately, these are not automatically synced to the sync server. I have an offline backup procedure where I regularly sync them via USB to my backup server. There&#39;s a 3rd party plugin available for KOReader, which seems to be able to do this kind of sync, though.</span><br />
<br />
<h3 style='display: inline' id='wallabag-integration'>Wallabag Integration</h3><br />
<br />
<span>KOReader has built-in Wallabag support. This allows me to save articles from the web to my self-hosted Wallabag instance and then read them comfortably on my Kobo.</span><br />
<br />
<a class='textlink' href='https://wallabag.org/'>https://wallabag.org/</a><br />
<br />
<span>I haven&#39;t tried it out yet, though. I may do so and will update this blog post once I have.</span><br />
<br />
<h3 style='display: inline' id='purchasing-e-books'>Purchasing e-books</h3><br />
<br />
<span>If you search a little, you can also find stores that sell digital rights management (DRM)-free e-books (in ePub format). For example, buecher.de sells German and English books. Before purchasing, just make sure that the book is DRM-free (not all of their books are).</span><br />
<br />
<span>You can see all the books I&#39;ve read here:</span><br />
<br />
<a class='textlink' href='../about/novels.html'>Novels I&#39;ve read</a><br />
<a class='textlink' href='../about/resources.html'>Resources, Technical Books, Podcasts, Courses and Guides I recommend</a><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>I&#39;m really happy with this setup. Offline Kobo with KOReader, manual book transfers, self-hosted services—it&#39;s simple, private, and the reading experience is just great. If you care about owning your data (and not getting distracted), give it a try.</span><br />
<br />
<span>Other related posts:</span><br />
<br />
<a class='textlink' href='./2026-01-01-using-supernote-nomad-offline.html'>2026-01-01 Using Supernote Nomad offline</a><br />
<a class='textlink' href='./2026-01-01-cloudless-kobo-forma-with-koreader.html'>2026-01-01 Cloudless Kobo Forma with KOReader (You are currently reading this)</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>X-RAG Observability Hackathon</title>
        <link href="https://foo.zone/gemfeed/2025-12-24-x-rag-observability-hackathon.html" />
        <id>https://foo.zone/gemfeed/2025-12-24-x-rag-observability-hackathon.html</id>
        <updated>2025-12-24T09:45:29+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This post describes my hackathon efforts adding observability to X-RAG, the extensible Retrieval-Augmented Generation (RAG) platform built by my brother Florian. I made time over the weekend to join his 3-day hackathon (attending 2 days) with the goal of instrumenting his existing distributed system with observability. What started as 'let's add some metrics' turned into a comprehensive implementation of the three pillars of observability: tracing, metrics, and logs.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='x-rag-observability-hackathon'>X-RAG Observability Hackathon</h1><br />
<br />
<span class='quote'>Published at 2025-12-24T09:45:29+02:00</span><br />
<br />
<span>This post describes my hackathon efforts adding observability to X-RAG, the extensible Retrieval-Augmented Generation (RAG) platform built by my brother Florian. I made time over the weekend to join his 3-day hackathon (attending 2 days) with the goal of instrumenting his existing distributed system with observability. What started as "let&#39;s add some metrics" turned into a comprehensive implementation of the three pillars of observability: tracing, metrics, and logs.</span><br />
<br />
<a class='textlink' href='https://github.com/florianbuetow/x-rag'>X-RAG source code on GitHub</a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#x-rag-observability-hackathon'>X-RAG Observability Hackathon</a></li>
<li>⇢ <a href='#what-is-x-rag'>What is X-RAG?</a></li>
<li>⇢ <a href='#running-kubernetes-locally-with-kind'>Running Kubernetes locally with Kind</a></li>
<li>⇢ <a href='#motivation'>Motivation</a></li>
<li>⇢ <a href='#the-observability-stack'>The observability stack</a></li>
<li>⇢ <a href='#grafana-alloy-the-unified-collector'>Grafana Alloy: the unified collector</a></li>
<li>⇢ <a href='#centralised-logging-with-loki'>Centralised logging with Loki</a></li>
<li>⇢ ⇢ <a href='#alloy-configuration-for-logs'>Alloy configuration for logs</a></li>
<li>⇢ ⇢ <a href='#querying-logs-with-logql'>Querying logs with LogQL</a></li>
<li>⇢ <a href='#metrics-with-prometheus'>Metrics with Prometheus</a></li>
<li>⇢ ⇢ <a href='#alloy-configuration-for-application-metrics'>Alloy configuration for application metrics</a></li>
<li>⇢ ⇢ <a href='#kubernetes-metrics-kubelet-cadvisor-and-kube-state-metrics'>Kubernetes metrics: kubelet, cAdvisor, and kube-state-metrics</a></li>
<li>⇢ ⇢ <a href='#infrastructure-metrics-kafka-redis-minio'>Infrastructure metrics: Kafka, Redis, MinIO</a></li>
<li>⇢ <a href='#distributed-tracing-with-tempo'>Distributed tracing with Tempo</a></li>
<li>⇢ ⇢ <a href='#understanding-traces-spans-and-the-trace-tree'>Understanding traces, spans, and the trace tree</a></li>
<li>⇢ ⇢ <a href='#how-trace-context-propagates'>How trace context propagates</a></li>
<li>⇢ ⇢ <a href='#implementation'>Implementation</a></li>
<li>⇢ ⇢ <a href='#alloy-configuration-for-traces'>Alloy configuration for traces</a></li>
<li>⇢ <a href='#async-ingestion-trace-walkthrough'>Async ingestion trace walkthrough</a></li>
<li>⇢ ⇢ <a href='#step-1-ingest-a-document'>Step 1: Ingest a document</a></li>
<li>⇢ ⇢ <a href='#step-2-find-the-ingestion-trace'>Step 2: Find the ingestion trace</a></li>
<li>⇢ ⇢ <a href='#step-3-fetch-the-complete-trace'>Step 3: Fetch the complete trace</a></li>
<li>⇢ ⇢ <a href='#step-4-analyse-the-async-trace'>Step 4: Analyse the async trace</a></li>
<li>⇢ ⇢ <a href='#viewing-traces-in-grafana'>Viewing traces in Grafana</a></li>
<li>⇢ <a href='#end-to-end-search-trace-walkthrough'>End-to-end search trace walkthrough</a></li>
<li>⇢ ⇢ <a href='#step-1-make-a-search-request'>Step 1: Make a search request</a></li>
<li>⇢ ⇢ <a href='#step-2-query-tempo-for-the-trace'>Step 2: Query Tempo for the trace</a></li>
<li>⇢ ⇢ <a href='#step-3-analyse-the-trace'>Step 3: Analyse the trace</a></li>
<li>⇢ ⇢ <a href='#step-4-search-traces-with-traceql'>Step 4: Search traces with TraceQL</a></li>
<li>⇢ ⇢ <a href='#viewing-the-search-trace-in-grafana'>Viewing the search trace in Grafana</a></li>
<li>⇢ <a href='#correlating-the-three-signals'>Correlating the three signals</a></li>
<li>⇢ <a href='#grafana-dashboards'>Grafana dashboards</a></li>
<li>⇢ <a href='#results-two-days-well-spent'>Results: two days well spent</a></li>
<li>⇢ <a href='#slis-slos-and-slas'>SLIs, SLOs and SLAs</a></li>
<li>⇢ <a href='#using-amp-for-ai-assisted-development'>Using Amp for AI-assisted development</a></li>
<li>⇢ <a href='#other-changes-along-the-way'>Other changes along the way</a></li>
<li>⇢ <a href='#lessons-learned'>Lessons learned</a></li>
</ul><br />
<h2 style='display: inline' id='what-is-x-rag'>What is X-RAG?</h2><br />
<br />
<span>X-RAG is an extensible RAG (Retrieval-Augmented Generation) platform running on Kubernetes. The idea behind RAG is simple: instead of asking an LLM to answer questions from its training data alone, you first retrieve relevant documents from your own knowledge base, then feed those documents to the LLM as context. The LLM synthesises an answer grounded in your actual content—reducing hallucinations and enabling answers about private or recent information the model was never trained on.</span><br />
<br />
<span>X-RAG handles the full pipeline: ingest documents, chunk them into searchable pieces, generate vector embeddings, store them in a vector database, and at query time, retrieve relevant chunks and pass them to an LLM for answer generation. The system supports both local LLMs (Florian runs his on a beefy desktop) and cloud APIs like OpenAI. I configured an OpenAI API key since my laptop&#39;s CPU and GPU aren&#39;t fast enough for decent local inference.</span><br />
<br />
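<span>To make the chunking step concrete, here is a minimal Python sketch (the function name and parameters are illustrative, not X-RAG&#39;s actual API); overlapping chunks preserve context across chunk boundaries before embedding:</span><br />
<br />

```python
# Toy chunker: split a document into fixed-size, overlapping pieces.
# Purely illustrative -- X-RAG's real chunking logic may differ.

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into `size`-character chunks, carrying `overlap` chars over."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):  # last chunk reached the end of the text
            break
    return chunks

doc = "word " * 100  # 500 characters of toy content
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))  # → 3 200
```

<br />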
<span>All services are implemented in Python. I&#39;m more used to Ruby, Go, and Bash these days, but for this project it didn&#39;t matter—Python&#39;s OpenTelemetry integration is straightforward, I wasn&#39;t planning to write or rewrite tons of application code, and with GenAI assistance the language barrier was a non-issue. The OpenTelemetry concepts and patterns should translate to other languages too—the SDK APIs are intentionally similar across Python, Go, Java, and others.</span><br />
<br />
<span>X-RAG consists of several independently scalable microservices:</span><br />
<br />
<ul>
<li>Search UI: FastAPI web interface for queries</li>
<li>Ingestion API: Document upload endpoint</li>
<li>Embedding Service: gRPC service for vector embeddings</li>
<li>Indexer: Kafka consumer that processes documents</li>
<li>Search Service: gRPC service orchestrating the RAG pipeline</li>
</ul><br />
<span>The Embedding Service deserves extra explanation because in the beginning I didn&#39;t really know what it was. Text isn&#39;t directly searchable in a vector database—you need to convert it to numerical vectors (embeddings) that capture semantic meaning. The Embedding Service takes text chunks and calls an embedding model (OpenAI&#39;s <span class='inlinecode'>text-embedding-3-small</span> in my case, or a local model on Florian&#39;s setup) to produce these vectors. For the LLM search completion answer, I used <span class='inlinecode'>gpt-4o-mini</span>.</span><br />
<br />
<span>Similar concepts end up with similar vectors, so "What is machine learning?" and "Explain ML" produce vectors close together in the embedding space. At query time, your question gets embedded too, and the vector database finds chunks with nearby vectors—that&#39;s semantic search.</span><br />
<br />
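<span>A toy illustration of "nearby vectors", using cosine similarity over made-up 3-dimensional vectors (real embeddings such as <span class='inlinecode'>text-embedding-3-small</span> have 1536 dimensions; the numbers below are invented):</span><br />
<br />

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy embeddings -- not the output of a real model.
query   = [0.9, 0.1, 0.0]  # "What is machine learning?"
ml_doc  = [0.8, 0.2, 0.1]  # "Explain ML"
zfs_doc = [0.0, 0.1, 0.9]  # an unrelated topic

# The semantically similar chunk scores higher, so a vector database
# would rank it first for this query.
print(cosine_similarity(query, ml_doc) > cosine_similarity(query, zfs_doc))  # → True
```

<br />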
<span>The data layer includes Weaviate (vector database with hybrid search), Kafka (message queue), MinIO (object storage), and Redis (cache). All of this runs in a Kind Kubernetes cluster for local development, with the same manifests deployable to production.</span><br />
<br />
<pre>
┌─────────────────────────────────────────────────────────────────────────┐
│                      X-RAG Kubernetes Cluster                           │
├─────────────────────────────────────────────────────────────────────────┤
│   ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐    │
│   │ Search UI   │  │Search Svc   │  │Embed Service│  │   Indexer   │    │
│   └──────┬──────┘  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘    │
│          │                │                │                │           │
│          └────────────────┴────────────────┴────────────────┘           │
│                                    │                                    │
│                                    ▼                                    │
│          ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│          │  Weaviate   │  │   Kafka     │  │   MinIO     │              │
│          └─────────────┘  └─────────────┘  └─────────────┘              │
└─────────────────────────────────────────────────────────────────────────┘
</pre>
<br />
<h2 style='display: inline' id='running-kubernetes-locally-with-kind'>Running Kubernetes locally with Kind</h2><br />
<br />
<span>X-RAG runs on Kubernetes, but you don&#39;t need a cloud account to develop it. The project uses Kind (Kubernetes in Docker)—a tool originally created by the Kubernetes SIG for testing Kubernetes itself.</span><br />
<br />
<a class='textlink' href='https://kind.sigs.k8s.io/'>Kind - Kubernetes in Docker</a><br />
<br />
<span>Kind spins up a full Kubernetes cluster using Docker containers as nodes. The control plane (API server, etcd, scheduler, controller-manager) runs in one container, and worker nodes run in separate containers. Inside these "node containers," pods run just like they would on real servers—using containerd as the container runtime. It&#39;s containers all the way down.</span><br />
<br />
<span>Technically, each Kind node is a Docker container running a minimal Linux image with kubelet and containerd installed. When you deploy a pod, kubelet inside the node container instructs containerd to pull and run the container image. So you have Docker running node containers, and inside those, containerd running application containers. Network-wise, Kind sets up a Docker bridge network and uses CNI plugins (kindnet by default) for pod networking within the cluster.</span><br />
<br />
<pre>
$ docker ps --format "table {{.Names}}\t{{.Image}}"
NAMES                  IMAGE
xrag-k8-control-plane  kindest/node:v1.32.0
xrag-k8-worker         kindest/node:v1.32.0
xrag-k8-worker2        kindest/node:v1.32.0
</pre>
<br />
<span>The <span class='inlinecode'>kindest/node</span> image contains everything needed: kubelet, containerd, CNI plugins, and pre-pulled pause containers. Port mappings in the Kind config expose services to the host—that&#39;s how http://localhost:8080 reaches the search-ui running inside a pod, inside a worker container, inside Docker.</span><br />
<br />
<pre>
┌─────────────────────────────────────────────────────────────────────────┐
│                           Docker Host                                   │
├─────────────────────────────────────────────────────────────────────────┤
│  ┌───────────────────┐  ┌───────────────────┐  ┌───────────────────┐    │
│  │ xrag-k8-control   │  │ xrag-k8-worker    │  │ xrag-k8-worker2   │    │
│  │ -plane (container)│  │ (container)       │  │ (container)       │    │
│  │                   │  │                   │  │                   │    │
│  │ K8s API server    │  │ Pods:             │  │ Pods:             │    │
│  │ etcd, scheduler   │  │ • search-ui       │  │ • weaviate        │    │
│  │                   │  │ • search-service  │  │ • kafka           │    │
│  │                   │  │ • embedding-svc   │  │ • prometheus      │    │
│  │                   │  │ • indexer         │  │ • grafana         │    │
│  └───────────────────┘  └───────────────────┘  └───────────────────┘    │
└─────────────────────────────────────────────────────────────────────────┘
</pre>
<br />
<span>Why Kind? It gives you a real Kubernetes environment—the same manifests deploy to production clouds unchanged. No minikube quirks, no Docker Compose translation layer. Just Kubernetes. I already have a k3s cluster running at home, but Kind made collaboration easier—everyone working on X-RAG gets the exact same setup by cloning the repo and running <span class='inlinecode'>make cluster-start</span>.</span><br />
<br />
<span>Florian developed X-RAG on macOS, but it worked seamlessly on my Linux laptop. The only difference was Docker&#39;s resource allocation: on macOS you configure limits in Docker Desktop, on Linux it uses host resources directly. That&#39;s because macOS has no Linux kernel—Docker Desktop runs the Linux containers inside a lightweight virtual machine—whereas on Linux, containers share the host kernel directly.</span><br />
<br />
<span>My hardware: a ThinkPad X1 Carbon Gen 9 with an 11th Gen Intel Core i7-1185G7 (4 cores, 8 threads at 3.00GHz) and 32GB RAM (running Fedora Linux). During the hackathon, memory usage peaked around 15GB—comfortable headroom. CPU was the bottleneck; with ~38 pods running across all namespaces (rag-system, monitoring, kube-system, etc.), plus Discord for the remote video call and Tidal streaming hi-res music, things got tight. When rebuilding Docker images or restarting the cluster, Discord video and audio would stutter—my fellow hackers probably wondered why I kept freezing mid-sentence. A beefier CPU would have meant less waiting and smoother calls, but it was manageable.</span><br />
<br />
<h2 style='display: inline' id='motivation'>Motivation</h2><br />
<br />
<span>When I joined the hackathon, Florian&#39;s X-RAG was functional but opaque. With five services communicating via gRPC, Kafka, and HTTP, debugging was cumbersome. When a search request took 5 seconds, there was no visibility into where the time was being spent. Was it the embedding generation? The vector search? The LLM synthesis? Nobody could tell quickly.</span><br />
<br />
<span>Distributed systems are inherently opaque. Each service logs its own view of the world, but correlating events across service boundaries is archaeology. Grepping through logs on many pods, trying to mentally reconstruct what happened—not fun. That made the perfect hackathon project: exploring an observability stack in greater depth.</span><br />
<br />
<h2 style='display: inline' id='the-observability-stack'>The observability stack</h2><br />
<br />
<span>Before diving into implementation, here&#39;s what I deployed. The complete stack runs in the monitoring namespace:</span><br />
<br />
<pre>
$ kubectl get pods -n monitoring
NAME                                  READY   STATUS
alloy-84ddf4cd8c-7phjp                1/1     Running
grafana-6fcc89b4d6-pnh8l              1/1     Running
kube-state-metrics-5d954c569f-2r45n   1/1     Running
loki-8c9bbf744-sc2p5                  1/1     Running
node-exporter-kb8zz                   1/1     Running
node-exporter-zcrdz                   1/1     Running
node-exporter-zmskc                   1/1     Running
prometheus-7f755f675-dqcht            1/1     Running
tempo-55df7dbcdd-t8fg9                1/1     Running
</pre>
<br />
<span>Each component has a specific role:</span><br />
<br />
<ul>
<li><span class='inlinecode'>Grafana Alloy</span>: The unified collector. Receives OTLP from applications, scrapes Prometheus endpoints, tails log files. Think of it as the central nervous system.</li>
<li><span class='inlinecode'>Prometheus</span>: Time-series database for metrics. Stores counters, gauges, and histograms with 15-day retention.</li>
<li><span class='inlinecode'>Tempo</span>: Trace storage. Receives spans via OTLP, correlates them by trace ID, enables TraceQL queries.</li>
<li><span class='inlinecode'>Loki</span>: Log aggregation. Indexes labels (namespace, pod, container), stores log chunks, enables LogQL queries.</li>
<li><span class='inlinecode'>Grafana</span>: The unified UI. Queries all three backends, correlates signals, displays dashboards.</li>
<li><span class='inlinecode'>kube-state-metrics</span>: Exposes Kubernetes object metrics (pod status, deployments, resource requests).</li>
<li><span class='inlinecode'>node-exporter</span>: Exposes host-level metrics (CPU, memory, disk, network) from each Kubernetes node.</li>
</ul><br />
<span>Everything is accessible via port-forwards:</span><br />
<br />
<ul>
<li>Grafana: http://localhost:3000 (unified UI for all three signals)</li>
<li>Prometheus: http://localhost:9090 (metrics queries)</li>
<li>Tempo: http://localhost:3200 (trace queries)</li>
<li>Loki: http://localhost:3100 (log queries)</li>
</ul><br />
<h2 style='display: inline' id='grafana-alloy-the-unified-collector'>Grafana Alloy: the unified collector</h2><br />
<br />
<span>Before diving into the individual signals, I want to highlight Grafana Alloy—the component that ties everything together. Alloy is Grafana&#39;s vendor-neutral OpenTelemetry Collector distribution, and it became the backbone of the observability stack.</span><br />
<br />
<a class='textlink' href='https://grafana.com/docs/alloy/latest/'>Grafana Alloy documentation</a><br />
<br />
<span>Why use a centralised collector instead of having each service push directly to backends?</span><br />
<br />
<ul>
<li><span class='inlinecode'>Decoupling</span>: Applications don&#39;t need to know about Prometheus, Tempo, or Loki. They speak OTLP, and Alloy handles the translation.</li>
<li><span class='inlinecode'>Unified timestamps</span>: All telemetry flows through one system, making correlation in Grafana more reliable.</li>
<li><span class='inlinecode'>Processing pipeline</span>: Batch data before sending, filter noisy metrics, enrich with labels—all in one place.</li>
<li><span class='inlinecode'>Backend flexibility</span>: Switch from Tempo to Jaeger without changing application code.</li>
</ul><br />
<span>Alloy uses a configuration language called River, which feels similar to Terraform&#39;s HCL—declarative blocks with attributes. If you&#39;ve written Terraform, River will look familiar. The full Alloy configuration runs to over 1400 lines with comments explaining each section. It handles OTLP receiving, batch processing, Prometheus export, Tempo export, Kubernetes metrics scraping, infrastructure metrics, and pod log collection. All three signals—metrics, traces, logs—flow through this single component, making Alloy the central nervous system of the observability stack.</span><br />
<br />
<span>In the following sections, I&#39;ll cover each observability pillar and show the relevant Alloy configuration for each.</span><br />
<br />
<h2 style='display: inline' id='centralised-logging-with-loki'>Centralised logging with Loki</h2><br />
<br />
<span>Getting all logs in one place was the foundation. I deployed Grafana Loki in the monitoring namespace, with Grafana Alloy running as a DaemonSet on each node to collect logs.</span><br />
<br />
<pre>
┌──────────────────────────────────────────────────────────────────────┐
│                           LOGS PIPELINE                              │
├──────────────────────────────────────────────────────────────────────┤
│  Applications write to stdout → containerd stores in /var/log/pods   │
│                                    │                                 │
│                              File tail                               │
│                                    ▼                                 │
│                         Grafana Alloy (DaemonSet)                    │
│                    Discovers pods, extracts metadata                 │
│                                    │                                 │
│                       HTTP POST /loki/api/v1/push                    │
│                                    ▼                                 │
│                           Grafana Loki                               │
│                   Indexes labels, stores chunks                      │
└──────────────────────────────────────────────────────────────────────┘
</pre>
<br />
<h3 style='display: inline' id='alloy-configuration-for-logs'>Alloy configuration for logs</h3><br />
<br />
<span>Alloy discovers pods via the Kubernetes API, tails their log files from /var/log/pods/, and ships to Loki. Importantly, Alloy runs as a DaemonSet on each worker node—it doesn&#39;t run inside the application pods. Since containerd writes all container stdout/stderr to /var/log/pods/ on the node&#39;s filesystem, Alloy can tail logs for every pod on that node from a single location without any sidecar injection:</span><br />
<br />
<pre>
loki.source.kubernetes "pod_logs" {
  targets    = discovery.relabel.pod_logs.output
  forward_to = [loki.process.pod_logs.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push"
  }
}
</pre>
<br />
<h3 style='display: inline' id='querying-logs-with-logql'>Querying logs with LogQL</h3><br />
<br />
<span>Now I could query logs in Loki (e.g. via Grafana UI) with LogQL:</span><br />
<br />
<pre>
{namespace="rag-system", container="search-ui"} |= "ERROR"
</pre>
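<br />
<span>The same query works against Loki&#39;s HTTP API directly, which is handy for scripting. A stdlib-only sketch that builds the request URL—the endpoint and parameters follow Loki&#39;s documented query_range API, but the helper itself is mine, not part of the X-RAG repo:</span><br />
<br />
<pre>
# Build a Loki query_range URL with the stdlib only. Loki expects
# nanosecond Unix timestamps for start/end.
from urllib.parse import urlencode

def loki_query_url(base, logql, start_ns, end_ns, limit=100):
    params = urlencode({"query": logql, "start": start_ns,
                        "end": end_ns, "limit": limit})
    return base + "/loki/api/v1/query_range?" + params

url = loki_query_url(
    "http://localhost:3100",
    "{namespace=\"rag-system\", container=\"search-ui\"} |= \"ERROR\"",
    0, 1_700_000_000_000_000_000,
)
# Fetch with urllib.request.urlopen(url) once the port-forward is up.
</pre>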
<br />
<h2 style='display: inline' id='metrics-with-prometheus'>Metrics with Prometheus</h2><br />
<br />
<span>I added Prometheus metrics to every service. Following the Four Golden Signals (latency, traffic, errors, saturation), I instrumented the codebase with histograms, counters, and gauges:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">from</font></u></b> prometheus_client <b><u><font color="#000000">import</font></u></b> Histogram, Counter, Gauge

search_duration = Histogram(
    <font color="#808080">"search_service_request_duration_seconds"</font>,
    <font color="#808080">"Total duration of Search Service requests"</font>,
    [<font color="#808080">"method"</font>],
    buckets=[<font color="#000000">0.1</font>, <font color="#000000">0.25</font>, <font color="#000000">0.5</font>, <font color="#000000">1.0</font>, <font color="#000000">2.5</font>, <font color="#000000">5.0</font>, <font color="#000000">10.0</font>, <font color="#000000">20.0</font>, <font color="#000000">30.0</font>, <font color="#000000">60.0</font>],
)

errors_total = Counter(
    <font color="#808080">"search_service_errors_total"</font>,
    <font color="#808080">"Error count by type"</font>,
    [<font color="#808080">"method"</font>, <font color="#808080">"error_type"</font>],
)
</pre>
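<br />
<span>It helps to know what a histogram metric actually stores: cumulative bucket counters, a running sum, and a count. A toy stdlib sketch of the bucketing logic—for intuition only, prometheus_client handles all of this (plus labels, thread safety, and exposition) for you:</span><br />
<br />
<pre>
# Toy model of a Prometheus histogram: bucket counters, sum, count.
import bisect

class ToyHistogram:
    def __init__(self, buckets):
        self.bounds = sorted(buckets)           # upper bounds (le), e.g. 0.1, 0.25
        self.counts = [0] * (len(buckets) + 1)  # +1 for the implicit +Inf bucket
        self.total = 0.0
        self.count = 0

    def observe(self, value):
        # First bucket whose upper bound is greater than or equal to value.
        i = bisect.bisect_left(self.bounds, value)
        self.counts[i] += 1
        self.total += value
        self.count += 1

    def cumulative(self):
        # Prometheus exposes buckets cumulatively: each le includes smaller ones.
        out, running = [], 0
        for bound, c in zip(self.bounds + [float("inf")], self.counts):
            running += c
            out.append((bound, running))
        return out
</pre>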
<br />
<span>Initially, I used Prometheus scraping—each service exposed a /metrics endpoint, and Prometheus pulled metrics every 15 seconds. This worked, but I wanted a unified pipeline.</span><br />
<br />
<h3 style='display: inline' id='alloy-configuration-for-application-metrics'>Alloy configuration for application metrics</h3><br />
<br />
<span>The breakthrough came with Grafana Alloy as an OpenTelemetry collector. Services now push metrics via OTLP (OpenTelemetry Protocol), and Alloy converts them to Prometheus format:</span><br />
<br />
<pre>
┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐
│ search-ui   │  │search-svc   │  │embed-svc    │  │  indexer    │
│ OTel Meter  │  │ OTel Meter  │  │ OTel Meter  │  │ OTel Meter  │
│      │      │  │      │      │  │      │      │  │      │      │
│ OTLPExporter│  │ OTLPExporter│  │ OTLPExporter│  │ OTLPExporter│
└──────┬──────┘  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘
       │                │                │                │
       └────────────────┴────────────────┴────────────────┘
                                 │
                                 ▼ OTLP/gRPC (port 4317)
                        ┌─────────────────────┐
                        │   Grafana Alloy     │
                        └──────────┬──────────┘
                                   │ prometheus.remote_write
                                   ▼
                        ┌─────────────────────┐
                        │    Prometheus       │
                        └─────────────────────┘
</pre>
<br />
<span>Alloy receives OTLP on ports 4317 (gRPC) or 4318 (HTTP), batches the data for efficiency, and exports to Prometheus:</span><br />
<br />
<pre>
otelcol.receiver.otlp "default" {
  grpc { endpoint = "0.0.0.0:4317" }
  http { endpoint = "0.0.0.0:4318" }
  output {
    metrics = [otelcol.processor.batch.metrics.input]
    traces  = [otelcol.processor.batch.traces.input]
  }
}

otelcol.processor.batch "metrics" {
  timeout = "5s"
  send_batch_size = 1000
  output { metrics = [otelcol.exporter.prometheus.default.input] }
}

otelcol.exporter.prometheus "default" {
  forward_to = [prometheus.remote_write.prom.receiver]
}
</pre>
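<br />
<span>The <span class='inlinecode'>prometheus.remote_write.prom.receiver</span> referenced above is defined once elsewhere in the config. A minimal sketch, assuming the in-cluster service name and that Prometheus runs with its remote-write receiver enabled:</span><br />
<br />
<pre>
prometheus.remote_write "prom" {
  endpoint {
    url = "http://prometheus.monitoring.svc.cluster.local:9090/api/v1/write"
  }
}
</pre>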
<br />
<span>Instead of sending each metric individually, Alloy accumulates up to 1000 metrics (or waits 5 seconds) before flushing. This reduces network overhead and protects backends from being overwhelmed.</span><br />
<br />
<h3 style='display: inline' id='kubernetes-metrics-kubelet-cadvisor-and-kube-state-metrics'>Kubernetes metrics: kubelet, cAdvisor, and kube-state-metrics</h3><br />
<br />
<span>Alloy also pulls metrics from Kubernetes itself—kubelet resource metrics, cAdvisor container metrics, and kube-state-metrics for cluster state.</span><br />
<br />
<span>Why three separate sources? It does feel fragmented, but each serves a distinct purpose. <span class='inlinecode'>kubelet</span> exposes resource metrics about pod CPU and memory usage from its own bookkeeping—lightweight summaries of what&#39;s running on each node. <span class='inlinecode'>cAdvisor</span> (Container Advisor) runs inside kubelet and provides detailed container-level metrics: CPU throttling, memory working sets, filesystem I/O, network bytes. These are the raw runtime stats from containerd. <span class='inlinecode'>kube-state-metrics</span> is different—it doesn&#39;t measure resource usage at all. Instead, it queries the Kubernetes API and exposes the "desired state": how many replicas a Deployment wants, whether a Pod is pending or running, what resource requests and limits are configured. You need all three because "container used 500MB" (cAdvisor), "pod requested 1GB" (kube-state-metrics), and "node has 4GB available" (kubelet) are complementary views. The fragmentation is a consequence of Kubernetes&#39; architecture—no single component has the complete picture.</span><br />
<br />
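<span>In Grafana, the three sources combine naturally in a single PromQL query. For example, actual memory use (cAdvisor) as a fraction of requested memory (kube-state-metrics)—a sketch assuming the standard metric names:</span><br />
<br />
<pre>
sum by (pod) (container_memory_working_set_bytes{namespace="rag-system"})
  /
sum by (pod) (kube_pod_container_resource_requests{namespace="rag-system", resource="memory"})
</pre>
<br />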
<span>None of these components speak OpenTelemetry—they all expose Prometheus-format metrics via HTTP endpoints. That&#39;s why Alloy uses <span class='inlinecode'>prometheus.scrape</span> instead of receiving OTLP pushes. Alloy handles both worlds: OTLP from our applications, Prometheus scraping for infrastructure.</span><br />
<br />
<pre>
prometheus.scrape "kubelet_resource" {
  targets         = discovery.relabel.kubelet.output
  job_name        = "kubelet-resource"
  scheme          = "https"
  scrape_interval = "30s"
  bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  tls_config { insecure_skip_verify = true }
  forward_to      = [prometheus.remote_write.prom.receiver]
}

prometheus.scrape "cadvisor" {
  targets         = discovery.relabel.cadvisor.output
  job_name        = "cadvisor"
  scheme          = "https"
  scrape_interval = "60s"
  bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
  tls_config { insecure_skip_verify = true }
  forward_to      = [prometheus.relabel.cadvisor_filter.receiver]
}

prometheus.scrape "kube_state_metrics" {
  targets = [
    {"__address__" = "kube-state-metrics.monitoring.svc.cluster.local:8080"},
  ]
  job_name        = "kube-state-metrics"
  scrape_interval = "30s"
  forward_to      = [prometheus.relabel.kube_state_filter.receiver]
}
</pre>
<br />
<span>Note that <span class='inlinecode'>kubelet</span> and <span class='inlinecode'>cAdvisor</span> require HTTPS with bearer token authentication (using the service account token mounted by Kubernetes), while <span class='inlinecode'>kube-state-metrics</span> is a simple HTTP target. <span class='inlinecode'>cAdvisor</span> is scraped less frequently (60s) because it returns many more metrics with higher cardinality.</span><br />
<br />
<h3 style='display: inline' id='infrastructure-metrics-kafka-redis-minio'>Infrastructure metrics: Kafka, Redis, MinIO</h3><br />
<br />
<span>Application metrics weren&#39;t enough. I also needed visibility into the data layer. Each infrastructure component has a specific role in X-RAG and got its own exporter:</span><br />
<br />
<span><span class='inlinecode'>Redis</span> is the caching layer. It stores search results and embeddings to avoid redundant API calls to OpenAI. We collect 25 metrics via oliver006/redis_exporter running as a sidecar, including cache hit/miss rates, memory usage, connected clients, and command latencies. The key metric? <span class='inlinecode'>redis_keyspace_hits_total / (redis_keyspace_hits_total + redis_keyspace_misses_total)</span> tells you if caching is actually helping.</span><br />
<br />
<span><span class='inlinecode'>Kafka</span> is the message queue connecting the ingestion API to the indexer. Documents are published to a topic, and the indexer consumes them asynchronously. We collect 12 metrics via danielqsj/kafka-exporter, with consumer lag being the most critical—it shows how far behind the indexer is. High lag means documents aren&#39;t being indexed fast enough.</span><br />
<br />
<span><span class='inlinecode'>MinIO</span> is the S3-compatible object storage where raw documents are stored before processing. We collect 16 metrics from its native /minio/v2/metrics/cluster endpoint, covering request rates, error counts, storage usage, and cluster health.</span><br />
<br />
<span>You can verify these counts by querying Prometheus directly:</span><br />
<br />
<pre>
$ curl -s &#39;http://localhost:9090/api/v1/label/__name__/values&#39; \
    | jq -r &#39;.data[]&#39; | grep -c &#39;^redis_&#39;
25
$ curl -s &#39;http://localhost:9090/api/v1/label/__name__/values&#39; \
    | jq -r &#39;.data[]&#39; | grep -c &#39;^kafka_&#39;
12
$ curl -s &#39;http://localhost:9090/api/v1/label/__name__/values&#39; \
    | jq -r &#39;.data[]&#39; | grep -c &#39;^minio_&#39;
16
</pre>
<br />
<a class='textlink' href='https://github.com/florianbuetow/x-rag/blob/main/infra/k8s/monitoring/alloy-config.yaml'>Full Alloy configuration with detailed metric filtering</a><br />
<br />
<span>Alloy scrapes all of these and remote-writes to Prometheus:</span><br />
<br />
<pre>
prometheus.scrape "redis_exporter" {
  targets = [
    {"__address__" = "xrag-redis.rag-system.svc.cluster.local:9121"},
  ]
  job_name        = "redis"
  scrape_interval = "30s"
  forward_to      = [prometheus.relabel.redis_filter.receiver]
}

prometheus.scrape "kafka_exporter" {
  targets = [
    {"__address__" = "kafka-exporter.rag-system.svc.cluster.local:9308"},
  ]
  job_name        = "kafka"
  scrape_interval = "30s"
  forward_to      = [prometheus.relabel.kafka_filter.receiver]
}

prometheus.scrape "minio" {
  targets = [
    {"__address__" = "xrag-minio.rag-system.svc.cluster.local:9000"},
  ]
  job_name     = "minio"
  metrics_path = "/minio/v2/metrics/cluster"
  scrape_interval = "30s"
  forward_to   = [prometheus.relabel.minio_filter.receiver]
}
</pre>
<br />
<span>Note that MinIO exposes metrics at a custom path (<span class='inlinecode'>/minio/v2/metrics/cluster</span>) rather than the default <span class='inlinecode'>/metrics</span>. Each exporter forwards to a relabel component that filters down to essential metrics before sending to Prometheus.</span><br />
<br />
<span>With all metrics in Prometheus, I can use PromQL queries in Grafana dashboards. For example, to check Kafka consumer lag and see if the indexer is falling behind:</span><br />
<br />
<pre>
sum by (consumergroup, topic) (kafka_consumergroup_lag)
</pre>
<br />
<span>Or check Redis cache effectiveness:</span><br />
<br />
<pre>
redis_keyspace_hits_total / (redis_keyspace_hits_total + redis_keyspace_misses_total)
</pre>
<br />
<h2 style='display: inline' id='distributed-tracing-with-tempo'>Distributed tracing with Tempo</h2><br />
<br />
<h3 style='display: inline' id='understanding-traces-spans-and-the-trace-tree'>Understanding traces, spans, and the trace tree</h3><br />
<br />
<span>Before diving into the implementation, let me explain the core concepts I learned. A <span class='inlinecode'>trace</span> represents a single request&#39;s journey through the entire distributed system. Think of it as a receipt that follows your request from the moment it enters the system until the final response.</span><br />
<br />
<span>Each trace is identified by a <span class='inlinecode'>trace ID</span>—a 128-bit identifier (32 hex characters) that stays constant across all services. When I make a search request, every service handling that request uses the same trace ID: <span class='inlinecode'>9df981cac91857b228eca42b501c98c6</span>.</span><br />
<br />
<a class='textlink' href='https://www.youtube.com/watch?v=KPGjqus5qFo'>Quick video explaining the difference between trace IDs and span IDs in OpenTelemetry</a><br />
<br />
<span>Within a trace, individual operations are recorded as <span class='inlinecode'>spans</span>. A span has:</span><br />
<br />
<ul>
<li>A <span class='inlinecode'>span ID</span>: 64-bit identifier (16 hex characters) unique to this operation</li>
<li>A <span class='inlinecode'>parent span ID</span>: links this span to its caller</li>
<li>A <span class='inlinecode'>name</span>: what operation this represents (e.g., "POST /api/search")</li>
<li><span class='inlinecode'>Start time</span> and <span class='inlinecode'>duration</span></li>
<li><span class='inlinecode'>Attributes</span>: key-value metadata (e.g., <span class='inlinecode'>http.status_code=200</span>)</li>
</ul><br />
<span>The first span in a trace is the <span class='inlinecode'>root span</span>—it has no parent. When the root span calls another service, that service creates a <span class='inlinecode'>child span</span> with the root&#39;s span ID as its parent. This parent-child relationship forms a <span class='inlinecode'>tree structure</span>:</span><br />
<br />
<pre>
                        ┌─────────────────────────┐
                        │      Root Span          │
                        │  POST /api/search       │
                        │  span_id: a1b2c3d4...   │
                        │  parent: (none)         │
                        └───────────┬─────────────┘
                                    │
              ┌─────────────────────┴─────────────────────┐
              │                                           │
              ▼                                           ▼
┌─────────────────────────┐             ┌─────────────────────────┐
│      Child Span         │             │      Child Span         │
│  gRPC Search            │             │  render_template        │
│  span_id: e5f6a7b8...   │             │  span_id: c9d0e1f2...   │
│  parent: a1b2c3d4...    │             │  parent: a1b2c3d4...    │
└───────────┬─────────────┘             └─────────────────────────┘
            │
            ├──────────────────┬──────────────────┐
            ▼                  ▼                  ▼
     ┌────────────┐     ┌────────────┐     ┌────────────┐
     │ Grandchild │     │ Grandchild │     │ Grandchild │
     │ embedding  │     │ vector     │     │ llm.rag    │
     │ .generate  │     │ _search    │     │ _completion│
     └────────────┘     └────────────┘     └────────────┘
</pre>
<br />
<span>This tree structure answers the critical question: "What called what?" When I see a slow span, I can trace up to see what triggered it and down to see what it&#39;s waiting on.</span><br />
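<br />
<span>Reconstructing that tree from a flat list of spans takes only a few lines. A sketch with made-up span data—not X-RAG&#39;s actual tooling:</span><br />
<br />
<pre>
# Rebuild the parent/child tree from a flat span list and print it
# indented. The span data here is invented for illustration.
spans = [
    {"id": "a1", "parent": None, "name": "POST /api/search"},
    {"id": "e5", "parent": "a1", "name": "gRPC Search"},
    {"id": "i9", "parent": "a1", "name": "render_template"},
    {"id": "b2", "parent": "e5", "name": "embedding.generate"},
]

children = {}
for s in spans:
    children.setdefault(s["parent"], []).append(s)

def print_tree(parent_id=None, depth=0):
    for s in children.get(parent_id, []):
        print("  " * depth + s["name"])
        print_tree(s["id"], depth + 1)

print_tree()  # root first, children indented beneath their parents
</pre>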
<br />
<h3 style='display: inline' id='how-trace-context-propagates'>How trace context propagates</h3><br />
<br />
<span>The magic that links spans across services is <span class='inlinecode'>trace context propagation</span>. When Service A calls Service B, it must pass along the trace ID and its own span ID (which becomes the parent). OpenTelemetry uses the W3C <span class='inlinecode'>traceparent</span> header:</span><br />
<br />
<pre>
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
             │   │                                │                 │
             │   │                                │                 └── flags
             │   │                                └── parent span ID (16 hex)
             │   └── trace ID (32 hex)
             └── version
</pre>
<br />
<span>For HTTP, this travels as a request header. For gRPC, it&#39;s passed as metadata. For Kafka, it&#39;s embedded in message headers. The receiving service extracts this context, creates a new span with the propagated trace ID and the caller&#39;s span ID as parent, then continues the chain.</span><br />
<br />
<span>This is why all my spans link together—OpenTelemetry&#39;s auto-instrumentation handles propagation automatically for HTTP, gRPC, and Kafka clients.</span><br />
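<br />
<span>Parsing the header itself is simple. A stdlib sketch—the real extraction lives in OpenTelemetry&#39;s propagators, not in code you&#39;d write yourself:</span><br />
<br />
<pre>
# Split a W3C traceparent header into its four fields and sanity-check
# the lengths. Illustration only; OpenTelemetry additionally validates
# the flags and rejects all-zero trace and span IDs.
def parse_traceparent(header):
    version, trace_id, parent_span_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(parent_span_id) == 16
    int(trace_id, 16)         # raises ValueError if not hex
    int(parent_span_id, 16)
    return {"version": version, "trace_id": trace_id,
            "parent_span_id": parent_span_id, "flags": flags}

ctx = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
</pre>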
<br />
<h3 style='display: inline' id='implementation'>Implementation</h3><br />
<br />
<span>This is where distributed tracing made the difference. I integrated OpenTelemetry auto-instrumentation for FastAPI, gRPC, and HTTP clients, plus manual spans for RAG-specific operations:</span><br />
<br />
<pre><b><u><font color="#000000">from</font></u></b> opentelemetry.instrumentation.fastapi <b><u><font color="#000000">import</font></u></b> FastAPIInstrumentor
<b><u><font color="#000000">from</font></u></b> opentelemetry.instrumentation.grpc <b><u><font color="#000000">import</font></u></b> GrpcAioInstrumentorClient

<i><font color="silver"># Auto-instrument frameworks</font></i>
FastAPIInstrumentor.instrument_app(app)
GrpcAioInstrumentorClient().instrument()

<i><font color="silver"># Manual spans for custom operations</font></i>
with tracer.start_as_current_span(<font color="#808080">"llm.rag_completion"</font>) as span:
    span.set_attribute(<font color="#808080">"llm.model"</font>, model_name)
    result = <b><u><font color="#000000">await</font></u></b> generate_answer(query, context)
</pre>
<br />
<span><span class='inlinecode'>Auto-instrumentation</span> is the quick win: one line of code and you get spans for every HTTP request, gRPC call, or database query. The instrumentor patches the framework at runtime, so existing code works without modification. The downside? You only get what the library authors decided to capture—generic HTTP attributes like <span class='inlinecode'>http.method</span> and <span class='inlinecode'>http.status_code</span>, but nothing domain-specific. Auto-instrumented spans also can&#39;t know your business logic, so a slow request shows up as "POST /api/search took 5 seconds" without revealing which internal operation caused the delay.</span><br />
<br />
<span><span class='inlinecode'>Manual spans</span> fill that gap. By wrapping specific operations (like <span class='inlinecode'>llm.rag_completion</span> or <span class='inlinecode'>vector_search.query</span>), you get visibility into your application&#39;s unique behaviour. You can add custom attributes (<span class='inlinecode'>llm.model</span>, <span class='inlinecode'>query.top_k</span>, <span class='inlinecode'>cache.hit</span>) that make traces actually useful for debugging. The downside is maintenance: manual spans are code you write and maintain, and you need to decide where instrumentation adds value versus where it just adds noise. In practice, I found the right balance was auto-instrumentation for framework boundaries (HTTP, gRPC) plus manual spans for the 5-10 operations that actually matter for understanding performance.</span><br />
<br />
<span>The magic is trace context propagation. When the Search UI calls the Search Service via gRPC, the trace ID travels in metadata headers:</span><br />
<br />
<pre>
Metadata: [
  ("traceparent", "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"),
  ("content-type", "application/grpc"),
]
</pre>
<br />
<span>Spans from all services are linked by this trace ID, forming a tree:</span><br />
<br />
<pre>
Trace ID: 0af7651916cd43dd8448eb211c80319c

├─ [search-ui] POST /api/search (300ms)
│   │
│   ├─ [search-service] Search (gRPC server) (275ms)
│   │   │
│   │   ├─ [search-service] embedding.generate (50ms)
│   │   │   └─ [embedding-service] Embed (45ms)
│   │   │       └─ POST https://api.openai.com (35ms)
│   │   │
│   │   ├─ [search-service] vector_search.query (100ms)
│   │   │
│   │   └─ [search-service] llm.rag_completion (120ms)
│   │       └─ openai.chat (115ms)
</pre>
<br />
<h3 style='display: inline' id='alloy-configuration-for-traces'>Alloy configuration for traces</h3><br />
<br />
<span>Traces are collected by Alloy and stored in Grafana Tempo. Alloy batches traces for efficiency before exporting via OTLP:</span><br />
<br />
<pre>
otelcol.processor.batch "traces" {
  timeout = "5s"
  send_batch_size = 500
  output { traces = [otelcol.exporter.otlp.tempo.input] }
}

otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo.monitoring.svc.cluster.local:4317"
    tls { insecure = true }
  }
}
</pre>
<br />
<span>In Grafana&#39;s trace view, backed by Tempo, I can finally see exactly where time is spent. That 5-second query? Turns out the vector search was waiting on a cold Weaviate connection. Now I knew what to fix.</span><br />
<br />
<h2 style='display: inline' id='async-ingestion-trace-walkthrough'>Async ingestion trace walkthrough</h2><br />
<br />
<span>One of the most powerful aspects of distributed tracing is following requests across async boundaries like message queues. The document ingestion pipeline flows through Kafka, creating spans that are linked even though they execute in different processes at different times.</span><br />
<br />
<h3 style='display: inline' id='step-1-ingest-a-document'>Step 1: Ingest a document</h3><br />
<br />
<pre>
$ curl -s -X POST http://localhost:8082/ingest \
  -H "Content-Type: application/json" \
  -d &#39;{
    "text": "This is the X-RAG Observability Guide...",
    "metadata": {
      "title": "X-RAG Observability Guide",
      "source_file": "docs/OBSERVABILITY.md",
      "type": "markdown"
    },
    "namespace": "default"
  }&#39; | jq .
{
  "document_id": "8538656a-ba99-406c-8da7-87c5f0dda34d",
  "status": "accepted",
  "minio_bucket": "documents",
  "minio_key": "8538656a-ba99-406c-8da7-87c5f0dda34d.json",
  "message": "Document accepted for processing"
}
</pre>
<br />
<span>The ingestion API immediately returns—it doesn&#39;t wait for indexing. The document is stored in MinIO and a message is published to Kafka.</span><br />
<br />
<h3 style='display: inline' id='step-2-find-the-ingestion-trace'>Step 2: Find the ingestion trace</h3><br />
<br />
<span>Using Tempo&#39;s HTTP API (port 3200), we can search for traces by span name using TraceQL:</span><br />
<br />
<pre>
$ curl -s -G "http://localhost:3200/api/search" \
  --data-urlencode &#39;q={name="POST /ingest"}&#39; \
  --data-urlencode &#39;limit=3&#39; | jq &#39;.traces[0].traceID&#39;
"b3fc896a1cf32b425b8e8c46c86c76f7"
</pre>
<br />
<h3 style='display: inline' id='step-3-fetch-the-complete-trace'>Step 3: Fetch the complete trace</h3><br />
<br />
<pre>
$ curl -s "http://localhost:3200/api/traces/b3fc896a1cf32b425b8e8c46c86c76f7" \
  | jq &#39;[.batches[] | ... | {service, span}] | unique&#39;
[
  { "service": "ingestion-api", "span": "POST /ingest" },
  { "service": "ingestion-api", "span": "storage.upload" },
  { "service": "ingestion-api", "span": "messaging.publish" },
  { "service": "indexer", "span": "indexer.process_document" },
  { "service": "indexer", "span": "document.duplicate_check" },
  { "service": "indexer", "span": "document.pipeline" },
  { "service": "indexer", "span": "storage.download" },
  { "service": "indexer", "span": "/xrag.embedding.EmbeddingService/EmbedBatch" },
  { "service": "embedding-service", "span": "openai.embeddings" },
  { "service": "indexer", "span": "db.insert" }
]
</pre>
<br />
<span>The trace spans <span class='inlinecode'>three services</span>: ingestion-api, indexer, and embedding-service. The trace context propagates through Kafka, linking the original HTTP request to the async consumer processing.</span><br />
<br />
<h3 style='display: inline' id='step-4-analyse-the-async-trace'>Step 4: Analyse the async trace</h3><br />
<br />
<pre>
ingestion-api | POST /ingest             |   16ms  ← HTTP response returns
ingestion-api | storage.upload           |   13ms  ← Save to MinIO
ingestion-api | messaging.publish        |    1ms  ← Publish to Kafka
              |                          |         
              | ~~~ Kafka queue ~~~      |         ← Async boundary
              |                          |         
indexer       | indexer.process_document | 1799ms  ← Consumer picks up message
indexer       | document.duplicate_check |    1ms
indexer       | document.pipeline        | 1796ms
indexer       | storage.download         |    1ms  ← Fetch from MinIO
indexer       | EmbedBatch (gRPC)        |  754ms  ← Call embedding service
embedding-svc | openai.embeddings        |  752ms  ← OpenAI API
indexer       | db.insert                | 1038ms  ← Store in Weaviate
</pre>
<br />
<span>The total async processing takes ~1.8 seconds, but the user sees a 16ms response. Without tracing, debugging "why isn&#39;t my document showing up in search results?" would require correlating logs from three services manually.</span><br />
<br />
<span><span class='inlinecode'>Key insight</span>: The trace context propagates through Kafka message headers, allowing the indexer&#39;s spans to link back to the original ingestion request. This is configured via OpenTelemetry&#39;s Kafka instrumentation.</span><br />
<br />
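<span>Conceptually, the propagation works by serialising the trace context into a W3C <span class='inlinecode'>traceparent</span> header on the Kafka message. Here is a minimal Python sketch of what OpenTelemetry&#39;s instrumentation automates under the hood (the function names are illustrative, not X-RAG&#39;s actual code):</span><br />
<br />
```python
# Sketch of W3C trace context propagation through Kafka message headers,
# roughly what OpenTelemetry's Kafka instrumentation does automatically.
# Header format: "00-{trace id, 32 hex}-{span id, 16 hex}-{flags}".

def inject_traceparent(headers, trace_id, span_id, sampled=True):
    """Producer side: attach the current trace context to the message."""
    flags = "01" if sampled else "00"
    value = f"00-{trace_id:032x}-{span_id:016x}-{flags}"
    headers.append(("traceparent", value.encode("utf-8")))

def extract_traceparent(headers):
    """Consumer side: recover the parent context; the consumer's spans
    reuse this trace_id, linking them to the original HTTP request."""
    for key, value in headers:
        if key == "traceparent":
            _version, trace_id, span_id, flags = value.decode("utf-8").split("-")
            return int(trace_id, 16), int(span_id, 16), flags == "01"
    return None  # no context present: the consumer starts a new root trace

headers = []
inject_traceparent(headers, 0xB3FC896A1CF32B425B8E8C46C86C76F7, 0x1234567890ABCDEF)
trace_id, span_id, sampled = extract_traceparent(headers)
```
<br />
<span>This is why the indexer&#39;s <span class='inlinecode'>indexer.process_document</span> span shows up under the same trace ID as the original <span class='inlinecode'>POST /ingest</span> request.</span><br />
<br />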
<h3 style='display: inline' id='viewing-traces-in-grafana'>Viewing traces in Grafana</h3><br />
<br />
<span>To view a trace in Grafana&#39;s UI:</span><br />
<br />
<span>1. Open Grafana at http://localhost:3000/explore</span><br />
<span>2. Select <span class='inlinecode'>Tempo</span> as the data source (top-left dropdown)</span><br />
<span>3. Choose <span class='inlinecode'>TraceQL</span> as the query type</span><br />
<span>4. Paste the trace ID: <span class='inlinecode'>b3fc896a1cf32b425b8e8c46c86c76f7</span></span><br />
<span>5. Click <span class='inlinecode'>Run query</span></span><br />
<br />
<span>The trace viewer shows a Gantt chart with all spans, their timing, and parent-child relationships. Click any span to see its attributes.</span><br />
<br />
<a href='./x-rag-observability-hackathon/index-trace.png'><img alt='Async ingestion trace in Grafana Tempo' title='Async ingestion trace in Grafana Tempo' src='./x-rag-observability-hackathon/index-trace.png' /></a><br />
<br />
<a href='./x-rag-observability-hackathon/index-node-graph.png'><img alt='Ingestion trace node graph showing service dependencies' title='Ingestion trace node graph showing service dependencies' src='./x-rag-observability-hackathon/index-node-graph.png' /></a><br />
<br />
<h2 style='display: inline' id='end-to-end-search-trace-walkthrough'>End-to-end search trace walkthrough</h2><br />
<br />
<span>To demonstrate the observability stack in action, here&#39;s a complete trace from a search request through all services.</span><br />
<br />
<h3 style='display: inline' id='step-1-make-a-search-request'>Step 1: Make a search request</h3><br />
<br />
<span>Normally you&#39;d use the Search UI web interface at http://localhost:8080, but for demonstration purposes curl makes it easier to show the raw request and response:</span><br />
<br />
<pre>
$ curl -s -X POST http://localhost:8080/api/search \
  -H "Content-Type: application/json" \
  -d &#39;{"query": "What is RAG?", "namespace": "default", "mode": "hybrid", "top_k": 5}&#39; | jq .
{
  "answer": "I don&#39;t have enough information to answer this question.",
  "sources": [
    {
      "id": "71adbc34-56c1-4f75-9248-4ed38094ac69",
      "content": "# X-RAG Observability Guide This document describes...",
      "score": 0.8292956352233887,
      "metadata": {
        "source": "docs/OBSERVABILITY.md",
        "type": "markdown",
        "namespace": "default"
      }
    }
  ],
  "metadata": {
    "namespace": "default",
    "num_sources": "5",
    "cache_hit": "False",
    "mode": "hybrid",
    "top_k": "5",
    "trace_id": "9df981cac91857b228eca42b501c98c6"
  }
}
</pre>
<br />
<span>The response includes a <span class='inlinecode'>trace_id</span> that links this request to all spans across services.</span><br />
<br />
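<span>For reference, an OpenTelemetry trace ID is a 128-bit integer; the string in the response is that integer rendered as 32 lowercase hex characters. A small sketch of the formatting (in real code the ID would come from the active span, e.g. <span class='inlinecode'>trace.get_current_span().get_span_context().trace_id</span>):</span><br />
<br />
```python
# Sketch: exposing the current trace ID in response metadata.
# A 128-bit OpenTelemetry trace ID rendered as 32 lowercase hex
# characters matches the format Tempo's /api/traces/ endpoint expects.

def format_trace_id(trace_id):
    return f"{trace_id:032x}"

metadata = {
    "cache_hit": "False",
    # in a real handler the ID comes from the active OpenTelemetry span
    "trace_id": format_trace_id(0x9DF981CAC91857B228ECA42B501C98C6),
}
```
<br />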
<h3 style='display: inline' id='step-2-query-tempo-for-the-trace'>Step 2: Query Tempo for the trace</h3><br />
<br />
<span>Using the trace ID from the response, query Tempo&#39;s API:</span><br />
<br />
<pre>
$ curl -s "http://localhost:3200/api/traces/9df981cac91857b228eca42b501c98c6" \
  | jq &#39;.batches[].scopeSpans[].spans[] 
        | {name, service: .attributes[] 
           | select(.key=="service.name") 
           | .value.stringValue}&#39;
</pre>
<br />
<span>The raw trace shows spans from multiple services:</span><br />
<br />
<ul>
<li><span class='inlinecode'>search-ui</span>: <span class='inlinecode'>POST /api/search</span> (root span, 2138ms total)</li>
<li><span class='inlinecode'>search-ui</span>: <span class='inlinecode'>/xrag.search.SearchService/Search</span> (gRPC client call)</li>
<li><span class='inlinecode'>search-service</span>: <span class='inlinecode'>/xrag.search.SearchService/Search</span> (gRPC server)</li>
<li><span class='inlinecode'>search-service</span>: <span class='inlinecode'>/xrag.embedding.EmbeddingService/Embed</span> (gRPC client)</li>
<li><span class='inlinecode'>embedding-service</span>: <span class='inlinecode'>/xrag.embedding.EmbeddingService/Embed</span> (gRPC server)</li>
<li><span class='inlinecode'>embedding-service</span>: <span class='inlinecode'>openai.embeddings</span> (OpenAI API call, 647ms)</li>
<li><span class='inlinecode'>embedding-service</span>: <span class='inlinecode'>POST https://api.openai.com/v1/embeddings</span> (HTTP client)</li>
<li><span class='inlinecode'>search-service</span>: <span class='inlinecode'>vector_search.query</span> (Weaviate hybrid search, 13ms)</li>
<li><span class='inlinecode'>search-service</span>: <span class='inlinecode'>openai.chat</span> (LLM answer generation, 1468ms)</li>
<li><span class='inlinecode'>search-service</span>: <span class='inlinecode'>POST https://api.openai.com/v1/chat/completions</span> (HTTP client)</li>
</ul><br />
<h3 style='display: inline' id='step-3-analyse-the-trace'>Step 3: Analyse the trace</h3><br />
<br />
<span>From this single trace, I can see exactly where time is spent:</span><br />
<br />
<pre>
Total request:                     2138ms
├── gRPC to search-service:        2135ms
│   ├── Embedding generation:       649ms
│   │   └── OpenAI embeddings API:   640ms
│   ├── Vector search (Weaviate):    13ms
│   └── LLM answer generation:     1468ms
│       └── OpenAI chat API:       1463ms
</pre>
<br />
<span>The bottleneck is clear: <span class='inlinecode'>68% of time is spent in LLM answer generation</span>. The vector search (13ms) and embedding generation (649ms) are relatively fast. Without tracing, I would have guessed the embedding service was slow—traces proved otherwise.</span><br />
<br />
<h3 style='display: inline' id='step-4-search-traces-with-traceql'>Step 4: Search traces with TraceQL</h3><br />
<br />
<span>Tempo supports TraceQL for querying traces by attributes:</span><br />
<br />
<pre>
$ curl -s -G "http://localhost:3200/api/search" \
  --data-urlencode &#39;q={resource.service.name="search-service"}&#39; \
  --data-urlencode &#39;limit=5&#39; | jq &#39;.traces[:2] | .[].rootTraceName&#39;
"/xrag.search.SearchService/Search"
"GET /health/ready"
</pre>
<br />
<span>Other useful TraceQL queries:</span><br />
<br />
<pre>
# Find slow searches (&gt; 2 seconds)
{resource.service.name="search-ui" &amp;&amp; name="POST /api/search"} | duration &gt; 2s

# Find errors
{status=error}

# Find OpenAI calls
{name=~"openai.*"}
</pre>
<br />
<h3 style='display: inline' id='viewing-the-search-trace-in-grafana'>Viewing the search trace in Grafana</h3><br />
<br />
<span>Follow the same steps as above, but use the search trace ID: <span class='inlinecode'>9df981cac91857b228eca42b501c98c6</span></span><br />
<br />
<a href='./x-rag-observability-hackathon/search-trace.png'><img alt='Search trace in Grafana Tempo' title='Search trace in Grafana Tempo' src='./x-rag-observability-hackathon/search-trace.png' /></a><br />
<br />
<a href='./x-rag-observability-hackathon/search-node-graph.png'><img alt='Search trace node graph showing service flow' title='Search trace node graph showing service flow' src='./x-rag-observability-hackathon/search-node-graph.png' /></a><br />
<br />
<h2 style='display: inline' id='correlating-the-three-signals'>Correlating the three signals</h2><br />
<br />
<span>The real power comes from correlating traces, metrics, and logs. When an alert fires for high error rate, I follow this workflow:</span><br />
<br />
<span>1. Metrics: Prometheus shows error spike started at 10:23:00</span><br />
<span>2. Traces: Query Tempo for traces with status=error around that time</span><br />
<span>3. Logs: Use the trace ID to find detailed error messages in Loki</span><br />
<br />
<pre>
{namespace="rag-system"} |= "trace_id=abc123" |= "error"
</pre>
<br />
<span>Prometheus exemplars link specific metric samples to trace IDs, so I can click directly from a latency spike to the responsible trace.</span><br />
<br />
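<span>For illustration, an exemplar rides along with a histogram bucket sample in the OpenMetrics exposition format (the metric name and values below are made up):</span><br />
<br />
```
# A latency bucket sample with an attached exemplar: the part after
# the "#" links this particular observation to a specific trace ID.
http_request_duration_seconds_bucket{le="2.5"} 142 # {trace_id="9df981cac91857b228eca42b501c98c6"} 2.138 1767045600.0
```
<br />
<span>Note that exemplars are only exposed via the OpenMetrics content type, and Prometheus needs to be started with <span class='inlinecode'>--enable-feature=exemplar-storage</span> to store them.</span><br />
<br />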
<h2 style='display: inline' id='grafana-dashboards'>Grafana dashboards</h2><br />
<br />
<span>During the hackathon, I also built six Grafana dashboards that are automatically provisioned when the monitoring stack starts:</span><br />
<br />
<span>| Dashboard | Description |</span><br />
<span>|-----------|-------------|</span><br />
<span>| X-RAG Overview | The main dashboard with 22 panels covering request rates, latencies, error rates, and service health across all X-RAG components |</span><br />
<span>| OpenTelemetry HTTP Metrics | HTTP request/response metrics from OpenTelemetry-instrumented services—request rates, latency percentiles, and status code breakdowns |</span><br />
<span>| Pod System Metrics | Kubernetes pod resource utilisation: CPU usage, memory consumption, network I/O, disk I/O, and pod state from kube-state-metrics |</span><br />
<span>| Redis | Cache performance: memory usage, hit/miss rates, commands per second, connected clients, and memory fragmentation |</span><br />
<span>| Kafka | Message queue health: consumer lag (critical for indexer monitoring), broker status, topic partitions, and throughput |</span><br />
<span>| MinIO | Object storage metrics: S3 request rates, error counts, traffic volume, bucket sizes, and disk usage |</span><br />
<br />
<span>All dashboards are stored as JSON files in <span class='inlinecode'>infra/k8s/monitoring/grafana-dashboards/</span> and deployed via ConfigMaps, so they survive pod restarts and cluster recreations.</span><br />
<br />
<a href='./x-rag-observability-hackathon/dashboard-xrag-overview.png'><img alt='X-RAG Overview dashboard' title='X-RAG Overview dashboard' src='./x-rag-observability-hackathon/dashboard-xrag-overview.png' /></a><br />
<a href='./x-rag-observability-hackathon/dashboard-pod-system-metrics.png'><img alt='Pod System Metrics dashboard' title='Pod System Metrics dashboard' src='./x-rag-observability-hackathon/dashboard-pod-system-metrics.png' /></a><br />
<br />
<h2 style='display: inline' id='results-two-days-well-spent'>Results: two days well spent</h2><br />
<br />
<span>What did two days of hackathon work achieve? The system went from flying blind to fully instrumented:</span><br />
<br />
<ul>
<li>All three pillars implemented: logs (Loki), metrics (Prometheus), traces (Tempo)</li>
<li>Unified collection via Grafana Alloy</li>
<li>Infrastructure metrics for Kafka, Redis, and MinIO</li>
<li>Six pre-built Grafana dashboards covering application metrics, pod resources, and infrastructure</li>
<li>Trace context propagation across all gRPC calls</li>
</ul><br />
<span>The biggest insight from testing? The embedding service wasn&#39;t the bottleneck I assumed. Traces revealed that LLM synthesis dominated latency, not embedding generation. Without tracing, optimisation efforts would have targeted the wrong component.</span><br />
<br />
<span>Beyond the technical wins, I had a lot of fun. The hackathon brought together people working on different projects, and I got to know some really nice folks during the sessions themselves. There&#39;s something energising about being in a (virtual) room with other people all heads-down on their own challenges—even if you&#39;re not collaborating directly, the shared focus is motivating.</span><br />
<br />
<h2 style='display: inline' id='slis-slos-and-slas'>SLIs, SLOs and SLAs</h2><br />
<br />
<span>The system now has full observability, but there&#39;s always more. And to be clear: this is not production-grade yet. It works well for development and could scale to production, but that would need to be validated with proper load testing and chaos testing first. We haven&#39;t stress-tested the observability pipeline under heavy load, nor have we tested failure scenarios like Tempo going down or Alloy running out of memory. The Alloy config includes comments on sampling strategies and rate limiting that would be essential for high-traffic environments.</span><br />
<br />
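<span>As a sketch of what such sampling could look like (the 10% rate is an arbitrary example), Alloy&#39;s probabilistic sampler would slot in front of the batch processor:</span><br />
<br />
```
// Hypothetical head-based sampling: keep roughly 10% of traces.
// Placed before the batch processor in the pipeline.
otelcol.processor.probabilistic_sampler "default" {
  sampling_percentage = 10
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}
```
<br />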
<span>One thing we didn&#39;t cover: monitoring and alerting. These are related but distinct from observability. Observability is about collecting and exploring data to understand system behaviour. Monitoring is about defining thresholds and alerting when they&#39;re breached. We have Prometheus with all the metrics, but no alerting rules yet—no PagerDuty integration, no Slack notifications when latency spikes or error rates climb.</span><br />
<br />
<span>We also didn&#39;t define any SLIs (Service Level Indicators) or SLOs (Service Level Objectives). An SLI is a quantitative measure of service quality—for example, "99th percentile search latency" or "percentage of requests returning successfully." An SLO is a target for that indicator—"99th percentile latency should be under 2 seconds" or "99.9% of requests should succeed." Without SLOs, you don&#39;t know what "good" looks like, and alerting becomes arbitrary.</span><br />
<br />
<span>For X-RAG specifically, potential SLOs might include:</span><br />
<br />
<ul>
<li><span class='inlinecode'>Search latency</span>: 99th percentile search response time under 3 seconds, measured over a 5-minute window</li>
<li><span class='inlinecode'>Uptime</span>: 99.9% availability of the search API endpoint</li>
<li><span class='inlinecode'>Response quality</span>: harder to quantify, but proxies such as retrieval relevance scores or user feedback ratings could serve as indicators</li>
</ul><br />
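<span>As a sketch, the first two SLIs could be measured with PromQL queries along these lines (the metric names are illustrative and depend on the instrumentation):</span><br />
<br />
```
# 99th percentile search latency over a 5-minute window
histogram_quantile(0.99,
  sum(rate(http_server_request_duration_seconds_bucket{service="search-ui"}[5m])) by (le))

# Availability: ratio of non-5xx responses to all responses
sum(rate(http_requests_total{service="search-ui", status!~"5.."}[5m]))
/
sum(rate(http_requests_total{service="search-ui"}[5m]))
```
<br />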
<span>SLAs (Service Level Agreements) are often confused with SLOs, but they&#39;re different. An SLA is a contractual commitment to customers—a legally binding promise with consequences (refunds, credits, penalties) if you fail to meet it. SLOs are internal engineering targets; SLAs are external business promises. Typically, SLAs are less strict than SLOs: if your internal target is 99.9% availability (SLO), your customer contract might promise 99.5% (SLA), giving you a buffer before you owe anyone money.</span><br />
<br />
<span>But then again, X-RAG is a proof-of-concept, a prototype, a learning system—there are no real customers to disappoint. SLOs would become essential if this ever served actual users, and SLAs would follow once there&#39;s a business relationship to protect.</span><br />
<br />
<h2 style='display: inline' id='using-amp-for-ai-assisted-development'>Using Amp for AI-assisted development</h2><br />
<br />
<span>I used Amp (formerly Ampcode) throughout this project. While I knew what I wanted to achieve, I let the LLM generate the actual configurations, Kubernetes manifests, and Python instrumentation code.</span><br />
<br />
<a class='textlink' href='https://ampcode.com/'>Amp - AI coding agent by Sourcegraph</a><br />
<br />
<span>My workflow was step-by-step rather than handing over a grand plan:</span><br />
<br />
<span>1. "Deploy Grafana Alloy to the monitoring namespace"</span><br />
<span>2. "Verify Alloy is running and receiving data"</span><br />
<span>3. "Document what we did to docs/OBSERVABILITY.md"</span><br />
<span>4. "Commit with message &#39;feat: add Grafana Alloy for telemetry collection&#39;"</span><br />
<span>5. Hand off context, start fresh: "Now instrument the search-ui with OpenTelemetry to push traces to Alloy..."</span><br />
<br />
<span>Chaining many small, focused tasks worked better than one massive plan. Each task had clear success criteria, and I could verify results before moving on. The LLM generated the River configuration, the OpenTelemetry Python code, the Kubernetes manifests—I reviewed, tweaked, and committed.</span><br />
<br />
<span>I only ran out of the 200k token context window once, during a debugging session that involved restarting the Kubernetes cluster multiple times. The fix required correlating error messages across several services, and the conversation history grew too long. Starting a fresh context and summarising the problem solved it.</span><br />
<br />
<span>Amp automatically selects the best model for the task at hand. Based on the response speed and Sourcegraph&#39;s recent announcements, I believe it was using Claude Opus 4.5 for most of my coding and infrastructure work. The quality was excellent—it understood Python, Kubernetes, OpenTelemetry, and Grafana tooling without much hand-holding.</span><br />
<br />
<span>Let me be clear: without the LLM, I&#39;d never have managed to write all these configuration files by hand in two days. The Alloy config alone is 1400+ lines. But I also reviewed every change manually, verified it made sense, and understood what was being deployed. This wasn&#39;t vibe-coding—the whole point of the hackathon was to learn. I already knew Grafana and Prometheus from previous work, but OpenTelemetry, Alloy, Tempo, Loki and the X-RAG system overall were all pretty new to me. By reviewing each generated config and understanding why it was structured that way, I actually learned the tools rather than just deploying magic incantations.</span><br />
<br />
<span>Cost-wise, I spent around 20 USD on Amp credits over the two-day hackathon. For the amount of code generated, configs reviewed, and debugging assistance—that&#39;s remarkably affordable.</span><br />
<br />
<h2 style='display: inline' id='other-changes-along-the-way'>Other changes along the way</h2><br />
<br />
<span>Looking at the git history, I made 25 commits during the hackathon. Beyond the main observability features, there were several smaller but useful additions:</span><br />
<br />
<span><span class='inlinecode'>OBSERVABILITY_ENABLED flag</span>: Added an environment variable to completely disable the monitoring stack. Set <span class='inlinecode'>OBSERVABILITY_ENABLED=false</span> in <span class='inlinecode'>.env</span> and the cluster starts without Prometheus, Grafana, Tempo, Loki, or Alloy. Useful when you just want to work on application code without the overhead.</span><br />
<br />
<span><span class='inlinecode'>Load generator</span>: Added a <span class='inlinecode'>make load-gen</span> target that fires concurrent requests at the search API. Useful for generating enough trace data to see patterns in Tempo, and for stress-testing the observability pipeline itself.</span><br />
<br />
<span><span class='inlinecode'>Verification scripts</span>: Created scripts to test that OTLP is actually reaching Alloy and that traces appear in Tempo. Debugging "why aren&#39;t my traces showing up?" is frustrating without a systematic way to verify each hop in the pipeline.</span><br />
<br />
<span><span class='inlinecode'>Moving monitoring to dedicated namespace</span>: Refactored from having observability components scattered across namespaces to a clean <span class='inlinecode'>monitoring</span> namespace. Makes <span class='inlinecode'>kubectl get pods -n monitoring</span> show exactly what&#39;s running for observability.</span><br />
<br />
<h2 style='display: inline' id='lessons-learned'>Lessons learned</h2><br />
<br />
<ul>
<li>Start with metrics, but don&#39;t stop there—they tell you "what", not "why"</li>
<li>Trace context propagation is the key to distributed debugging</li>
<li>Grafana Alloy as a unified collector simplifies the pipeline</li>
<li>Infrastructure metrics matter—your app is only as fast as your data layer</li>
<li>The three pillars work together; none is sufficient alone</li>
</ul><br />
<span>All manifests and observability code live in Florian&#39;s repository:</span><br />
<br />
<a class='textlink' href='https://github.com/florianbuetow/x-rag'>X-RAG on GitHub (source code, K8s manifests, observability configs)</a><br />
<br />
<span>The best part? Everything I learned during this hackathon—OpenTelemetry instrumentation, Grafana Alloy configuration, trace context propagation, PromQL queries—I can immediately apply at work: we are shifting to the same observability stack, and I&#39;ll be meeting with developers to discuss what they need to implement for application instrumentation and how. Observability patterns are universal, and hands-on experience with a real distributed system beats reading documentation any day.</span><br />
<br />
<span>E-Mail your comments to paul@nospam.buetow.org</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</title>
        <link href="https://foo.zone/gemfeed/2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html" />
        <id>https://foo.zone/gemfeed/2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html</id>
        <updated>2025-12-14T20:00:00+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is a follow-up to Part 8 of the f3s series, where I covered Prometheus, Grafana, Loki, and Alloy. Now it's time for the last pillar of observability: distributed tracing with Grafana Tempo.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-8b-distributed-tracing-with-tempo'>f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</h1><br />
<br />
<span class='quote'>Published at 2025-12-14T20:00:00+02:00</span><br />
<br />
<span>This is a follow-up to Part 8 of the f3s series, where I covered Prometheus, Grafana, Loki, and Alloy. Now it&#39;s time for the last pillar of observability: distributed tracing with Grafana Tempo.</span><br />
<br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>Part 8: Observability (Prometheus, Grafana, Loki, Alloy)</a><br />
<br />
<span>For a preview of what distributed tracing with Tempo looks like in Grafana, check out the X-RAG blog post:</span><br />
<br />
<a class='textlink' href='./2025-12-24-x-rag-observability-hackathon.html'>X-RAG Observability Hackathon</a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-8b-distributed-tracing-with-tempo'>f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a></li>
<li>⇢ <a href='#why-distributed-tracing'>Why Distributed Tracing?</a></li>
<li>⇢ <a href='#deploying-grafana-tempo'>Deploying Grafana Tempo</a></li>
<li>⇢ ⇢ <a href='#tempo-helm-values'>Tempo Helm Values</a></li>
<li>⇢ ⇢ <a href='#persistent-volumes'>Persistent Volumes</a></li>
<li>⇢ ⇢ <a href='#grafana-datasource-provisioning'>Grafana Datasource Provisioning</a></li>
<li>⇢ ⇢ <a href='#installation'>Installation</a></li>
<li>⇢ <a href='#configuring-alloy-for-trace-collection'>Configuring Alloy for Trace Collection</a></li>
<li>⇢ <a href='#demo-tracing-application'>Demo Tracing Application</a></li>
<li>⇢ ⇢ <a href='#architecture'>Architecture</a></li>
<li>⇢ ⇢ <a href='#opentelemetry-instrumentation'>OpenTelemetry Instrumentation</a></li>
<li>⇢ ⇢ <a href='#deployment'>Deployment</a></li>
<li>⇢ <a href='#visualizing-traces-in-grafana'>Visualizing Traces in Grafana</a></li>
<li>⇢ ⇢ <a href='#searching-for-traces'>Searching for Traces</a></li>
<li>⇢ ⇢ <a href='#service-graph'>Service Graph</a></li>
<li>⇢ <a href='#practical-example-end-to-end-trace'>Practical Example: End-to-End Trace</a></li>
<li>⇢ <a href='#correlation-between-signals'>Correlation Between Signals</a></li>
<li>⇢ <a href='#storage-and-retention'>Storage and Retention</a></li>
<li>⇢ <a href='#configuration-files'>Configuration Files</a></li>
</ul><br />
<h2 style='display: inline' id='why-distributed-tracing'>Why Distributed Tracing?</h2><br />
<br />
<span>In a microservices setup, a single user request can hop through multiple services. Tracing gives you:</span><br />
<br />
<ul>
<li>Request tracking across service boundaries</li>
<li>Performance bottleneck identification</li>
<li>Service dependency visualization</li>
<li>Correlation with logs and metrics</li>
</ul><br />
<span>Without it, you&#39;re basically guessing where time gets spent.</span><br />
<br />
<h2 style='display: inline' id='deploying-grafana-tempo'>Deploying Grafana Tempo</h2><br />
<br />
<span>Tempo runs in monolithic mode — all components in one process, same pattern as Loki&#39;s SingleBinary deployment. Keeps things simple for a home lab.</span><br />
<br />
<span>The setup:</span><br />
<br />
<ul>
<li>Filesystem backend using hostPath (10Gi at <span class='inlinecode'>/data/nfs/k3svolumes/tempo/data</span>)</li>
<li>7-day retention (168h)</li>
<li>OTLP receivers on gRPC (4317) and HTTP (4318)</li>
<li>Bind to <span class='inlinecode'>0.0.0.0</span> to avoid Tempo 2.7+ localhost-only binding issue</li>
</ul><br />
<h3 style='display: inline' id='tempo-helm-values'>Tempo Helm Values</h3><br />
<br />
<pre>
tempo:
  retention: 168h
  storage:
    trace:
      backend: local
      local:
        path: /var/tempo/traces
      wal:
        path: /var/tempo/wal
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318

persistence:
  enabled: true
  size: 10Gi
  storageClassName: ""

resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 1Gi
</pre>
<br />
<h3 style='display: inline' id='persistent-volumes'>Persistent Volumes</h3><br />
<br />
<pre>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tempo-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/nfs/k3svolumes/tempo/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tempo-data-pvc
  namespace: monitoring
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</pre>
<br />
<h3 style='display: inline' id='grafana-datasource-provisioning'>Grafana Datasource Provisioning</h3><br />
<br />
<span>All Grafana datasources (Prometheus, Alertmanager, Loki, Tempo) are provisioned via a single ConfigMap mounted directly to the Grafana pod. No sidecar discovery needed.</span><br />
<br />
<span>In <span class='inlinecode'>grafana-datasources-all.yaml</span>:</span><br />
<br />
<pre>
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources-all
  namespace: monitoring
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        uid: prometheus
        url: http://prometheus-kube-prometheus-prometheus.monitoring:9090/
        access: proxy
        isDefault: true
      - name: Alertmanager
        type: alertmanager
        uid: alertmanager
        url: http://prometheus-kube-prometheus-alertmanager.monitoring:9093/
      - name: Loki
        type: loki
        uid: loki
        url: http://loki.monitoring.svc.cluster.local:3100
      - name: Tempo
        type: tempo
        uid: tempo
        url: http://tempo.monitoring.svc.cluster.local:3200
        jsonData:
          tracesToLogsV2:
            datasourceUid: loki
            spanStartTimeShift: -1h
            spanEndTimeShift: 1h
          tracesToMetrics:
            datasourceUid: prometheus
          serviceMap:
            datasourceUid: prometheus
          nodeGraph:
            enabled: true
</pre>
<br />
<span>The Tempo datasource config links traces to Loki logs and Prometheus metrics — so you can jump between signals directly in Grafana.</span><br />
<br />
<span>The kube-prometheus-stack Helm values disable sidecar-based discovery and mount this ConfigMap directly to <span class='inlinecode'>/etc/grafana/provisioning/datasources/</span>.</span><br />
<br />
<h3 style='display: inline' id='installation'>Installation</h3><br />
<br />
<pre>
cd /home/paul/git/conf/f3s/tempo
just install
</pre>
<br />
<span>Verify it&#39;s running:</span><br />
<br />
<pre>
kubectl get pods -n monitoring -l app.kubernetes.io/name=tempo
kubectl exec -n monitoring &lt;tempo-pod&gt; -- wget -qO- http://localhost:3200/ready
</pre>
<br />
<h2 style='display: inline' id='configuring-alloy-for-trace-collection'>Configuring Alloy for Trace Collection</h2><br />
<br />
<span>I updated the Alloy values to add OTLP receivers for traces alongside the existing log collection.</span><br />
<br />
<span>Added to the Alloy config:</span><br />
<br />
<pre>
// OTLP receiver for traces via gRPC and HTTP
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }
  http {
    endpoint = "0.0.0.0:4318"
  }
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

// Batch processor — accumulates spans before forwarding to Tempo
otelcol.processor.batch "default" {
  timeout = "5s"
  send_batch_size = 100
  send_batch_max_size = 200
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}

// OTLP exporter to Tempo
otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo.monitoring.svc.cluster.local:4317"
    tls {
      insecure = true
    }
    compression = "gzip"
  }
}
</pre>
<br />
<span>Upgrade Alloy:</span><br />
<br />
<pre>
cd /home/paul/git/conf/f3s/loki
just upgrade
</pre>
<br />
<h2 style='display: inline' id='demo-tracing-application'>Demo Tracing Application</h2><br />
<br />
<span>To actually see traces, I built a three-tier Python app. Nothing fancy — just enough to generate real distributed traces.</span><br />
<br />
<h3 style='display: inline' id='architecture'>Architecture</h3><br />
<br />
<pre>
User -&gt; Frontend (Flask:5000) -&gt; Middleware (Flask:5001) -&gt; Backend (Flask:5002)
                  |                          |                           |
                  +--------------------------+---------------------------+
                                             |
                                             v
                               Alloy (OTLP:4317) -&gt; Tempo -&gt; Grafana
</pre>
<br />
<ul>
<li>Frontend: receives requests at <span class='inlinecode'>/api/process</span>, forwards to middleware</li>
<li>Middleware: transforms data at <span class='inlinecode'>/api/transform</span>, calls backend</li>
<li>Backend: returns data at <span class='inlinecode'>/api/data</span>, simulates a 100ms database query</li>
</ul><br />
<h3 style='display: inline' id='opentelemetry-instrumentation'>OpenTelemetry Instrumentation</h3><br />
<br />
<span>All three services use Python OpenTelemetry libraries:</span><br />
<br />
<span>Dependencies:</span><br />
<br />
<pre>
flask==3.0.0
requests==2.31.0
opentelemetry-distro==0.49b0
opentelemetry-exporter-otlp==1.28.0
opentelemetry-instrumentation-flask==0.49b0
opentelemetry-instrumentation-requests==0.49b0
</pre>
<br />
<span>Auto-instrumentation pattern (same across all services, just change the service name):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">from</font></u></b> opentelemetry <b><u><font color="#000000">import</font></u></b> trace
<b><u><font color="#000000">from</font></u></b> opentelemetry.sdk.trace <b><u><font color="#000000">import</font></u></b> TracerProvider
<b><u><font color="#000000">from</font></u></b> opentelemetry.exporter.otlp.proto.grpc.trace_exporter <b><u><font color="#000000">import</font></u></b> OTLPSpanExporter
<b><u><font color="#000000">from</font></u></b> opentelemetry.instrumentation.flask <b><u><font color="#000000">import</font></u></b> FlaskInstrumentor
<b><u><font color="#000000">from</font></u></b> opentelemetry.instrumentation.requests <b><u><font color="#000000">import</font></u></b> RequestsInstrumentor
<b><u><font color="#000000">from</font></u></b> opentelemetry.sdk.resources <b><u><font color="#000000">import</font></u></b> Resource
<b><u><font color="#000000">from</font></u></b> opentelemetry.sdk.trace.export <b><u><font color="#000000">import</font></u></b> BatchSpanProcessor

resource = Resource(attributes={
    <font color="#808080">"service.name"</font>: <font color="#808080">"frontend"</font>,
    <font color="#808080">"service.namespace"</font>: <font color="#808080">"tracing-demo"</font>,
    <font color="#808080">"service.version"</font>: <font color="#808080">"1.0.0"</font>
})

provider = TracerProvider(resource=resource)

otlp_exporter = OTLPSpanExporter(
    endpoint=<font color="#808080">"http://alloy.monitoring.svc.cluster.local:4317"</font>,
    insecure=True
)

processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)

FlaskInstrumentor().instrument_app(app)
RequestsInstrumentor().instrument()
</pre>
<br />
<span>The auto-instrumentation creates spans for HTTP requests, propagates trace context via W3C headers, and links parent/child spans across services automatically.</span><br />
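<br />
<span>To make the propagation concrete: the trace context travels in a W3C <span class='inlinecode'>traceparent</span> HTTP header of the form <span class='inlinecode'>version-traceid-spanid-flags</span>. A small stdlib-only Python sketch of its structure (my own illustration, not code from the demo app; the span ID is a made-up example, while the trace ID is the one shown later in this post):</span><br />
<br />

```python
# Parse a W3C "traceparent" header: version-traceid-spanid-flags.
# Illustrative sketch only; the OpenTelemetry SDK does this for you.
def parse_traceparent(header: str) -> dict:
    version, trace_id, span_id, flags = header.split("-")
    if len(trace_id) != 32 or len(span_id) != 16:
        raise ValueError("malformed traceparent")
    return {
        "version": version,
        "trace_id": trace_id,       # 128-bit trace ID, hex-encoded
        "parent_span_id": span_id,  # 64-bit parent span ID, hex-encoded
        "sampled": bool(int(flags, 16) & 0x01),
    }

ctx = parse_traceparent("00-4be1151c0bdcd5625ac7e02b98d95bd5-00f067aa0ba902b7-01")
print(ctx["trace_id"], ctx["sampled"])
```

<span>Each service extracts this header from incoming requests and injects it into outgoing ones, which is how Tempo stitches the spans of all three services into one trace.</span><br />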
<br />
<h3 style='display: inline' id='deployment'>Deployment</h3><br />
<br />
<span>The demo app has a Helm chart in the conf repo. Build, import the container images, and install:</span><br />
<br />
<pre>
cd /home/paul/git/conf/f3s/tracing-demo
just build
just import
just install
</pre>
<br />
<span>Verify:</span><br />
<br />
<pre>
kubectl get pods -n services | grep tracing-demo
kubectl get ingress -n services tracing-demo-ingress
</pre>
<br />
<span>Access at:</span><br />
<br />
<a class='textlink' href='http://tracing-demo.f3s.foo.zone'>http://tracing-demo.f3s.foo.zone</a><br />
<br />
<h2 style='display: inline' id='visualizing-traces-in-grafana'>Visualizing Traces in Grafana</h2><br />
<br />
<h3 style='display: inline' id='searching-for-traces'>Searching for Traces</h3><br />
<br />
<span>In Grafana, go to Explore, select the Tempo datasource, and you can search by trace ID, service name, or tags.</span><br />
<br />
<span>Some useful TraceQL queries:</span><br />
<br />
<span>Find all traces from the demo app:</span><br />
<pre>
{ resource.service.namespace = "tracing-demo" }
</pre>
<br />
<span>Find slow requests (&gt;200ms):</span><br />
<pre>
{ duration &gt; 200ms }
</pre>
<br />
<span>Find traces from a specific service:</span><br />
<pre>
{ resource.service.name = "frontend" }
</pre>
<br />
<span>Find errors:</span><br />
<pre>
{ status = error }
</pre>
<br />
<span>Frontend traces with server errors:</span><br />
<pre>
{ resource.service.namespace = "tracing-demo" } &amp;&amp; { span.http.status_code &gt;= 500 }
</pre>
<br />
<h3 style='display: inline' id='service-graph'>Service Graph</h3><br />
<br />
<span>The service graph view shows visual connections between services — Frontend to Middleware to Backend — with request rates and latencies. It&#39;s generated automatically from trace data: Tempo&#39;s metrics-generator derives service graph metrics and writes them to Prometheus.</span><br />
<br />
<h2 style='display: inline' id='practical-example-end-to-end-trace'>Practical Example: End-to-End Trace</h2><br />
<br />
<span>Here&#39;s what it looks like to generate and examine a trace.</span><br />
<br />
<span>Generate a trace:</span><br />
<br />
<pre>
curl -H "Host: tracing-demo.f3s.foo.zone" http://r0/api/process
</pre>
<br />
<span>Response (HTTP 200):</span><br />
<br />
<pre>{
  "middleware_response": {
    "backend_data": {
      "data": {
        "id": <font color="#000000">12345</font>,
        "query_time_ms": <font color="#000000">100.0</font>,
        "timestamp": "<font color="#808080">2025-12-28T18:35:01.064538</font>",
        "value": "<font color="#808080">Sample data from backend service</font>"
      },
      "service": "<font color="#808080">backend</font>"
    },
    "middleware_processed": <b><u><font color="#000000">true</font></u></b>,
    "original_data": {
      "source": "<font color="#808080">GET request</font>"
    },
    "transformation_time_ms": <font color="#000000">50</font>
  },
  "request_data": {
    "source": "<font color="#808080">GET request</font>"
  },
  "service": "<font color="#808080">frontend</font>",
  "status": "<font color="#808080">success</font>"
}
</pre>
<br />
<span>After a few seconds (batch export delay), search for traces via Tempo API:</span><br />
<br />
<pre>
kubectl exec -n monitoring tempo-0 -- wget -qO- \
  &#39;http://localhost:3200/api/search?tags=service.namespace%3Dtracing-demo&amp;limit=5&#39; 2&gt;/dev/null | \
  python3 -m json.tool
</pre>
<br />
<span>Returns something like:</span><br />
<br />
<pre>{
  "traceID": "<font color="#808080">4be1151c0bdcd5625ac7e02b98d95bd5</font>",
  "rootServiceName": "<font color="#808080">frontend</font>",
  "rootTraceName": "<font color="#808080">GET /api/process</font>",
  "durationMs": <font color="#000000">221</font>
}
</pre>
<br />
<span>The full trace has 8 spans across 3 services:</span><br />
<br />
<pre>
Trace ID: 4be1151c0bdcd5625ac7e02b98d95bd5

Service: frontend
  GET /api/process                 221.10ms  (HTTP server span)
  frontend-process                 216.23ms  (business logic)
  POST                             209.97ms  (HTTP client -&gt; middleware)

Service: middleware
  POST /api/transform              186.02ms  (HTTP server span)
  middleware-transform             180.96ms  (business logic)
  GET                              127.52ms  (HTTP client -&gt; backend)

Service: backend
  GET /api/data                    103.93ms  (HTTP server span)
  backend-get-data                 102.11ms  (business logic, 100ms sleep)
</pre>
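<br />
<span>One way to read the waterfall is to compute each span&#39;s self time: its duration minus the duration of its direct child, which is the time spent in that hop itself (serialisation, network, local work). A stdlib-only sketch using the numbers above (my own illustration, not tooling from the stack):</span><br />
<br />

```python
# Parent span durations (ms) from the trace above, paired with the
# duration of each span's direct child. Self time = parent minus child.
spans = [
    ("frontend GET /api/process",   221.10, 216.23),
    ("frontend POST -> middleware", 209.97, 186.02),
    ("middleware GET -> backend",   127.52, 103.93),
    ("backend backend-get-data",    102.11, 100.00),  # child: the simulated 100ms query
]

for name, duration, child_duration in spans:
    print(f"{name}: {duration - child_duration:.2f}ms self time")
```

<span>Most of the trace&#39;s 221ms is accounted for by the backend&#39;s simulated query; the rest is spread thinly across the hops, which is exactly what the waterfall view shows visually.</span><br />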
<br />
<span>In Grafana, paste the trace ID in the Tempo search box or use TraceQL:</span><br />
<br />
<pre>
{ resource.service.namespace = "tracing-demo" }
</pre>
<br />
<span>The waterfall view shows the complete request flow with timing:</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-tempo-trace.png'><img alt='Distributed trace in Grafana Tempo: Frontend -&gt; Middleware -&gt; Backend' title='Distributed trace in Grafana Tempo: Frontend -&gt; Middleware -&gt; Backend' src='./f3s-kubernetes-with-freebsd-part-8/grafana-tempo-trace.png' /></a><br />
<br />
<span>More Tempo trace screenshots in the X-RAG blog post:</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-12-24-x-rag-observability-hackathon.html'>X-RAG Observability Hackathon</a><br />
<br />
<h2 style='display: inline' id='correlation-between-signals'>Correlation Between Signals</h2><br />
<br />
<span>This is where the observability stack really comes together. Tempo integrates with Loki and Prometheus so you can jump between traces, logs, and metrics.</span><br />
<br />
<span>Traces to logs: click on any span and select "Logs for this span." Loki filters by time range, service name, namespace, and pod. Super useful for figuring out what a service was doing during a specific request.</span><br />
<br />
<span>Traces to metrics: from a trace view, the "Metrics" tab shows Prometheus data like request rate, error rate, and duration percentiles for the services involved.</span><br />
<br />
<span>Logs to traces: in Loki, logs containing trace IDs are automatically linked. Click the trace ID and you jump straight to the full trace in Tempo.</span><br />
<br />
<h2 style='display: inline' id='storage-and-retention'>Storage and Retention</h2><br />
<br />
<span>With 10Gi storage and 7-day retention, the system handles moderate trace volumes. Check usage:</span><br />
<br />
<pre>
kubectl exec -n monitoring &lt;tempo-pod&gt; -- df -h /var/tempo
</pre>
<br />
<span>If storage fills up, you can reduce retention to 72h, add sampling in Alloy, or increase the PV size.</span><br />
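<br />
<span>For example, head-based sampling could be added by inserting a sampler between the OTLP receiver and the batch processor in the Alloy config. A hedged sketch (not deployed in f3s; the 25% rate is an arbitrary example):</span><br />
<br />
<pre>
// Keep roughly 25% of traces (illustrative sketch)
otelcol.processor.probabilistic_sampler "default" {
  sampling_percentage = 25
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}
</pre>
<br />
<span>The receiver&#39;s <span class='inlinecode'>output</span> block would then point at <span class='inlinecode'>otelcol.processor.probabilistic_sampler.default.input</span> instead of the batch processor.</span><br />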
<br />
<h2 style='display: inline' id='configuration-files'>Configuration Files</h2><br />
<br />
<span>All config files are on Codeberg:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/tempo'>Tempo configuration</a><br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/loki'>Alloy configuration (updated for traces)</a><br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/tracing-demo'>Demo tracing application</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo (You are currently reading this)</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 8: Observability</title>
        <link href="https://foo.zone/gemfeed/2025-12-07-f3s-kubernetes-with-freebsd-part-8.html" />
        <id>https://foo.zone/gemfeed/2025-12-07-f3s-kubernetes-with-freebsd-part-8.html</id>
        <updated>2026-03-09T09:33:08+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the 8th post in the f3s series about my self-hosting home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-8-observability'>f3s: Kubernetes with FreeBSD - Part 8: Observability</h1><br />
<br />
<span class='quote'>Published at 2025-12-06T23:58:24+02:00, last updated Mon 09 Mar 09:33:08 EET 2026</span><br />
<br />
<span>This is the 8th post in the f3s series about my self-hosting home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability (You are currently reading this)</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-8-observability'>f3s: Kubernetes with FreeBSD - Part 8: Observability</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#important-note-gitops-migration'>Important Note: GitOps Migration</a></li>
<li>⇢ <a href='#persistent-storage-recap'>Persistent storage recap</a></li>
<li>⇢ <a href='#the-monitoring-namespace'>The monitoring namespace</a></li>
<li>⇢ <a href='#installing-prometheus-and-grafana'>Installing Prometheus and Grafana</a></li>
<li>⇢ ⇢ <a href='#prerequisites'>Prerequisites</a></li>
<li>⇢ ⇢ <a href='#deploying-with-the-justfile'>Deploying with the Justfile</a></li>
<li>⇢ ⇢ <a href='#exposing-grafana-via-ingress'>Exposing Grafana via ingress</a></li>
<li>⇢ <a href='#installing-loki-and-alloy'>Installing Loki and Alloy</a></li>
<li>⇢ ⇢ <a href='#prerequisites'>Prerequisites</a></li>
<li>⇢ ⇢ <a href='#deploying-loki-and-alloy'>Deploying Loki and Alloy</a></li>
<li>⇢ ⇢ <a href='#configuring-alloy'>Configuring Alloy</a></li>
<li>⇢ ⇢ <a href='#adding-loki-as-a-grafana-data-source'>Adding Loki as a Grafana data source</a></li>
<li>⇢ <a href='#the-complete-monitoring-stack'>The complete monitoring stack</a></li>
<li>⇢ <a href='#using-the-observability-stack'>Using the observability stack</a></li>
<li>⇢ ⇢ <a href='#viewing-metrics-in-grafana'>Viewing metrics in Grafana</a></li>
<li>⇢ ⇢ <a href='#querying-logs-with-logql'>Querying logs with LogQL</a></li>
<li>⇢ ⇢ <a href='#creating-alerts'>Creating alerts</a></li>
<li>⇢ <a href='#monitoring-external-freebsd-hosts'>Monitoring external FreeBSD hosts</a></li>
<li>⇢ ⇢ <a href='#installing-node-exporter-on-freebsd'>Installing Node Exporter on FreeBSD</a></li>
<li>⇢ ⇢ <a href='#adding-freebsd-hosts-to-prometheus'>Adding FreeBSD hosts to Prometheus</a></li>
<li>⇢ ⇢ <a href='#freebsd-memory-metrics-compatibility'>FreeBSD memory metrics compatibility</a></li>
<li>⇢ ⇢ <a href='#disk-io-metrics-limitation'>Disk I/O metrics limitation</a></li>
<li>⇢ <a href='#zfs-monitoring-for-freebsd-servers'>ZFS Monitoring for FreeBSD Servers</a></li>
<li>⇢ ⇢ <a href='#node-exporter-zfs-collector'>Node Exporter ZFS Collector</a></li>
<li>⇢ ⇢ <a href='#verifying-zfs-metrics'>Verifying ZFS Metrics</a></li>
<li>⇢ ⇢ <a href='#zfs-recording-rules'>ZFS Recording Rules</a></li>
<li>⇢ ⇢ <a href='#grafana-dashboards'>Grafana Dashboards</a></li>
<li>⇢ ⇢ <a href='#deployment'>Deployment</a></li>
<li>⇢ ⇢ <a href='#verifying-zfs-metrics-in-prometheus'>Verifying ZFS Metrics in Prometheus</a></li>
<li>⇢ ⇢ <a href='#key-metrics-to-monitor'>Key Metrics to Monitor</a></li>
<li>⇢ ⇢ <a href='#zfs-pool-and-dataset-metrics-via-textfile-collector'>ZFS Pool and Dataset Metrics via Textfile Collector</a></li>
<li>⇢ <a href='#monitoring-external-openbsd-hosts'>Monitoring external OpenBSD hosts</a></li>
<li>⇢ ⇢ <a href='#installing-node-exporter-on-openbsd'>Installing Node Exporter on OpenBSD</a></li>
<li>⇢ ⇢ <a href='#adding-openbsd-hosts-to-prometheus'>Adding OpenBSD hosts to Prometheus</a></li>
<li>⇢ ⇢ <a href='#openbsd-memory-metrics-compatibility'>OpenBSD memory metrics compatibility</a></li>
<li>⇢ <a href='#summary'>Summary</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>In this blog post, I set up a complete observability stack for the k3s cluster. Observability is crucial for understanding what&#39;s happening inside the cluster—whether it&#39;s tracking resource usage, debugging issues, or analysing application behaviour. The stack consists of five main components, all deployed into the <span class='inlinecode'>monitoring</span> namespace:</span><br />
<br />
<ul>
<li>Prometheus: time-series database for metrics collection and alerting</li>
<li>Grafana: visualisation and dashboarding frontend</li>
<li>Loki: log aggregation system (like Prometheus, but for logs)</li>
<li>Alloy: telemetry collector that ships logs and traces from all pods to Loki and Tempo</li>
<li>Tempo: distributed tracing backend for request flow analysis across microservices</li>
</ul><br />
<span>Together, these form the "PLG" stack (Prometheus, Loki, Grafana) extended with Tempo for distributed tracing, which is a popular open-source alternative to commercial observability platforms.</span><br />
<br />
<span>All manifests for the f3s stack live in my configuration repository:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s'>codeberg.org/snonux/conf/f3s</a><br />
<br />
<h2 style='display: inline' id='important-note-gitops-migration'>Important Note: GitOps Migration</h2><br />
<br />
<span>Note: After publishing this blog post, the f3s cluster was migrated from imperative Helm deployments to declarative GitOps using ArgoCD. The Kubernetes manifests, Helm charts, and Justfiles in the repository have been reorganized for ArgoCD-based continuous deployment.</span><br />
<br />
<span>To view the exact configuration as it existed when this blog post was written (before the ArgoCD migration), check out the pre-ArgoCD revision:</span><br />
<br />
<pre>$ git clone https://codeberg.org/snonux/conf.git
$ cd conf
$ git checkout 15a86f3  <i><font color="silver"># Last commit before ArgoCD migration</font></i>
$ cd f3s/prometheus/
</pre>
<br />
<span>The current master branch contains the ArgoCD-managed versions with:</span><br />
<ul>
<li>Application manifests organized under <span class='inlinecode'>argocd-apps/{monitoring,services,infra,test}/</span></li>
<li>Resources organized under <span class='inlinecode'>prometheus/manifests/</span>, <span class='inlinecode'>loki/</span>, etc.</li>
<li>Justfiles updated to trigger ArgoCD syncs instead of direct Helm commands</li>
</ul><br />
<span>The deployment concepts and architecture remain the same—only the deployment method changed from imperative (<span class='inlinecode'>helm install/upgrade</span>) to declarative (GitOps with ArgoCD). </span><br />
<br />
<h2 style='display: inline' id='persistent-storage-recap'>Persistent storage recap</h2><br />
<br />
<span>All observability components need persistent storage so that metrics and logs survive pod restarts. As covered in Part 6 of this series, the cluster uses NFS-backed persistent volumes:</span><br />
<br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<br />
<span>The FreeBSD hosts (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>) serve as master-standby NFS servers, exporting ZFS datasets that are replicated across hosts using <span class='inlinecode'>zrepl</span>. The Rocky Linux k3s nodes (<span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, <span class='inlinecode'>r2</span>) mount these exports at <span class='inlinecode'>/data/nfs/k3svolumes</span>. This directory contains subdirectories for each application that needs persistent storage—including Prometheus, Grafana, and Loki.</span><br />
<br />
<span>For example, the observability stack uses these paths on the NFS share:</span><br />
<br />
<ul>
<li><span class='inlinecode'>/data/nfs/k3svolumes/prometheus/data</span> — Prometheus time-series database</li>
<li><span class='inlinecode'>/data/nfs/k3svolumes/grafana/data</span> — Grafana configuration, dashboards, and plugins</li>
<li><span class='inlinecode'>/data/nfs/k3svolumes/loki/data</span> — Loki log chunks and index</li>
<li><span class='inlinecode'>/data/nfs/k3svolumes/tempo/data</span> — Tempo trace data and WAL</li>
</ul><br />
<span>Each path gets a corresponding <span class='inlinecode'>PersistentVolume</span> and <span class='inlinecode'>PersistentVolumeClaim</span> in Kubernetes, allowing pods to mount them as regular volumes. Because the underlying storage is ZFS with replication, we get snapshots and redundancy for free.</span><br />
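<br />
<span>As a sketch, the PV/PVC pairing for Prometheus looks roughly like this (capacity and access mode are assumed examples; the actual manifests live in the conf repository):</span><br />
<br />
<pre>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-data-pv
spec:
  capacity:
    storage: 50Gi              # assumed size, for illustration
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/nfs/k3svolumes/prometheus/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: prometheus-data-pv
</pre>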
<br />
<h2 style='display: inline' id='the-monitoring-namespace'>The monitoring namespace</h2><br />
<br />
<span>First, I created the monitoring namespace where all observability components will live:</span><br />
<br />
<pre>$ kubectl create namespace monitoring
namespace/monitoring created
</pre>
<br />
<h2 style='display: inline' id='installing-prometheus-and-grafana'>Installing Prometheus and Grafana</h2><br />
<br />
<span>Prometheus and Grafana are deployed together using the <span class='inlinecode'>kube-prometheus-stack</span> Helm chart from the Prometheus community. This chart bundles Prometheus, Grafana, Alertmanager, and various exporters (Node Exporter, Kube State Metrics) into a single deployment. I&#39;ll explain what each component does in detail later when we look at the running pods.</span><br />
<br />
<h3 style='display: inline' id='prerequisites'>Prerequisites</h3><br />
<br />
<span>Add the Prometheus Helm chart repository:</span><br />
<br />
<pre>$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
</pre>
<br />
<span>Create the directories on the NFS server for persistent storage:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># mkdir -p /data/nfs/k3svolumes/prometheus/data</font></i>
[root@r0 ~]<i><font color="silver"># mkdir -p /data/nfs/k3svolumes/grafana/data</font></i>
</pre>
<br />
<h3 style='display: inline' id='deploying-with-the-justfile'>Deploying with the Justfile</h3><br />
<br />
<span>The configuration repository contains a <span class='inlinecode'>Justfile</span> that automates the deployment. <span class='inlinecode'>just</span> is a handy command runner—think of it as a simpler, more modern alternative to <span class='inlinecode'>make</span>. I use it throughout the f3s repository to wrap repetitive Helm and kubectl commands:</span><br />
<br />
<a class='textlink' href='https://github.com/casey/just'>just - A handy way to save and run project-specific commands</a><br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/prometheus'>codeberg.org/snonux/conf/f3s/prometheus</a><br />
<br />
<span>To install everything:</span><br />
<br />
<pre>$ cd conf/f3s/prometheus
$ just install
kubectl apply -f persistent-volumes.yaml
persistentvolume/prometheus-data-pv created
persistentvolume/grafana-data-pv created
persistentvolumeclaim/grafana-data-pvc created
helm install prometheus prometheus-community/kube-prometheus-stack \
    --namespace monitoring -f persistence-values.yaml
NAME: prometheus
LAST DEPLOYED: ...
NAMESPACE: monitoring
STATUS: deployed
</pre>
<br />
<span>The <span class='inlinecode'>persistence-values.yaml</span> configures Prometheus and Grafana to use the NFS-backed persistent volumes I mentioned earlier, ensuring data survives pod restarts. It also enables scraping of etcd and kube-controller-manager metrics:</span><br />
<br />
<pre>
kubeEtcd:
  enabled: true
  endpoints:
    - 192.168.2.120
    - 192.168.2.121
    - 192.168.2.122
  service:
    enabled: true
    port: 2381
    targetPort: 2381

kubeControllerManager:
  enabled: true
  endpoints:
    - 192.168.2.120
    - 192.168.2.121
    - 192.168.2.122
  service:
    enabled: true
    port: 10257
    targetPort: 10257
  serviceMonitor:
    enabled: true
    https: true
    insecureSkipVerify: true
</pre>
<br />
<span>By default, k3s binds the controller-manager to localhost only and doesn&#39;t expose etcd metrics, so the "Kubernetes / Controller Manager" and "etcd" dashboards in Grafana will show no data. To fix both, add the following to <span class='inlinecode'>/etc/rancher/k3s/config.yaml</span> on each k3s server node:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># cat &gt;&gt; /etc/rancher/k3s/config.yaml &lt;&lt; 'EOF'</font></i>
kube-controller-manager-arg:
  - bind-address=<font color="#000000">0.0</font>.<font color="#000000">0.0</font>
etcd-expose-metrics: <b><u><font color="#000000">true</font></u></b>
EOF
[root@r0 ~]<i><font color="silver"># systemctl restart k3s</font></i>
</pre>
<br />
<span>Repeat for <span class='inlinecode'>r1</span> and <span class='inlinecode'>r2</span>. After restarting all nodes, the controller-manager metrics endpoint will be accessible and etcd metrics will be available on port 2381, so Prometheus can scrape both.</span><br />
<br />
<span>Verify etcd metrics are exposed:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># curl -s http://127.0.0.1:2381/metrics | grep etcd_server_has_leader</font></i>
etcd_server_has_leader <font color="#000000">1</font>
</pre>
<br />
<span>The full <span class='inlinecode'>persistence-values.yaml</span> and all other Prometheus configuration files are available on Codeberg:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/prometheus'>codeberg.org/snonux/conf/f3s/prometheus</a><br />
<br />
<span>The persistent volume definitions bind to specific paths on the NFS share using <span class='inlinecode'>hostPath</span> volumes—the same pattern used for other services in Part 7:</span><br />
<br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<br />
<h3 style='display: inline' id='exposing-grafana-via-ingress'>Exposing Grafana via ingress</h3><br />
<br />
<span>The chart also deploys an ingress for Grafana, making it accessible at <span class='inlinecode'>grafana.f3s.foo.zone</span>. The ingress configuration follows the same pattern as other services in the cluster—Traefik handles the routing internally, while the OpenBSD edge relays terminate TLS and forward traffic through WireGuard.</span><br />
<br />
<span>Once deployed, Grafana is accessible and comes pre-configured with Prometheus as a data source. You can verify the Prometheus service is running:</span><br />
<br />
<pre>$ kubectl get svc -n monitoring prometheus-kube-prometheus-prometheus
NAME                                    TYPE        CLUSTER-IP      PORT(S)
prometheus-kube-prometheus-prometheus   ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">152.163</font>   <font color="#000000">9090</font>/TCP,<font color="#000000">8080</font>/TCP
</pre>
<br />
<span>Grafana connects to Prometheus using the internal service URL <span class='inlinecode'>http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090</span>. The default Grafana credentials are <span class='inlinecode'>admin</span>/<span class='inlinecode'>prom-operator</span>, which should be changed immediately after first login.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-prometheus.png'><img alt='Grafana dashboard showing Prometheus metrics' title='Grafana dashboard showing Prometheus metrics' src='./f3s-kubernetes-with-freebsd-part-8/grafana-prometheus.png' /></a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-dashboard.png'><img alt='Grafana dashboard showing cluster metrics' title='Grafana dashboard showing cluster metrics' src='./f3s-kubernetes-with-freebsd-part-8/grafana-dashboard.png' /></a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-etcd-dashboard.png'><img alt='Grafana etcd dashboard showing cluster health, RPC rate, disk sync duration, and peer round trip times' title='Grafana etcd dashboard showing cluster health, RPC rate, disk sync duration, and peer round trip times' src='./f3s-kubernetes-with-freebsd-part-8/grafana-etcd-dashboard.png' /></a><br />
<br />
<h2 style='display: inline' id='installing-loki-and-alloy'>Installing Loki and Alloy</h2><br />
<br />
<span>While Prometheus handles metrics, Loki handles logs. It&#39;s designed to be cost-effective and easy to operate—it doesn&#39;t index the contents of logs, only the metadata (labels), making it very efficient for storage.</span><br />
<br />
<span>Alloy is Grafana&#39;s telemetry collector (the successor to Promtail). It runs as a DaemonSet on each node, tails container logs, and ships them to Loki.</span><br />
<br />
<h3 style='display: inline' id='prerequisites'>Prerequisites</h3><br />
<br />
<span>Create the data directory on the NFS server:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># mkdir -p /data/nfs/k3svolumes/loki/data</font></i>
</pre>
<br />
<h3 style='display: inline' id='deploying-loki-and-alloy'>Deploying Loki and Alloy</h3><br />
<br />
<span>The Loki configuration also lives in the repository:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/loki'>codeberg.org/snonux/conf/f3s/loki</a><br />
<br />
<span>To install:</span><br />
<br />
<pre>$ cd conf/f3s/loki
$ just install
helm repo add grafana https://grafana.github.io/helm-charts || <b><u><font color="#000000">true</font></u></b>
helm repo update
kubectl apply -f persistent-volumes.yaml
persistentvolume/loki-data-pv created
persistentvolumeclaim/loki-data-pvc created
helm install loki grafana/loki --namespace monitoring -f values.yaml
NAME: loki
LAST DEPLOYED: ...
NAMESPACE: monitoring
STATUS: deployed
...
helm install alloy grafana/alloy --namespace monitoring -f alloy-values.yaml
NAME: alloy
LAST DEPLOYED: ...
NAMESPACE: monitoring
STATUS: deployed
</pre>
<br />
<span>Loki runs in single-binary mode with a single replica (<span class='inlinecode'>loki-0</span>), which is appropriate for a home lab cluster. This means there&#39;s only one Loki pod running at any time. If the node hosting Loki fails, Kubernetes will automatically reschedule the pod to another worker node—but there will be a brief downtime (typically under a minute) while this happens. For my home lab use case, this is perfectly acceptable.</span><br />
<br />
<span>For full high-availability, you&#39;d deploy Loki in microservices mode with separate read, write, and backend components, backed by object storage like S3 or MinIO instead of local filesystem storage. That&#39;s a more complex setup that I might explore in a future blog post—but for now, the single-binary mode with NFS-backed persistence strikes the right balance between simplicity and durability.</span><br />
<br />
<h3 style='display: inline' id='configuring-alloy'>Configuring Alloy</h3><br />
<br />
<span>Alloy is configured via <span class='inlinecode'>alloy-values.yaml</span> to discover all pods in the cluster and forward their logs to Loki:</span><br />
<br />
<pre>discovery.kubernetes <font color="#808080">"pods"</font> {
  role = <font color="#808080">"pod"</font>
}

discovery.relabel <font color="#808080">"pods"</font> {
  targets = discovery.kubernetes.pods.targets

  rule {
    source_labels = [<font color="#808080">"__meta_kubernetes_namespace"</font>]
    target_label  = <font color="#808080">"namespace"</font>
  }

  rule {
    source_labels = [<font color="#808080">"__meta_kubernetes_pod_name"</font>]
    target_label  = <font color="#808080">"pod"</font>
  }

  rule {
    source_labels = [<font color="#808080">"__meta_kubernetes_pod_container_name"</font>]
    target_label  = <font color="#808080">"container"</font>
  }

  rule {
    source_labels = [<font color="#808080">"__meta_kubernetes_pod_label_app"</font>]
    target_label  = <font color="#808080">"app"</font>
  }
}

loki.<b><u><font color="#000000">source</font></u></b>.kubernetes <font color="#808080">"pods"</font> {
  targets    = discovery.relabel.pods.output
  forward_to = [loki.write.default.receiver]
}

loki.write <font color="#808080">"default"</font> {
  endpoint {
    url = <font color="#808080">"http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push"</font>
  }
}
</pre>
<br />
<span>This configuration automatically labels each log line with the namespace, pod name, container name, and app label, making it easy to filter logs in Grafana.</span><br />
<br />
<h3 style='display: inline' id='adding-loki-as-a-grafana-data-source'>Adding Loki as a Grafana data source</h3><br />
<br />
<span>Loki doesn&#39;t have its own web UI—you query it through Grafana. First, verify the Loki service is running:</span><br />
<br />
<pre>$ kubectl get svc -n monitoring loki
NAME   TYPE        CLUSTER-IP    PORT(S)
loki   ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">64.60</font>   <font color="#000000">3100</font>/TCP,<font color="#000000">9095</font>/TCP
</pre>
<br />
<span>To add Loki as a data source in Grafana:</span><br />
<br />
<ul>
<li>Navigate to Configuration → Data Sources</li>
<li>Click "Add data source"</li>
<li>Select "Loki"</li>
<li>Set the URL to: <span class='inlinecode'>http://loki.monitoring.svc.cluster.local:3100</span></li>
<li>Click "Save &amp; Test"</li>
</ul><br />
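<span>Alternatively, the data source can be provisioned declaratively instead of clicking through the UI. A minimal sketch, assuming the kube-prometheus-stack chart's <span class='inlinecode'>grafana.additionalDataSources</span> value (added to <span class='inlinecode'>persistence-values.yaml</span> and applied with a Helm upgrade):</span><br />
<br />
<pre>
grafana:
  additionalDataSources:
    - name: Loki
      type: loki
      access: proxy
      url: http://loki.monitoring.svc.cluster.local:3100
</pre>
<br />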
<span>Once configured, you can explore logs in Grafana&#39;s "Explore" view. I&#39;ll show some example queries in the "Using the observability stack" section below.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-8/loki-explore.png'><img alt='Exploring logs in Grafana with Loki' title='Exploring logs in Grafana with Loki' src='./f3s-kubernetes-with-freebsd-part-8/loki-explore.png' /></a><br />
<br />
<h2 style='display: inline' id='the-complete-monitoring-stack'>The complete monitoring stack</h2><br />
<br />
<span>After deploying everything, here&#39;s what&#39;s running in the monitoring namespace:</span><br />
<br />
<pre>$ kubectl get pods -n monitoring
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-<font color="#000000">0</font>   <font color="#000000">2</font>/<font color="#000000">2</font>     Running   <font color="#000000">0</font>          42d
alloy-g5fgj                                              <font color="#000000">2</font>/<font color="#000000">2</font>     Running   <font color="#000000">0</font>          29m
alloy-nfw8w                                              <font color="#000000">2</font>/<font color="#000000">2</font>     Running   <font color="#000000">0</font>          29m
alloy-tg9vj                                              <font color="#000000">2</font>/<font color="#000000">2</font>     Running   <font color="#000000">0</font>          29m
loki-<font color="#000000">0</font>                                                   <font color="#000000">2</font>/<font color="#000000">2</font>     Running   <font color="#000000">0</font>          25m
prometheus-grafana-868f9dc7cf-lg2vl                      <font color="#000000">3</font>/<font color="#000000">3</font>     Running   <font color="#000000">0</font>          42d
prometheus-kube-prometheus-operator-8d7bbc48c-p4sf4      <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          42d
prometheus-kube-state-metrics-7c5fb9d798-hh2fx           <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          42d
prometheus-prometheus-kube-prometheus-prometheus-<font color="#000000">0</font>       <font color="#000000">2</font>/<font color="#000000">2</font>     Running   <font color="#000000">0</font>          42d
prometheus-prometheus-node-exporter-2nsg9                <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          42d
prometheus-prometheus-node-exporter-mqr<font color="#000000">25</font>                <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          42d
prometheus-prometheus-node-exporter-wp4ds                <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          42d
tempo-<font color="#000000">0</font>                                                  <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          1d
</pre>
<br />
<span>Note: Tempo (<span class='inlinecode'>tempo-0</span>) is deployed later in this post in the "Distributed Tracing with Grafana Tempo" section. It is included in the pod listing here for completeness.</span><br />
<br />
<span>And the services:</span><br />
<br />
<pre>$ kubectl get svc -n monitoring
NAME                                      TYPE        CLUSTER-IP      PORT(S)
alertmanager-operated                     ClusterIP   None            <font color="#000000">9093</font>/TCP,<font color="#000000">9094</font>/TCP
alloy                                     ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">74.14</font>     <font color="#000000">12345</font>/TCP
loki                                      ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">64.60</font>     <font color="#000000">3100</font>/TCP,<font color="#000000">9095</font>/TCP
loki-headless                             ClusterIP   None            <font color="#000000">3100</font>/TCP
prometheus-grafana                        ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">46.82</font>     <font color="#000000">80</font>/TCP
prometheus-kube-prometheus-alertmanager   ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">208.43</font>    <font color="#000000">9093</font>/TCP,<font color="#000000">8080</font>/TCP
prometheus-kube-prometheus-operator       ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">246.121</font>   <font color="#000000">443</font>/TCP
prometheus-kube-prometheus-prometheus     ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">152.163</font>   <font color="#000000">9090</font>/TCP,<font color="#000000">8080</font>/TCP
prometheus-kube-state-metrics             ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">64.26</font>     <font color="#000000">8080</font>/TCP
prometheus-prometheus-node-exporter       ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">127.242</font>   <font color="#000000">9100</font>/TCP
tempo                                     ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">91.44</font>     <font color="#000000">3200</font>/TCP,<font color="#000000">4317</font>/TCP,<font color="#000000">4318</font>/TCP
</pre>
<br />
<span>Let me break down what each pod does:</span><br />
<br />
<ul>
<li><span class='inlinecode'>alertmanager-prometheus-kube-prometheus-alertmanager-0</span>: the Alertmanager instance that receives alerts from Prometheus, deduplicates them, groups related alerts together, and routes notifications to the appropriate receivers (email, Slack, PagerDuty, etc.). It runs as a StatefulSet with persistent storage for silences and notification state.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>alloy-g5fgj, alloy-nfw8w, alloy-tg9vj</span>: three Alloy pods running as a DaemonSet, one on each k3s node. Each pod tails the container logs from its local node via the Kubernetes API and forwards them to Loki. This ensures log collection continues even if a node becomes isolated from the others.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>loki-0</span>: the single Loki instance running in single-binary mode. It receives log streams from Alloy, stores them in chunks on the NFS-backed persistent volume, and serves queries from Grafana. The <span class='inlinecode'>-0</span> suffix indicates it&#39;s a StatefulSet pod.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>prometheus-grafana-...</span>: the Grafana web interface for visualising metrics and logs. It comes pre-configured with Prometheus as a data source and includes dozens of dashboards for Kubernetes monitoring. Dashboards, users, and settings are persisted to the NFS share.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>prometheus-kube-prometheus-operator-...</span>: the Prometheus Operator that watches for custom resources (ServiceMonitor, PodMonitor, PrometheusRule) and automatically configures Prometheus to scrape new targets. This allows applications to declare their own monitoring requirements.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>prometheus-kube-state-metrics-...</span>: generates metrics about the state of Kubernetes objects themselves: how many pods are running, pending, or failed; deployment replica counts; node conditions; PVC status; and more. Essential for cluster-level dashboards.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>prometheus-prometheus-kube-prometheus-prometheus-0</span>: the Prometheus server that scrapes metrics from all configured targets (pods, services, nodes), stores them in a time-series database, evaluates alerting rules, and serves queries to Grafana.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>prometheus-prometheus-node-exporter-...</span>: three Node Exporter pods running as a DaemonSet, one on each node. They expose hardware and OS-level metrics: CPU usage, memory, disk I/O, filesystem usage, network statistics, and more. These feed the "Node Exporter" dashboards in Grafana.</li>
</ul><br />
<ul>
<li><span class='inlinecode'>tempo-0</span>: the Grafana Tempo instance for distributed tracing. It receives trace data from Alloy via OTLP (OpenTelemetry Protocol), stores traces on the NFS-backed persistent volume, and serves queries to Grafana. Tempo is covered in detail in the "Distributed Tracing with Grafana Tempo" section later in this post.</li>
</ul><br />
<h2 style='display: inline' id='using-the-observability-stack'>Using the observability stack</h2><br />
<br />
<h3 style='display: inline' id='viewing-metrics-in-grafana'>Viewing metrics in Grafana</h3><br />
<br />
<span>The kube-prometheus-stack comes with many pre-built dashboards. Some useful ones include:</span><br />
<br />
<ul>
<li>Kubernetes / Compute Resources / Cluster: overview of CPU and memory usage across the cluster</li>
<li>Kubernetes / Compute Resources / Namespace (Pods): resource usage by namespace</li>
<li>Node Exporter / Nodes: detailed host metrics like disk I/O, network, and CPU</li>
</ul><br />
<h3 style='display: inline' id='querying-logs-with-logql'>Querying logs with LogQL</h3><br />
<br />
<span>In Grafana&#39;s Explore view, select Loki as the data source and try queries like:</span><br />
<br />
<pre>
# All logs from the services namespace
{namespace="services"}

# Logs from pods matching a pattern
{pod=~"miniflux.*"}

# Filter by log content
{namespace="services"} |= "error"

# Parse JSON logs and filter
{namespace="services"} | json | level="error"
</pre>
<br />
<h3 style='display: inline' id='creating-alerts'>Creating alerts</h3><br />
<br />
<span>Prometheus supports alerting rules that can notify you when something goes wrong. The kube-prometheus-stack includes many default alerts for common issues like high CPU usage, pod crashes, and node problems. These can be customised via PrometheusRule CRDs.</span><br />
<br />
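<span>As a sketch of what a custom rule looks like (the alert name and threshold here are hypothetical, not from my actual setup), the following PrometheusRule would fire when any filesystem drops below 10% free space. It carries the same <span class='inlinecode'>release: prometheus</span> label as the other rules in this post, so the operator picks it up:</span><br />
<br />
<pre>
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: custom-alerts
  namespace: monitoring
  labels:
    release: prometheus
spec:
  groups:
    - name: custom
      rules:
        - alert: FilesystemAlmostFull
          expr: |
            node_filesystem_avail_bytes / node_filesystem_size_bytes &lt; 0.10
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "Filesystem on {{ $labels.instance }} has less than 10% free space"
</pre>
<br />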
<h2 style='display: inline' id='monitoring-external-freebsd-hosts'>Monitoring external FreeBSD hosts</h2><br />
<br />
<span>The observability stack can also monitor servers outside the Kubernetes cluster. The FreeBSD hosts (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>) that serve NFS storage can be added to Prometheus using the Node Exporter.</span><br />
<br />
<h3 style='display: inline' id='installing-node-exporter-on-freebsd'>Installing Node Exporter on FreeBSD</h3><br />
<br />
<span>On each FreeBSD host, install the node_exporter package:</span><br />
<br />
<pre>paul@f0:~ % doas pkg install -y node_exporter
</pre>
<br />
<span>Enable the service to start at boot:</span><br />
<br />
<pre>paul@f0:~ % doas sysrc node_exporter_enable=YES
node_exporter_enable:  -&gt; YES
</pre>
<br />
<span>Configure node_exporter to listen on the WireGuard interface. This ensures metrics are only accessible through the secure tunnel, not the public network. Replace the IP with the host&#39;s WireGuard address:</span><br />
<br />
<pre>paul@f0:~ % doas sysrc node_exporter_args=<font color="#808080">'--web.listen-address=192.168.2.130:9100'</font>
node_exporter_args:  -&gt; --web.listen-address=<font color="#000000">192.168</font>.<font color="#000000">2.130</font>:<font color="#000000">9100</font>
</pre>
<br />
<span>Start the service:</span><br />
<br />
<pre>paul@f0:~ % doas service node_exporter start
Starting node_exporter.
</pre>
<br />
<span>Verify it&#39;s running:</span><br />
<br />
<pre>paul@f0:~ % curl -s http://<font color="#000000">192.168</font>.<font color="#000000">2.130</font>:<font color="#000000">9100</font>/metrics | head -<font color="#000000">3</font>
<i><font color="silver"># HELP go_gc_duration_seconds A summary of the wall-time pause...</font></i>
<i><font color="silver"># TYPE go_gc_duration_seconds summary</font></i>
go_gc_duration_seconds{quantile=<font color="#808080">"0"</font>} <font color="#000000">0</font>
</pre>
<br />
<span>Repeat for the other FreeBSD hosts (<span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>) with their respective WireGuard IPs.</span><br />
<br />
<h3 style='display: inline' id='adding-freebsd-hosts-to-prometheus'>Adding FreeBSD hosts to Prometheus</h3><br />
<br />
<span>Create a file <span class='inlinecode'>additional-scrape-configs.yaml</span> in the prometheus configuration directory:</span><br />
<br />
<pre>
- job_name: &#39;node-exporter&#39;
  static_configs:
    - targets:
      - &#39;192.168.2.130:9100&#39;  # f0 via WireGuard
      - &#39;192.168.2.131:9100&#39;  # f1 via WireGuard
      - &#39;192.168.2.132:9100&#39;  # f2 via WireGuard
      labels:
        os: freebsd
</pre>
<br />
<span>The <span class='inlinecode'>job_name</span> must be <span class='inlinecode'>node-exporter</span> to match the existing dashboards. The <span class='inlinecode'>os: freebsd</span> label allows filtering these hosts separately if needed.</span><br />
<br />
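<span>For example, the <span class='inlinecode'>os</span> label lets you scope any PromQL query to just the FreeBSD hosts:</span><br />
<br />
<pre>
# Load average only from the FreeBSD NFS hosts
node_load1{os="freebsd"}
</pre>
<br />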
<span>Create a Kubernetes secret from this file:</span><br />
<br />
<pre>$ kubectl create secret generic additional-scrape-configs \
    --from-file=additional-scrape-configs.yaml \
    -n monitoring
</pre>
<br />
<span>Update <span class='inlinecode'>persistence-values.yaml</span> to reference the secret:</span><br />
<br />
<pre>
prometheus:
  prometheusSpec:
    additionalScrapeConfigsSecret:
      enabled: true
      name: additional-scrape-configs
      key: additional-scrape-configs.yaml
</pre>
<br />
<span>Upgrade the Prometheus deployment:</span><br />
<br />
<pre>$ just upgrade
</pre>
<br />
<span>After a minute or so, the FreeBSD hosts appear in the Prometheus targets and in the Node Exporter dashboards in Grafana.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-freebsd-nodes.png'><img alt='FreeBSD hosts in the Node Exporter dashboard' title='FreeBSD hosts in the Node Exporter dashboard' src='./f3s-kubernetes-with-freebsd-part-8/grafana-freebsd-nodes.png' /></a><br />
<br />
<h3 style='display: inline' id='freebsd-memory-metrics-compatibility'>FreeBSD memory metrics compatibility</h3><br />
<br />
<span>The default Node Exporter dashboards are designed for Linux and expect metrics like <span class='inlinecode'>node_memory_MemAvailable_bytes</span>. FreeBSD uses different metric names (<span class='inlinecode'>node_memory_size_bytes</span>, <span class='inlinecode'>node_memory_free_bytes</span>, etc.), so memory panels will show "No data" out of the box.</span><br />
<br />
<span>To fix this, I created a PrometheusRule that generates synthetic Linux-compatible metrics from the FreeBSD equivalents:</span><br />
<br />
<pre>
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: freebsd-memory-rules
  namespace: monitoring
  labels:
    release: prometheus
spec:
  groups:
    - name: freebsd-memory
      rules:
        - record: node_memory_MemTotal_bytes
          expr: node_memory_size_bytes{os="freebsd"}
        - record: node_memory_MemAvailable_bytes
          expr: |
            node_memory_free_bytes{os="freebsd"}
              + node_memory_inactive_bytes{os="freebsd"}
              + node_memory_cache_bytes{os="freebsd"}
        - record: node_memory_MemFree_bytes
          expr: node_memory_free_bytes{os="freebsd"}
        - record: node_memory_Buffers_bytes
          expr: node_memory_buffer_bytes{os="freebsd"}
        - record: node_memory_Cached_bytes
          expr: node_memory_cache_bytes{os="freebsd"}
</pre>
<br />
<span>This file is saved as <span class='inlinecode'>freebsd-recording-rules.yaml</span> and applied as part of the Prometheus installation. The <span class='inlinecode'>os="freebsd"</span> label (set in the scrape config) ensures these rules only apply to FreeBSD hosts. After applying, the memory panels in the Node Exporter dashboards populate correctly for FreeBSD.</span><br />
<br />
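<span>As a quick sanity check, the synthetic <span class='inlinecode'>node_memory_MemAvailable_bytes</span> value can be computed by hand (the byte values below are hypothetical, not real readings from my hosts):</span><br />
<br />

```shell
#!/bin/sh
# Hypothetical FreeBSD memory readings, in bytes
free=1073741824       # node_memory_free_bytes (1 GiB)
inactive=2147483648   # node_memory_inactive_bytes (2 GiB)
cache=536870912       # node_memory_cache_bytes (0.5 GiB)
# node_memory_MemAvailable_bytes = free + inactive + cache
echo $((free + inactive + cache))
# prints: 3758096384
```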
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/prometheus/freebsd-recording-rules.yaml'>freebsd-recording-rules.yaml on Codeberg</a><br />
<br />
<h3 style='display: inline' id='disk-io-metrics-limitation'>Disk I/O metrics limitation</h3><br />
<br />
<span>Unlike memory metrics, disk I/O metrics (<span class='inlinecode'>node_disk_read_bytes_total</span>, <span class='inlinecode'>node_disk_written_bytes_total</span>, etc.) are not available on FreeBSD. The Linux diskstats collector that provides these metrics doesn&#39;t have a FreeBSD equivalent in the node_exporter.</span><br />
<br />
<span>The disk I/O panels in the Node Exporter dashboards will show "No data" for FreeBSD hosts. FreeBSD does expose ZFS-specific metrics (<span class='inlinecode'>node_zfs_arcstats_*</span>) for ARC cache performance, and per-dataset I/O stats are available via <span class='inlinecode'>sysctl kstat.zfs</span>, but mapping these to the Linux-style metrics the dashboards expect is non-trivial. To address this, I created custom ZFS-specific dashboards, covered in the next section.</span><br />
<br />
<h2 style='display: inline' id='zfs-monitoring-for-freebsd-servers'>ZFS Monitoring for FreeBSD Servers</h2><br />
<br />
<span>The FreeBSD servers (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>) that provide NFS storage to the k3s cluster have ZFS filesystems. Monitoring ZFS is crucial for understanding storage performance and cache efficiency.</span><br />
<br />
<h3 style='display: inline' id='node-exporter-zfs-collector'>Node Exporter ZFS Collector</h3><br />
<br />
<span>The node_exporter running on each FreeBSD server (v1.9.1) includes a built-in ZFS collector that exposes metrics via sysctls. The ZFS collector is enabled by default and provides:</span><br />
<br />
<ul>
<li>ARC (Adaptive Replacement Cache) statistics</li>
<li>Cache hit/miss rates</li>
<li>Memory usage and allocation</li>
<li>MRU/MFU cache breakdown</li>
<li>Data vs metadata distribution</li>
</ul><br />
<h3 style='display: inline' id='verifying-zfs-metrics'>Verifying ZFS Metrics</h3><br />
<br />
<span>On any FreeBSD server, check that ZFS metrics are being exposed:</span><br />
<br />
<pre>
paul@f0:~ % curl -s http://localhost:9100/metrics | grep node_zfs_arcstats | wc -l
      69
</pre>
<br />
<span>The metrics are automatically scraped by Prometheus through the existing static configuration in <span class='inlinecode'>additional-scrape-configs.yaml</span>, which targets all FreeBSD servers on port 9100 with the <span class='inlinecode'>os: freebsd</span> label.</span><br />
<br />
<h3 style='display: inline' id='zfs-recording-rules'>ZFS Recording Rules</h3><br />
<br />
<span>I created recording rules in <span class='inlinecode'>zfs-recording-rules.yaml</span> to make dashboard queries simpler:</span><br />
<br />
<pre>
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: freebsd-zfs-rules
  namespace: monitoring
  labels:
    release: prometheus
spec:
  groups:
    - name: freebsd-zfs-arc
      interval: 30s
      rules:
        - record: node_zfs_arc_hit_rate_percent
          expr: |
            100 * (
              rate(node_zfs_arcstats_hits_total{os="freebsd"}[5m]) /
              (rate(node_zfs_arcstats_hits_total{os="freebsd"}[5m]) +
               rate(node_zfs_arcstats_misses_total{os="freebsd"}[5m]))
            )
          labels:
            os: freebsd
        - record: node_zfs_arc_memory_usage_percent
          expr: |
            100 * (
              node_zfs_arcstats_size_bytes{os="freebsd"} /
              node_zfs_arcstats_c_max_bytes{os="freebsd"}
            )
          labels:
            os: freebsd
        # Additional rules for metadata %, target %, MRU/MFU %, etc.
</pre>
<br />
<span>These recording rules calculate:</span><br />
<br />
<ul>
<li>ARC hit rate percentage</li>
<li>ARC memory usage percentage (current vs maximum)</li>
<li>ARC target percentage (target vs maximum)</li>
<li>Metadata vs data percentages</li>
<li>MRU vs MFU cache percentages</li>
<li>Demand data and metadata hit rates</li>
</ul><br />
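<span>As a quick sanity check of the hit-rate expression, here is the same arithmetic with hypothetical counter deltas (not real numbers from my cluster):</span><br />
<br />

```shell
#!/bin/sh
# Hypothetical increases of the ARC hit/miss counters over a 5-minute window
hits=9500
misses=500
# node_zfs_arc_hit_rate_percent = 100 * hits / (hits + misses)
awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.1f%%\n", 100 * h / (h + m) }'
# prints: 95.0%
```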
<h3 style='display: inline' id='grafana-dashboards'>Grafana Dashboards</h3><br />
<br />
<span>I created two comprehensive ZFS monitoring dashboards, defined in <span class='inlinecode'>zfs-dashboards.yaml</span>:</span><br />
<br />
<span><strong>Dashboard 1: FreeBSD ZFS (per-host detailed view)</strong></span><br />
<br />
<span>It includes variables to select:</span><br />
<br />
<ul>
<li>FreeBSD server (f0, f1, or f2)</li>
<li>ZFS pool (zdata, zroot, or all)</li>
</ul><br />
<span>Pool Overview Row:</span><br />
<br />
<ul>
<li>Pool Capacity gauge (with thresholds: green &lt;70%, yellow &lt;85%, red &gt;85%)</li>
<li>Pool Health status (ONLINE/DEGRADED/FAULTED with color coding)</li>
<li>Total Pool Size stat</li>
<li>Free Space stat</li>
<li>Pool Space Usage Over Time (stacked: used + free)</li>
<li>Pool Capacity Trend time series</li>
</ul><br />
<span>Dataset Statistics Row:</span><br />
<br />
<ul>
<li>Table showing all datasets with columns: Pool, Dataset, Used, Available, Referenced</li>
<li>Automatically filters by selected pool</li>
</ul><br />
<span>ARC Cache Statistics Row:</span><br />
<br />
<ul>
<li>ARC Hit Rate gauge (red &lt;70%, yellow &lt;90%, green &gt;=90%)</li>
<li>ARC Size time series (current, target, max)</li>
<li>ARC Memory Usage percentage gauge</li>
<li>ARC Hits vs Misses rate</li>
<li>ARC Data vs Metadata stacked time series</li>
</ul><br />
<span><strong>Dashboard 2: FreeBSD ZFS Summary (cluster-wide overview)</strong></span><br />
<br />
<span>Cluster-Wide Pool Statistics Row:</span><br />
<br />
<ul>
<li>Total Storage Capacity across all servers</li>
<li>Total Used space</li>
<li>Total Free space</li>
<li>Average Pool Capacity gauge</li>
<li>Pool Health Status (worst case across cluster)</li>
<li>Total Pool Space Usage Over Time</li>
<li>Per-Pool Capacity time series (all pools on all hosts)</li>
</ul><br />
<span>Per-Host Pool Breakdown Row:</span><br />
<br />
<ul>
<li>Bar gauge showing capacity by host and pool</li>
<li>Table with all pools: Host, Pool, Size, Used, Free, Capacity %, Health</li>
</ul><br />
<span>Cluster-Wide ARC Statistics Row:</span><br />
<br />
<ul>
<li>Average ARC Hit Rate gauge across all hosts</li>
<li>ARC Hit Rate by Host time series</li>
<li>Total ARC Size Across Cluster</li>
<li>Total ARC Hits vs Misses (cluster-wide sum)</li>
<li>ARC Size by Host</li>
</ul><br />
<span>Dashboard screenshots:</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-zfs-dashboard.png'><img alt='ZFS monitoring dashboard in Grafana showing pool capacity, health, and I/O throughput' title='ZFS monitoring dashboard in Grafana showing pool capacity, health, and I/O throughput' src='./f3s-kubernetes-with-freebsd-part-8/grafana-zfs-dashboard.png' /></a><br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-zfs-arc-stats.png'><img alt='ZFS ARC cache statistics showing hit rate, memory usage, and size trends' title='ZFS ARC cache statistics showing hit rate, memory usage, and size trends' src='./f3s-kubernetes-with-freebsd-part-8/grafana-zfs-arc-stats.png' /></a><br />
<a href='./f3s-kubernetes-with-freebsd-part-8/grafana-zfs-datasets.png'><img alt='ZFS datasets table and ARC data vs metadata breakdown' title='ZFS datasets table and ARC data vs metadata breakdown' src='./f3s-kubernetes-with-freebsd-part-8/grafana-zfs-datasets.png' /></a><br />
<br />
<h3 style='display: inline' id='deployment'>Deployment</h3><br />
<br />
<span>I applied the resources to the cluster:</span><br />
<br />
<pre>
cd /home/paul/git/conf/f3s/prometheus
kubectl apply -f zfs-recording-rules.yaml
kubectl apply -f zfs-dashboards.yaml
</pre>
<br />
<span>I updated the <span class='inlinecode'>Justfile</span> so that the install and upgrade targets also apply the ZFS recording rules:</span><br />
<br />
<pre>
install:
    kubectl apply -f persistent-volumes.yaml
    kubectl create secret generic additional-scrape-configs --from-file=additional-scrape-configs.yaml -n monitoring --dry-run=client -o yaml | kubectl apply -f -
    helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring -f persistence-values.yaml
    kubectl apply -f freebsd-recording-rules.yaml
    kubectl apply -f openbsd-recording-rules.yaml
    kubectl apply -f zfs-recording-rules.yaml
    just -f grafana-ingress/Justfile install
</pre>
<br />
<h3 style='display: inline' id='verifying-zfs-metrics-in-prometheus'>Verifying ZFS Metrics in Prometheus</h3><br />
<br />
<span>Check that ZFS metrics are being collected:</span><br />
<br />
<pre>
kubectl exec -n monitoring prometheus-prometheus-kube-prometheus-prometheus-0 -c prometheus -- \
  wget -qO- &#39;http://localhost:9090/api/v1/query?query=node_zfs_arcstats_size_bytes&#39;
</pre>
<br />
<span>Check recording rules are calculating correctly:</span><br />
<br />
<pre>
kubectl exec -n monitoring prometheus-prometheus-kube-prometheus-prometheus-0 -c prometheus -- \
  wget -qO- &#39;http://localhost:9090/api/v1/query?query=node_zfs_arc_memory_usage_percent&#39;
</pre>
<br />
<span>Example output shows the ARC memory usage percentage for each FreeBSD server:</span><br />
<br />
<pre>
"result":[
  {"metric":{"instance":"192.168.2.130:9100","os":"freebsd"},"value":[...,"37.58"]},
  {"metric":{"instance":"192.168.2.131:9100","os":"freebsd"},"value":[...,"12.85"]},
  {"metric":{"instance":"192.168.2.132:9100","os":"freebsd"},"value":[...,"13.44"]}
]
</pre>
<br />
<h3 style='display: inline' id='key-metrics-to-monitor'>Key Metrics to Monitor</h3><br />
<br />
<ul>
<li>ARC Hit Rate: Should typically be above 90% for optimal performance. Lower hit rates indicate the ARC cache is too small or workload has poor locality.</li>
<li>ARC Memory Usage: Shows how much of the maximum ARC size is being used. If consistently at or near maximum, the ARC is effectively utilizing available memory.</li>
<li>Data vs Metadata: Typically data should dominate, but workloads with many small files will show higher metadata percentages.</li>
<li>MRU vs MFU: Most Recently Used vs Most Frequently Used cache. The ratio depends on workload characteristics.</li>
<li>Pool Capacity: Monitor pool usage to ensure adequate free space. ZFS performance degrades when pools exceed 80% capacity.</li>
<li>Pool Health: Should always show ONLINE (green). DEGRADED (yellow) indicates a disk issue requiring attention. FAULTED (red) requires immediate action.</li>
<li>Dataset Usage: Track which datasets are consuming the most space to identify growth trends and plan capacity.</li>
</ul><br />
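<span>For reference, the ARC hit rate panels boil down to a ratio of the arcstats counters. A PromQL sketch (assuming the <span class='inlinecode'>node_zfs_arcstats_hits</span> and <span class='inlinecode'>node_zfs_arcstats_misses</span> counter names from node_exporter&#39;s ZFS collector):</span><br />
<br />
<pre>
100 * rate(node_zfs_arcstats_hits{os="freebsd"}[5m])
  / (  rate(node_zfs_arcstats_hits{os="freebsd"}[5m])
     + rate(node_zfs_arcstats_misses{os="freebsd"}[5m]))
</pre>
<br />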
<h3 style='display: inline' id='zfs-pool-and-dataset-metrics-via-textfile-collector'>ZFS Pool and Dataset Metrics via Textfile Collector</h3><br />
<br />
<span>To complement the ARC statistics from node_exporter&#39;s built-in ZFS collector, I added pool capacity and dataset metrics using the textfile collector feature.</span><br />
<br />
<span>Created a script at <span class='inlinecode'>/usr/local/bin/zfs_pool_metrics.sh</span> on each FreeBSD server:</span><br />
<br />
<pre>
#!/bin/sh
# ZFS Pool and Dataset Metrics Collector for Prometheus

OUTPUT_FILE="/var/tmp/node_exporter/zfs_pools.prom.$$"
FINAL_FILE="/var/tmp/node_exporter/zfs_pools.prom"

mkdir -p /var/tmp/node_exporter

{
    # Pool metrics
    echo "# HELP zfs_pool_size_bytes Total size of ZFS pool"
    echo "# TYPE zfs_pool_size_bytes gauge"
    echo "# HELP zfs_pool_allocated_bytes Allocated space in ZFS pool"
    echo "# TYPE zfs_pool_allocated_bytes gauge"
    echo "# HELP zfs_pool_free_bytes Free space in ZFS pool"
    echo "# TYPE zfs_pool_free_bytes gauge"
    echo "# HELP zfs_pool_capacity_percent Capacity percentage"
    echo "# TYPE zfs_pool_capacity_percent gauge"
    echo "# HELP zfs_pool_health Pool health (0=ONLINE, 1=DEGRADED, 2=FAULTED, 6=OTHER)"
    echo "# TYPE zfs_pool_health gauge"

    zpool list -Hp -o name,size,allocated,free,capacity,health | \
    while IFS=$&#39;\t&#39; read -r name size alloc free cap health; do
        case "$health" in
            ONLINE)   health_val=0 ;;
            DEGRADED) health_val=1 ;;
            FAULTED)  health_val=2 ;;
            *)        health_val=6 ;;
        esac
        cap_num=$(echo "$cap" | sed &#39;s/%//&#39;)

        echo "zfs_pool_size_bytes{pool=\"$name\"} $size"
        echo "zfs_pool_allocated_bytes{pool=\"$name\"} $alloc"
        echo "zfs_pool_free_bytes{pool=\"$name\"} $free"
        echo "zfs_pool_capacity_percent{pool=\"$name\"} $cap_num"
        echo "zfs_pool_health{pool=\"$name\"} $health_val"
    done

    # Dataset metrics
    echo "# HELP zfs_dataset_used_bytes Used space in dataset"
    echo "# TYPE zfs_dataset_used_bytes gauge"
    echo "# HELP zfs_dataset_available_bytes Available space"
    echo "# TYPE zfs_dataset_available_bytes gauge"
    echo "# HELP zfs_dataset_referenced_bytes Referenced space"
    echo "# TYPE zfs_dataset_referenced_bytes gauge"

    zfs list -Hp -t filesystem -o name,used,available,referenced | \
    while IFS=$&#39;\t&#39; read -r name used avail ref; do
        pool=$(echo "$name" | cut -d/ -f1)
        echo "zfs_dataset_used_bytes{pool=\"$pool\",dataset=\"$name\"} $used"
        echo "zfs_dataset_available_bytes{pool=\"$pool\",dataset=\"$name\"} $avail"
        echo "zfs_dataset_referenced_bytes{pool=\"$pool\",dataset=\"$name\"} $ref"
    done
} &gt; "$OUTPUT_FILE"

mv "$OUTPUT_FILE" "$FINAL_FILE"
</pre>
<br />
<span>Deployed to all FreeBSD servers:</span><br />
<br />
<pre>
for host in f0 f1 f2; do
    scp /tmp/zfs_pool_metrics.sh paul@$host:/tmp/
    ssh paul@$host &#39;doas mv /tmp/zfs_pool_metrics.sh /usr/local/bin/ &amp;&amp; \
                    doas chmod +x /usr/local/bin/zfs_pool_metrics.sh&#39;
done
</pre>
<br />
<span>Set up cron jobs to run every minute:</span><br />
<br />
<pre>
for host in f0 f1 f2; do
    ssh paul@$host &#39;echo "* * * * * /usr/local/bin/zfs_pool_metrics.sh &gt;/dev/null 2&gt;&amp;1" | \
                    doas crontab -&#39;
done
</pre>
<br />
<span>The textfile collector (already configured with <span class='inlinecode'>--collector.textfile.directory=/var/tmp/node_exporter</span>) automatically picks up the metrics.</span><br />
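<br />
<span>For completeness, this is roughly how that flag is set on the FreeBSD hosts. A sketch assuming the port&#39;s rc script honours a <span class='inlinecode'>node_exporter_args</span> variable; check your rc.d script for the exact name:</span><br />
<br />
<pre>
doas sysrc node_exporter_args="--collector.textfile.directory=/var/tmp/node_exporter"
doas service node_exporter restart
</pre>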
<br />
<span>Verify metrics are being exposed:</span><br />
<br />
<pre>
paul@f0:~ % curl -s http://localhost:9100/metrics | grep "^zfs_pool" | head -5
zfs_pool_allocated_bytes{pool="zdata"} 6.47622733824e+11
zfs_pool_allocated_bytes{pool="zroot"} 5.3338578944e+10
zfs_pool_capacity_percent{pool="zdata"} 64
zfs_pool_capacity_percent{pool="zroot"} 10
zfs_pool_free_bytes{pool="zdata"} 3.48809678848e+11
</pre>
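<br />
<span>These pool metrics also lend themselves to alerting. A hypothetical rule snippet (the alert name and threshold are my illustration, not part of the deployed config) could fire when a pool crosses the 80% mark:</span><br />
<br />
<pre>
- alert: ZfsPoolAlmostFull
  expr: zfs_pool_capacity_percent &gt; 80
  for: 15m
  labels:
    severity: warning
  annotations:
    summary: "ZFS pool {{ $labels.pool }} on {{ $labels.instance }} is over 80% full"
</pre>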
<br />
<span>All ZFS-related configuration files are available on Codeberg:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/prometheus/zfs-recording-rules.yaml'>zfs-recording-rules.yaml on Codeberg</a><br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/prometheus/zfs-dashboards.yaml'>zfs-dashboards.yaml on Codeberg</a><br />
<br />
<h2 style='display: inline' id='monitoring-external-openbsd-hosts'>Monitoring external OpenBSD hosts</h2><br />
<br />
<span>The same approach works for OpenBSD hosts. I have two OpenBSD edge relay servers (<span class='inlinecode'>blowfish</span>, <span class='inlinecode'>fishfinger</span>) that handle TLS termination and forward traffic through WireGuard to the cluster. These can also be monitored with Node Exporter.</span><br />
<br />
<h3 style='display: inline' id='installing-node-exporter-on-openbsd'>Installing Node Exporter on OpenBSD</h3><br />
<br />
<span>On each OpenBSD host, install the node_exporter package:</span><br />
<br />
<pre>blowfish:~ $ doas pkg_add node_exporter
quirks-7.103 signed on 2025-10-13T22:55:16Z
The following new rcscripts were installed: /etc/rc.d/node_exporter
See rcctl(8) for details.
</pre>
<br />
<span>Enable the service to start at boot:</span><br />
<br />
<pre>blowfish:~ $ doas rcctl enable node_exporter
</pre>
<br />
<span>Configure node_exporter to listen on the WireGuard interface. This ensures metrics are only accessible through the secure tunnel, not the public network. Replace the IP with the host&#39;s WireGuard address:</span><br />
<br />
<pre>blowfish:~ $ doas rcctl set node_exporter flags &#39;--web.listen-address=192.168.2.110:9100&#39;
</pre>
<br />
<span>Start the service:</span><br />
<br />
<pre>blowfish:~ $ doas rcctl start node_exporter
node_exporter(ok)
</pre>
<br />
<span>Verify it&#39;s running:</span><br />
<br />
<pre>blowfish:~ $ curl -s http://192.168.2.110:9100/metrics | head -3
# HELP go_gc_duration_seconds A summary of the wall-time pause...
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
</pre>
<br />
<span>Repeat for the other OpenBSD host (<span class='inlinecode'>fishfinger</span>) with its respective WireGuard IP (<span class='inlinecode'>192.168.2.111</span>).</span><br />
<br />
<h3 style='display: inline' id='adding-openbsd-hosts-to-prometheus'>Adding OpenBSD hosts to Prometheus</h3><br />
<br />
<span>Update <span class='inlinecode'>additional-scrape-configs.yaml</span> to include the OpenBSD targets:</span><br />
<br />
<pre>
- job_name: &#39;node-exporter&#39;
  static_configs:
    - targets:
      - &#39;192.168.2.130:9100&#39;  # f0 via WireGuard
      - &#39;192.168.2.131:9100&#39;  # f1 via WireGuard
      - &#39;192.168.2.132:9100&#39;  # f2 via WireGuard
      labels:
        os: freebsd
    - targets:
      - &#39;192.168.2.110:9100&#39;  # blowfish via WireGuard
      - &#39;192.168.2.111:9100&#39;  # fishfinger via WireGuard
      labels:
        os: openbsd
</pre>
<br />
<span>The <span class='inlinecode'>os: openbsd</span> label allows filtering these hosts separately from FreeBSD and Linux nodes.</span><br />
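<br />
<span>For example, to query the load average of just the edge relays:</span><br />
<br />
<pre>
node_load1{os="openbsd"}
</pre>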
<br />
<h3 style='display: inline' id='openbsd-memory-metrics-compatibility'>OpenBSD memory metrics compatibility</h3><br />
<br />
<span>OpenBSD&#39;s node_exporter exposes the same BSD-style memory metric names as FreeBSD&#39;s (<span class='inlinecode'>node_memory_size_bytes</span>, <span class='inlinecode'>node_memory_free_bytes</span>, etc.), so a similar PrometheusRule is needed to derive the Linux-style metrics that the standard dashboards expect:</span><br />
<br />
<pre>
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: openbsd-memory-rules
  namespace: monitoring
  labels:
    release: prometheus
spec:
  groups:
    - name: openbsd-memory
      rules:
        - record: node_memory_MemTotal_bytes
          expr: node_memory_size_bytes{os="openbsd"}
          labels:
            os: openbsd
        - record: node_memory_MemAvailable_bytes
          expr: |
            node_memory_free_bytes{os="openbsd"}
              + node_memory_inactive_bytes{os="openbsd"}
              + node_memory_cache_bytes{os="openbsd"}
          labels:
            os: openbsd
        - record: node_memory_MemFree_bytes
          expr: node_memory_free_bytes{os="openbsd"}
          labels:
            os: openbsd
        - record: node_memory_Cached_bytes
          expr: node_memory_cache_bytes{os="openbsd"}
          labels:
            os: openbsd
</pre>
<br />
<span>This file is saved as <span class='inlinecode'>openbsd-recording-rules.yaml</span> and applied alongside the FreeBSD rules. Note that OpenBSD doesn&#39;t expose a buffer memory metric, so that rule is omitted.</span><br />
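<br />
<span>As with the FreeBSD rules, the recorded metrics can be spot-checked from inside the Prometheus pod, following the same pattern as the ZFS checks above:</span><br />
<br />
<pre>
kubectl exec -n monitoring prometheus-prometheus-kube-prometheus-prometheus-0 -c prometheus -- \
  wget -qO- &#39;http://localhost:9090/api/v1/query?query=node_memory_MemAvailable_bytes{os="openbsd"}&#39;
</pre>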
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/prometheus/openbsd-recording-rules.yaml'>openbsd-recording-rules.yaml on Codeberg</a><br />
<br />
<span>After running <span class='inlinecode'>just upgrade</span>, the OpenBSD hosts appear in Prometheus targets and the Node Exporter dashboards.</span><br />
<br />
<h2 style='display: inline' id='summary'>Summary</h2><br />
<br />
<span>With Prometheus, Grafana, Loki, and Alloy deployed, I now have visibility into the k3s cluster, the FreeBSD storage servers, and the OpenBSD edge relays:</span><br />
<br />
<ul>
<li>Metrics: Prometheus collects and stores time-series data from all components, including etcd and ZFS</li>
<li>Logs: Loki aggregates logs from all containers, searchable via Grafana</li>
<li>Visualisation: Grafana provides dashboards and exploration tools</li>
<li>Alerting: Alertmanager can notify on conditions defined in Prometheus rules</li>
</ul><br />
<span>The next part covers the final pillar of observability: distributed tracing with Grafana Tempo.</span><br />
<br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>Part 8b: Distributed Tracing with Tempo</a><br />
<br />
<span>All configuration files are available on Codeberg:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/prometheus'>Prometheus, Grafana, and recording rules configuration</a><br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/loki'>Loki and Alloy configuration</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability (You are currently reading this)</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>'The Courage To Be Disliked' book notes</title>
        <link href="https://foo.zone/gemfeed/2025-11-02-the-courage-to-be-disliked-book-notes.html" />
        <id>https://foo.zone/gemfeed/2025-11-02-the-courage-to-be-disliked-book-notes.html</id>
        <updated>2025-11-01T17:28:38+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>These are my personal book notes from Ichiro Kishimi and Fumitake Koga's 'The Courage To Be Disliked'. They are for me, but I hope they might be useful to you too.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='the-courage-to-be-disliked-book-notes'>"The Courage To Be Disliked" book notes</h1><br />
<br />
<span class='quote'>Published at 2025-11-01T17:28:38+02:00</span><br />
<br />
<span>These are my personal book notes from Ichiro Kishimi and Fumitake Koga&#39;s "The Courage To Be Disliked". They are for me, but I hope they might be useful to you too.</span><br />
<br />
<pre>
         ,..........   ..........,
     ,..,&#39;          &#39;.&#39;          &#39;,..,
    ,&#39; ,&#39;            :            &#39;, &#39;,
   ,&#39; ,&#39;             :             &#39;, &#39;,
  ,&#39; ,&#39;              :              &#39;, &#39;,
 ,&#39; ,&#39;............., : ,.............&#39;, &#39;,
,&#39;  &#39;............   &#39;.&#39;   ............&#39;  &#39;,
 &#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;;&#39;&#39;&#39;;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;
                    &#39;&#39;&#39;
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#the-courage-to-be-disliked-book-notes'>"The Courage To Be Disliked" book notes</a></li>
<li>⇢ <a href='#the-nature-of-life-and-happiness'>The Nature of Life and Happiness</a></li>
<li>⇢ <a href='#subjective-reality-and-perception'>Subjective Reality and Perception</a></li>
<li>⇢ <a href='#the-power-to-change-and-the-role-of-the-past'>The Power to Change and the Role of the Past</a></li>
<li>⇢ <a href='#self-acceptance-lifestyle-and-life-lies'>Self-Acceptance, Lifestyle, and Life Lies</a></li>
<li>⇢ <a href='#interpersonal-relationships'>Interpersonal Relationships</a></li>
<li>⇢ <a href='#inferiority-and-superiority'>Inferiority and Superiority</a></li>
<li>⇢ <a href='#community-contribution-and-happiness'>Community, Contribution, and Happiness</a></li>
<li>⇢ <a href='#living-in-the-here-and-now'>Living in the Here and Now</a></li>
<li>⇢ <a href='#the-courage-to-be-normal'>The Courage to Be Normal</a></li>
<li>⇢ <a href='#freedom-is-being-disliked'>Freedom is Being Disliked</a></li>
<li>⇢ <a href='#the-meaning-of-life'>The Meaning of Life</a></li>
</ul><br />
<h2 style='display: inline' id='the-nature-of-life-and-happiness'>The Nature of Life and Happiness</h2><br />
<br />
<ul>
<li>Life and the world are fundamentally simple; we are the ones who make them complicated. Drama does not exist.</li>
<li>Happiness is a choice and is attainable for everyone. Often, we lack the courage to be happy because it&#39;s easier to stay in a familiar, albeit unhappy, situation than to choose a new lifestyle, which may bring anxiety and unknowns.</li>
<li>Unhappiness is something you choose for yourself.</li>
</ul><br />
<h2 style='display: inline' id='subjective-reality-and-perception'>Subjective Reality and Perception</h2><br />
<br />
<ul>
<li>Our perception of the world is subjective. We don&#39;t see the world as it is, but as we are.</li>
<li>The world you see is different from the one I see, and it&#39;s impossible to truly share your world with anyone else.</li>
</ul><br />
<span>This is illustrated by the "10 people" example: if one person dislikes you, two love you, and seven are indifferent, focusing only on the one who dislikes you gives a distorted and negative view of your life. You are focusing on a tiny, insignificant part and judging the whole by it.</span><br />
<br />
<span>The challenge is to find the courage to see the world directly, without the filters of our own subjective views.</span><br />
<br />
<h2 style='display: inline' id='the-power-to-change-and-the-role-of-the-past'>The Power to Change and the Role of the Past</h2><br />
<br />
<ul>
<li>We are not defined by our past experiences but by the meaning we assign to them. The past does not determine our future.</li>
<li>The book rejects Freudian etiology (the idea that past trauma defines us) in favor of teleology (the idea that we are driven by our present goals).</li>
<li>Change is possible for everyone at any moment, regardless of their circumstances or age. This change must come from your own doing, not from others.</li>
<li>We live in accordance with our present goals, not past causes. The past does not exist; the only issue is the present.</li>
<li>Emotions, like anger, can be fabricated tools used to achieve a goal (e.g., to control or shout at someone) rather than uncontrollable forces that rule us.</li>
</ul><br />
<h2 style='display: inline' id='self-acceptance-lifestyle-and-life-lies'>Self-Acceptance, Lifestyle, and Life Lies</h2><br />
<br />
<ul>
<li>Your "lifestyle"—your worldview and outlook on life—is a choice, not a fixed personality trait. You can change it instantly.</li>
<li>The key is self-acceptance, not self-affirmation. Accept what you cannot change and have the courage to change what you can.</li>
<li>You cannot be reborn as someone else. It is better to learn to love yourself and make the best use of the "equipment" you were born with.</li>
<li>Workaholism is a "life lie." It is a form of being in disharmony with life, using work as an excuse to avoid other life tasks and responsibilities.</li>
</ul><br />
<h2 style='display: inline' id='interpersonal-relationships'>Interpersonal Relationships</h2><br />
<br />
<ul>
<li>All problems are, at their core, problems of interpersonal relationships. To escape all problems would mean to live alone in the universe, which is impossible.</li>
<li>The book identifies three "Life Tasks" that everyone faces: the task of work, the task of friendship, and the task of love.</li>
<li>Competition: Life is not a competition. When we stop comparing ourselves to others, we cease to see them as enemies. They become comrades, and we can genuinely celebrate their successes. This removes the fear of losing and allows for peace.</li>
<li>Power Struggles: When someone is angry with you, recognize it as their attempt at a power struggle. The person who attacks you is the one with the problem. Do not get drawn in. Arguing about who is right or wrong is a trap. Admitting a fault is not a defeat.</li>
<li>Horizontal vs. Vertical Relationships: Strive for "horizontal relationships" based on equality, rather than "vertical relationships" based on hierarchy. Praise and rebuke are forms of manipulation found in vertical relationships. Instead, offer encouragement. (Note: Personally, I disagree with applying this to children; I feel some hierarchy is necessary and that children appreciate praise.)</li>
<li>Separation of Tasks: Understand what is your responsibility and what is someone else&#39;s. For example, if someone takes advantage of your trust, that is their task. Your task is to decide whether to trust them in the first place.</li>
<li>Confidence in Others: Having unconditional confidence in others helps build deep relationships and a sense of belonging, turning others into comrades.</li>
</ul><br />
<h2 style='display: inline' id='inferiority-and-superiority'>Inferiority and Superiority</h2><br />
<br />
<ul>
<li>A feeling of inferiority is not inherently bad; it can be a catalyst for growth when we compare ourselves to our ideal self. This "pursuit of superiority" drives progress.</li>
<li>This is different from an "inferiority complex," which is using feelings of inadequacy as an excuse to avoid change and responsibility.</li>
<li>Value is based on a social context. An object&#39;s worth is subjective and can be reinterpreted.</li>
</ul><br />
<h2 style='display: inline' id='community-contribution-and-happiness'>Community, Contribution, and Happiness</h2><br />
<br />
<ul>
<li>The definition of happiness is the feeling of contribution.</li>
<li>A true sense of self-worth comes from feeling useful to a community (the "community feeling").</li>
<li>This contribution doesn&#39;t have to be grand. You can be of worth to the community simply by being.</li>
<li>When you have a genuine feeling of contribution, you no longer need recognition or praise from others.</li>
</ul><br />
<h2 style='display: inline' id='living-in-the-here-and-now'>Living in the Here and Now</h2><br />
<br />
<ul>
<li>Life is a series of moments ("dots"), not a continuous line. We should live fully in the "here and now."</li>
<li>The greatest life lie is to dwell on the past and the future, which do not exist, instead of focusing on the present moment.</li>
<li>Focus on the process, not just the outcome. The goal of a dance is the dancing itself, not just reaching a destination.</li>
</ul><br />
<h2 style='display: inline' id='the-courage-to-be-normal'>The Courage to Be Normal</h2><br />
<br />
<ul>
<li>Why does everyone want to be special? Is it inferior to be normal?</li>
<li>Embracing being normal, instead of striving for a special status, is a form of courage. In the grander sense, isn&#39;t everyone normal?</li>
</ul><br />
<h2 style='display: inline' id='freedom-is-being-disliked'>Freedom is Being Disliked</h2><br />
<br />
<ul>
<li>The price of true freedom is to be disliked by other people. It is a sign that you are living in accordance with your own principles.</li>
</ul><br />
<h2 style='display: inline' id='the-meaning-of-life'>The Meaning of Life</h2><br />
<br />
<ul>
<li>Life has no inherent meaning. It is up to each individual to assign meaning to their own life.</li>
<li>Do not be afraid of being disliked by others for living your life according to the meaning you create.</li>
<li>You have the power to change yourself, and in doing so, you change your world. No one else can change it for you.</li>
</ul><br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other book notes of mine are:</span><br />
<br />
<a class='textlink' href='./2025-11-02-the-courage-to-be-disliked-book-notes.html'>2025-11-02 &#39;The Courage To Be Disliked&#39; book notes (You are currently reading this)</a><br />
<a class='textlink' href='./2025-06-07-a-monks-guide-to-happiness-book-notes.html'>2025-06-07 &#39;A Monk&#39;s Guide to Happiness&#39; book notes</a><br />
<a class='textlink' href='./2025-04-19-when-book-notes.html'>2025-04-19 &#39;When: The Scientific Secrets of Perfect Timing&#39; book notes</a><br />
<a class='textlink' href='./2024-10-24-staff-engineer-book-notes.html'>2024-10-24 &#39;Staff Engineer&#39; book notes</a><br />
<a class='textlink' href='./2024-07-07-the-stoic-challenge-book-notes.html'>2024-07-07 &#39;The Stoic Challenge&#39; book notes</a><br />
<a class='textlink' href='./2024-05-01-slow-productivity-book-notes.html'>2024-05-01 &#39;Slow Productivity&#39; book notes</a><br />
<a class='textlink' href='./2023-11-11-mind-management-book-notes.html'>2023-11-11 &#39;Mind Management&#39; book notes</a><br />
<a class='textlink' href='./2023-07-17-career-guide-and-soft-skills-book-notes.html'>2023-07-17 &#39;Software Developers Career Guide and Soft Skills&#39; book notes</a><br />
<a class='textlink' href='./2023-05-06-the-obstacle-is-the-way-book-notes.html'>2023-05-06 &#39;The Obstacle is the Way&#39; book notes</a><br />
<a class='textlink' href='./2023-04-01-never-split-the-difference-book-notes.html'>2023-04-01 &#39;Never split the difference&#39; book notes</a><br />
<a class='textlink' href='./2023-03-16-the-pragmatic-programmer-book-notes.html'>2023-03-16 &#39;The Pragmatic Programmer&#39; book notes</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Perl New Features and Foostats</title>
        <link href="https://foo.zone/gemfeed/2025-11-02-perl-new-features-and-foostats.html" />
        <id>https://foo.zone/gemfeed/2025-11-02-perl-new-features-and-foostats.html</id>
        <updated>2025-11-01T16:10:35+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Perl recently reached rank 10 in the TIOBE index. That headline prompted this blog post: I was developing the Foostats script for simple analytics of my personal websites and Gemini capsules (e.g. `foo.zone`), and a couple of new features have been added to the Perl language over the recent releases. The book 'Perl New Features' by brian d foy documents the changes well; this post shows how those features look in a real program that runs every morning for my stats generation.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='perl-new-features-and-foostats'>Perl New Features and Foostats</h1><br />
<br />
<span class='quote'>Published at 2025-11-01T16:10:35+02:00</span><br />
<br />
<span>Perl recently reached rank 10 in the TIOBE index. That headline prompted this blog post: I was developing the Foostats script for simple analytics of my personal websites and Gemini capsules (e.g. <span class='inlinecode'>foo.zone</span>), and a couple of new features have been added to the Perl language over the recent releases. The book "Perl New Features" by brian d foy documents the changes well; this post shows how those features look in a real program that runs every morning for my stats generation.</span><br />
<br />
<a class='textlink' href='https://developers.slashdot.org/story/25/09/14/0134239/is-perl-the-worlds-10th-most-popular-programming-language'>Perl re-enters the top ten</a><br />
<a class='textlink' href='https://perlschool.com/books/perl-new-features/'>Perl New Features by Joshua McAdams and brian d foy</a><br />
<br />
<pre>
$b="24P7cP3dP31P3bPaP28P24P64P31P2cP24P64P32P2cP24P73P2cP24P67P2cP24P7
2P29P3dP28P22P31P30P30P30P30P22P2cP22P31P30P30P30P30P30P22P2cP22P4aP75
P7                                                                  3P
74                                                                  P2
0P  41P6eP6fP74P     68P65P72P20P50 P65P72P6cP2     0P48P           61
P6  3P6bP65P72P22P   29P3bPaP40P6dP 3dP73P70P6cP6   9P74P           20
P2  fP2fP    2cP22P  2cP2eP3aP21P2  bP2aP    30P4f  P40P2           2P
3b  PaP24      P6eP3 dP6c           P65P6      eP67 P74P6           8P
20  P24P7      3P3bP aP24           P75P3      dP22 P20P2           2P
78  P24P6      eP3bP aPaP           70P72      P69P 6eP74           P2
0P  22P5c    P6eP20  P20P           24P75    P5cP7  2P22P           3b
Pa  PaP66P6fP72P2    8P24P7aP20P    3dP20P31P3bP    20P24           P7
aP  3cP3dP24P6       eP3bP20P24     P7aP2bP2bP      29P20           P7
bP  aPaP9            P77P28P24P6    4P31P29P        3bPaP           9P
24  P72P3            dP69           P6eP74P28       P72P6           1P
6e  P64P2            8P24           P6eP2 9P29P     3bPaP           9P
24  P67P3            dP73           P75P6  2P73P    74P72           P2
0P  24P73            P2cP24P72P2cP  31P3b   PaP9P   24P67P20P3fP20  P6
4P  6fP20            P9P7bP20PaP9P9 P9P9P    9P66P  6fP72P20P28P24  P6
bP  3dP30            P3bP24P6bP3cP3 9P3bP    24P6bP 2bP2bP29P20P7b  Pa
P9                                                                  P9
P9                                                                  P9
P9  P9P73P75P6     2P73   P74P  72P2       8P24P75P2c     P24P72    P2
cP  31P29P3dP24P   6dP5   bP24  P6bP       5dP3bP20Pa   P9P9  P9P9  P9
P9  P70P    72P69  P6eP   74P2  0P22       P20P20P24P  75P      5cP 72
P2  2P3b      PaP9 P9P9   P9P9  P9P7       7P28       P24        P6 4P
32  P29P      3bPa P9P9   P9P9  P9P7       dPaP       9P9           P9
P9  P9P7      3P75 P62P   73P7  4P72       P28P        24P7         5P
2c  P24P    72P2c  P31P   29P3  dP24       P67P3bP20P   aP9P9       P9
P9  P7dP20PaP9P    9P3a   P20P  72P6       5P64P6fP3b      PaP9     P7
3P  75P62P73P      74P7   2P28  P24P       73P2cP24P7        2P2c   P3
1P  29P3dP2        2P30   P22P  3bPa       P9P7                0P7  2P
69  P6eP74P2       0P22   P20P  20P2       4P75                 P5c P7
2P  22P3 bPaPa     P7dP   aPaP  77P2       0P28                 P24 P6
4P  32P2  9P3bP    aP70   P72P  69P6       eP74       P2        0P2 2P
20  P20P   24P75   P20P21P5cP7  2P22P3bPaP 73P6cP65P6 5P7     0P20  P3
2P  3bPa    P70P7  2P69P6eP74P  20P22P20P2 0P24P75P20  P21P  5cP6   eP
22  P3bP     aPaP7  3P75P62P2   0P77P20P7b PaP9P24P6c    P3dP73     P6
8P                                                                  69
P6                                                                  6P
74P3bPaP9P66P6fP72P28P24P6aP3dP30P3bP24P6aP3cP24P6cP3bP24P6aP2bP2bP29P
7bP7dPaP7dP";$b=~s/\s//g;split /P/,$b;foreach(@_){$c.=chr hex};eval $c
</pre>
<br />
<span>The above Perl script prints "Just Another Perl Hacker !" in an animation of sorts.</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#perl-new-features-and-foostats'>Perl New Features and Foostats</a></li>
<li>⇢ <a href='#motivation'>Motivation</a></li>
<li>⇢ <a href='#why-i-used-perl'>Why I used Perl</a></li>
<li>⇢ <a href='#inside-foostats'>Inside Foostats</a></li>
<li>⇢ ⇢ <a href='#log-pipeline'>Log pipeline</a></li>
<li>⇢ ⇢ <a href='#foooddstxt'><span class='inlinecode'>fooodds.txt</span></a></li>
<li>⇢ ⇢ <a href='#feed-kinds'>Feed kinds</a></li>
<li>⇢ ⇢ <a href='#aggregation-and-output'>Aggregation and output</a></li>
<li>⇢ ⇢ <a href='#command-line-entry-points'>Command-line entry points</a></li>
<li>⇢ <a href='#packages-as-real-blocks'>Packages as real blocks</a></li>
<li>⇢ ⇢ <a href='#scoped-packages'>Scoped packages</a></li>
<li>⇢ <a href='#postfix-dereferencing-keeps-data-structures-tidy'>Postfix dereferencing keeps data structures tidy</a></li>
<li>⇢ ⇢ <a href='#clear-dereferencing'>Clear dereferencing</a></li>
<li>⇢ <a href='#say-is-the-default-voice-now'><span class='inlinecode'>say</span> is the default voice now</a></li>
<li>⇢ <a href='#lexical-subs-promote-local-reasoning'>Lexical subs promote local reasoning</a></li>
<li>⇢ <a href='#reference-aliasing-makes-intent-explicit'>Reference aliasing makes intent explicit</a></li>
<li>⇢ <a href='#persistent-state-without-globals'>Persistent state without globals</a></li>
<li>⇢ ⇢ <a href='#rate-limiting-state'>Rate limiting state</a></li>
<li>⇢ ⇢ <a href='#de-duplicated-logging'>De-duplicated logging</a></li>
<li>⇢ <a href='#subroutine-signatures'>Subroutine signatures</a></li>
<li>⇢ <a href='#defined-or-assignment-for-defaults-without-boilerplate'>Defined-or assignment for defaults without boilerplate</a></li>
<li>⇢ <a href='#cleanup-with-defer'>Cleanup with <span class='inlinecode'>defer</span></a></li>
<li>⇢ <a href='#builtins-and-booleans'>Builtins and booleans</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='motivation'>Motivation</h2><br />
<br />
<span>I&#39;ve been running <span class='inlinecode'>foo.zone</span> for a while now, but I&#39;ve never looked into visitor statistics or analytics. I value privacy, not just my own but also that of this site&#39;s visitors, so I hesitated to use any off-the-shelf analytics plugins. All I wanted was:</span><br />
<br />
<ul>
<li>To know which blog posts had the most (unique) visitors</li>
<li>To exclude, as far as possible, bots and scrapers from the stats</li>
<li>To track only anonymized IP addresses and never store raw addresses</li>
</ul><br />
<span>With Foostats I&#39;ve created a Perl script which does that for my highly opinionated website/blog setup, which consists of:</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2021-06-05-gemtexter-one-bash-script-to-rule-it-all.html'>Gemtexter, my static site and Gemini capsule generator</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html'>How I host this site highly-available using OpenBSD</a><br />
<br />
<h2 style='display: inline' id='why-i-used-perl'>Why I used Perl</h2><br />
<br />
<span>Even though nowadays I code more in Go and Ruby, I stuck with Perl for Foostats for four simple reasons:</span><br />
<br />
<ul>
<li>I wanted an excuse to explore the newer features of my first programming love.</li>
<li>Sometimes, I miss Perl.</li>
<li>Perl ships with OpenBSD (the operating system on which my sites run) by default.</li>
<li>It really does live up to its name (Practical Extraction and Report Language) for the kind of log grinding Foostats does.</li>
</ul><br />
<h2 style='display: inline' id='inside-foostats'>Inside Foostats</h2><br />
<br />
<span>Foostats is simply a log file analyser for the OpenBSD httpd and relayd logs.</span><br />
<br />
<a class='textlink' href='https://man.openbsd.org/httpd.8'>https://man.openbsd.org/httpd.8</a><br />
<a class='textlink' href='https://man.openbsd.org/relayd.8'>https://man.openbsd.org/relayd.8</a><br />
<br />
<h3 style='display: inline' id='log-pipeline'>Log pipeline</h3><br />
<br />
<span>A CRON job starts Foostats, which reads the OpenBSD httpd and relayd access logs and produces the numbers published at <span class='inlinecode'>stats.foo.zone</span> (over both HTTPS and Gemini). The dashboards are humble because traffic on my sites is still light, yet the trends are interesting for spotting patterns. The script is opinionated (I am repeating myself here, I know), and I will probably be the only one ever using it for my own sites. However, the code demonstrates how Perl&#39;s newer features help keep a small script like this exciting and fun!</span><br />
<br />
<a class='textlink' href='https://stats.foo.zone'>Foostats (HTTP)</a><br />
<a class='textlink' href='https://stats.foo.zone'>Foostats (Gemini)</a><br />
<br />
<span>On OpenBSD, I&#39;ve configured the job via <span class='inlinecode'>daily.local</span> on both of my OpenBSD servers (<span class='inlinecode'>fishfinger.buetow.org</span> and <span class='inlinecode'>blowfish.buetow.org</span>; one is the master server and the other the standby, but the script runs on both and the stats are merged later in the process):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>fishfinger$ grep foostats /etc/daily.<b><u><font color="#000000">local</font></u></b>
perl /usr/local/bin/foostats.pl --parse-logs --replicate --report
</pre>
<br />
<span>Internally, <span class='inlinecode'>Foostats::Logreader</span> parses each line of the log files <span class='inlinecode'>/var/log/daemon*</span> and <span class='inlinecode'>/var/www/logs/access_log*</span>, turns timestamps into <span class='inlinecode'>YYYYMMDD/HHMMSS</span> values, hashes IP addresses with SHA3 (for anonymization), and hands a normalized event to <span class='inlinecode'>Foostats::Filter</span>. The filter compares the URI against entries in <span class='inlinecode'>fooodds.txt</span>, tracks how many requests an IP address makes within the same second, and drops anything suspicious (e.g., from web crawlers or malicious attackers). Valid events reach <span class='inlinecode'>Foostats::Aggregator</span>, which counts requests per protocol, records unique visitors for the Gemtext and Atom feeds, and remembers page-level IP sets. <span class='inlinecode'>Foostats::FileOutputter</span> writes the result as gzipped JSON files—one per day and per protocol—with IPv4/IPv6 splits, filtered counters, feed readership, and hashes for long URLs.</span><br />
<br />
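<span>The anonymization step can be sketched in a few lines. Note this is a hedged illustration, not code from Foostats: the real script hashes with SHA3, while this self-contained sketch substitutes SHA-256 from the core <span class='inlinecode'>Digest::SHA</span> module, and the sub name is made up:</span><br />
<br />
<pre>
```perl
use v5.38;
use Digest::SHA qw(sha256_hex);

# Illustrative stand-in for the IP anonymization step: the stats only
# ever see a one-way digest, never the raw address. Foostats uses SHA3;
# core Digest::SHA's SHA-256 keeps this example dependency-free.
sub anonymize_ip ($ip) {
    return sha256_hex($ip);
}

my $digest = anonymize_ip('192.0.2.1');
say $digest;          # a 64-character hex digest, not the raw address
say length $digest;   # 64
```
</pre>
<br />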
<h3 style='display: inline' id='foooddstxt'><span class='inlinecode'>fooodds.txt</span></h3><br />
<br />
<span><span class='inlinecode'>fooodds.txt</span> is a plain text list of substrings of URLs to be blocked, making it quick to shut down web crawlers. Foostats also detects rapid requests (an indicator of excessive crawling) and blocks the IP. Audit lines are written to <span class='inlinecode'>/var/log/fooodds</span>, which I review for false positives roughly once a month. The <span class='inlinecode'>Justfile</span> even has a <span class='inlinecode'>gather-fooodds</span> target that collects suspicious paths from remote logs so new patterns can be added quickly.</span><br />
<br />
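<span>A hedged sketch of what the substring filter boils down to (the patterns and sub name here are made up for illustration; the real list lives in <span class='inlinecode'>fooodds.txt</span>, one substring per line):</span><br />
<br />
<pre>
```perl
use v5.38;

# Example patterns, invented for this sketch; Foostats loads them
# from fooodds.txt instead.
my @odd_patterns = ('/wp-login.php', '/.env', '/phpmyadmin');

# A URI is "odd" if it contains any blocked substring.
sub is_odd_request ($uri_path, @patterns) {
    for my $pattern (@patterns) {
        return 1 if index($uri_path, $pattern) != -1;
    }
    return 0;
}

say is_odd_request('/blog/wp-login.php?foo=1', @odd_patterns); # 1
say is_odd_request('/gemfeed/atom.xml', @odd_patterns);        # 0
```
</pre>
<br />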
<h3 style='display: inline' id='feed-kinds'>Feed kinds</h3><br />
<br />
<span>There are different kinds of feeds being tracked by Foostats:</span><br />
<br />
<ul>
<li>The Atom web-feed</li>
<li>The same feed via Gemini</li>
<li>The Gemfeed (a special format popular in the Geminispace)</li>
</ul><br />
<h3 style='display: inline' id='aggregation-and-output'>Aggregation and output</h3><br />
<br />
<span>As mentioned, Foostats merges the stats from both hosts, master and standby. For the master-standby setup description, read:</span><br />
<br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>KISS high-availability with OpenBSD</a><br />
<br />
<span>Those gzipped files land in <span class='inlinecode'>stats/</span>. From there, <span class='inlinecode'>Foostats::Replicator</span> can pull matching files from the partner host (<span class='inlinecode'>fishfinger</span> or <span class='inlinecode'>blowfish</span>) so the view covers both servers, <span class='inlinecode'>Foostats::Merger</span> combines them into daily summaries, and <span class='inlinecode'>Foostats::Reporter</span> rebuilds Gemtext and HTML reports.</span><br />
<br />
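<span>The merge step is conceptually simple: sum the per-host counters key by key. This is a hedged sketch with made-up names, not the actual <span class='inlinecode'>Foostats::Merger</span> code, but it shows the shape of the operation (and postfix dereferencing at work):</span><br />
<br />
<pre>
```perl
use v5.38;

# Sum counter hashes from several hosts into one merged view.
sub merge_counts (@host_stats) {
    my %merged;
    for my $stats (@host_stats) {
        $merged{$_} += $stats->{$_} for keys $stats->%*;
    }
    return \%merged;
}

# Invented sample data standing in for the gzipped per-host files.
my $blowfish   = { http => 10, gemini => 4 };
my $fishfinger = { http => 7,  gemini => 2 };

my $merged = merge_counts($blowfish, $fishfinger);
say "$_: $merged->{$_}" for sort keys $merged->%*;
# gemini: 6
# http: 17
```
</pre>
<br />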
<span>Those are the raw stats files:</span><br />
<br />
<a class='textlink' href='https://blowfish.buetow.org/foostats/'>https://blowfish.buetow.org/foostats/</a><br />
<a class='textlink' href='https://fishfinger.buetow.org/foostats/'>https://fishfinger.buetow.org/foostats/</a><br />
<br />
<span>These are the 30-day reports generated (already linked earlier in this post, but adding here again for clarity):</span><br />
<br />
<a class='textlink' href='https://stats.foo.zone'>stats.foo.zone Gemini capsule dashboard</a><br />
<a class='textlink' href='https://stats.foo.zone'>stats.foo.zone HTTP dashboard</a><br />
<br />
<h3 style='display: inline' id='command-line-entry-points'>Command-line entry points</h3><br />
<br />
<span><span class='inlinecode'>foostats_main</span> is the command entry point. <span class='inlinecode'>--parse-logs</span> refreshes the gzipped files, <span class='inlinecode'>--replicate</span> runs the cross-host sync, and <span class='inlinecode'>--report</span> rebuilds the HTML and Gemini report pages. <span class='inlinecode'>--all</span> performs everything in one go. Defaults point to <span class='inlinecode'>/var/www/htdocs/buetow.org/self/foostats</span> for data, <span class='inlinecode'>/var/gemini/stats.foo.zone</span> for Gemtext output, and <span class='inlinecode'>/var/www/htdocs/gemtexter/stats.foo.zone</span> for HTML output. Replication always transfers the three most recent days&#39; worth of data over HTTPS and leaves older files untouched to save bandwidth.</span><br />
<br />
<span>The complete source lives on Codeberg here:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/foostats'>Foostats on Codeberg</a><br />
<br />
<span>Now, let&#39;s look at some new Perl features:</span><br />
<br />
<h2 style='display: inline' id='packages-as-real-blocks'>Packages as real blocks</h2><br />
<br />
<h3 style='display: inline' id='scoped-packages'>Scoped packages</h3><br />
<br />
<span>Recent Perl versions allow the block form <span class='inlinecode'>package Foo { ... }</span>. Foostats uses it for every package. Imports stay local to the block, helper subs do not leak into the global symbol table, and configuration happens where the code needs it.</span><br />
<br />
<span>The old way:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">package</font></u></b> foo;

<b><u><font color="#000000">sub</font></u></b> hello {
    <b><u><font color="#000000">print</font></u></b> <font color="#808080">"Hello from package foo\n"</font>;
}

<b><u><font color="#000000">package</font></u></b> bar;

<b><u><font color="#000000">sub</font></u></b> hello {
    <b><u><font color="#000000">print</font></u></b> <font color="#808080">"Hello from package bar\n"</font>;
}
</pre>
<br />
<span>But now it is also possible to do this:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">package</font></u></b> foo {
    <b><u><font color="#000000">sub</font></u></b> hello {
        <b><u><font color="#000000">print</font></u></b> <font color="#808080">"Hello from package foo\n"</font>;
    }
}

<b><u><font color="#000000">package</font></u></b> bar {
    <b><u><font color="#000000">sub</font></u></b> hello {
        <b><u><font color="#000000">print</font></u></b> <font color="#808080">"Hello from package bar\n"</font>;
    }
}
</pre>
<br />
<h2 style='display: inline' id='postfix-dereferencing-keeps-data-structures-tidy'>Postfix dereferencing keeps data structures tidy</h2><br />
<br />
<h3 style='display: inline' id='clear-dereferencing'>Clear dereferencing</h3><br />
<br />
<span>The script handles nested hashes and arrays. Postfix dereferencing (<span class='inlinecode'>$hash-&gt;%*</span>, <span class='inlinecode'>$array-&gt;@*</span>) keeps that readable.</span><br />
<br />
<span>E.g. instead of having to write:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">for</font></u></b> <b><u><font color="#000000">my</font></u></b> $elem (@{$array_ref}) {
    <b><u><font color="#000000">print</font></u></b> <font color="#808080">"$elem\n"</font>;
}
</pre>
<br />
<span>one can now do:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">for</font></u></b> <b><u><font color="#000000">my</font></u></b> $elem ($array_ref-&gt;@*) {
    <b><u><font color="#000000">print</font></u></b> <font color="#808080">"$elem\n"</font>;
}
</pre>
<br />
<span>You see that this feature becomes increasingly useful with nested data structures, e.g. to print all keys of a nested hash:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">print</font></u></b> <b><u><font color="#000000">for</font></u></b> <b><u><font color="#000000">keys</font></u></b> $hash-&gt;{stats}-&gt;%*;
</pre>
<br />
<span>Loops over structures like <span class='inlinecode'>$stats-&gt;{page_ips}-&gt;{urls}-&gt;%*</span> or <span class='inlinecode'>$merge{$key}-&gt;{$_}-&gt;%*</span> show which level of the structure is in play. The merger in Foostats updates host and URL statistics without building temporary arrays, and the reporter code mirrors the layout of the final tables. Before postfix dereferencing, the same code relied on braces within braces and was harder to read.</span><br />
<br />
<h2 style='display: inline' id='say-is-the-default-voice-now'><span class='inlinecode'>say</span> is the default voice now</h2><br />
<br />
<span><span class='inlinecode'>say</span> became available by default once the script switched to <span class='inlinecode'>use v5.38;</span>. It appends a newline to every message printed, comparable to Ruby&#39;s <span class='inlinecode'>puts</span>, making log messages like "Processing $path" or "Writing report to $report_path" cleaner:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">use</font></u></b> v5.<font color="#000000">38</font>;

<b><u><font color="#000000">print</font></u></b> <font color="#808080">"Hello, world!\n"</font>;    <i><font color="silver"># old way</font></i>
say <font color="#808080">"Hello, world!"</font>;        <i><font color="silver"># new way</font></i>
</pre>
<br />
<h2 style='display: inline' id='lexical-subs-promote-local-reasoning'>Lexical subs promote local reasoning</h2><br />
<br />
<span>Lexical subroutines keep helpers close to the code that needs them. In <span class='inlinecode'>Foostats::Logreader::parse_web_logs</span>, functions such as <span class='inlinecode'>my sub parse_date</span> and <span class='inlinecode'>my sub open_file</span> live only inside that scope.</span><br />
<br />
<span>This is an example of a lexical sub named <span class='inlinecode'>trim</span>, which is only visible within the outer sub named <span class='inlinecode'>process_lines</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">use</font></u></b> v5.<font color="#000000">38</font>;

<b><u><font color="#000000">sub</font></u></b> process_lines (@lines) {
    <b><u><font color="#000000">my</font></u></b> <b><u><font color="#000000">sub</font></u></b> trim ($str) {
        $str =~ <b><u><font color="#000000">s</font></u></b><font color="#808080">/^\s+|\s+$//</font>gr;
    }
    <b><u><font color="#000000">return</font></u></b> [ <b><u><font color="#000000">map</font></u></b> { trim($_) } @lines ];
}

<b><u><font color="#000000">my</font></u></b> @raw = (<font color="#808080">"  foo  "</font>, <font color="#808080">" bar"</font>, <font color="#808080">"baz "</font>);
<b><u><font color="#000000">my</font></u></b> $cleaned = process_lines(@raw);
say <b><u><font color="#000000">for</font></u></b> @$cleaned; <i><font color="silver"># prints "foo", "bar", "baz"</font></i>
</pre>
<br />
<h2 style='display: inline' id='reference-aliasing-makes-intent-explicit'>Reference aliasing makes intent explicit</h2><br />
<br />
<span>Reference aliasing can be enabled with <span class='inlinecode'>use feature qw(refaliasing)</span>; it is still experimental, so you may also want to silence the <span class='inlinecode'>experimental::refaliasing</span> warnings. It helps communicate intent more clearly (if you remember the Perl syntax, of course; otherwise, it can look rather cryptic). The filter starts with <span class='inlinecode'>\my $uri_path = \$event-&gt;{uri_path}</span> so any later modification touches the original event. This is an example with ref aliasing in action:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">use</font></u></b> feature <b><u><font color="#000000">qw</font></u></b>(refaliasing);
<b><u><font color="#000000">no</font></u></b> warnings <b><u><font color="#000000">qw</font></u></b>(experimental::refaliasing);

<b><u><font color="#000000">my</font></u></b> $hash = { foo =&gt; <font color="#000000">42</font> };
\<b><u><font color="#000000">my</font></u></b> $foo = \$hash-&gt;{foo};

$foo = <font color="#000000">99</font>;
<b><u><font color="#000000">print</font></u></b> $hash-&gt;{foo}; <i><font color="silver"># prints 99</font></i>
</pre>
<br />
<span>The aggregator in Foostats aliases <span class='inlinecode'>$self-&gt;{stats}{$date_key}</span> before updating counters, so the structure remains intact. Combined with subroutine signatures, this makes it obvious when a piece of data is shared instead of copied, preventing silent bugs. It also enables shorter names for deeply nested data structures.</span><br />
<br />
<h2 style='display: inline' id='persistent-state-without-globals'>Persistent state without globals</h2><br />
<br />
<span>A Perl state variable is declared with <span class='inlinecode'>state $var</span> and retains its value between calls to the enclosing subroutine. Foostats uses that for rate limiting and de-duplicated logging.</span><br />
<br />
<span>This is a small example demonstrating the use of a state variable in Perl:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">sub</font></u></b> counter {
    state $count = <font color="#000000">0</font>;
    $count++;
    <b><u><font color="#000000">return</font></u></b> $count;
}

say counter(); <i><font color="silver"># 1</font></i>
say counter(); <i><font color="silver"># 2</font></i>
say counter(); <i><font color="silver"># 3</font></i>
</pre>
<br />
<span><span class='inlinecode'>state</span> arrived in Perl 5.10 and has supported scalar, array, and hash variables since then; initializing array and hash <span class='inlinecode'>state</span> variables in their declaration only became possible in Perl 5.28.</span><br />
<br />
<h3 style='display: inline' id='rate-limiting-state'>Rate limiting state</h3><br />
<br />
<span>In Foostats, <span class='inlinecode'>state</span> variables store run-specific state without using package globals. <span class='inlinecode'>state %blocked</span> remembers IP hashes that already triggered the odd-request filter, and <span class='inlinecode'>state $last_time</span> and <span class='inlinecode'>state %count</span> track how many requests an IP makes within the same second.</span><br />
<br />
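<span>A minimal sketch of that rate-limiting pattern (the threshold of five requests per second and the sub name are invented for this example; Foostats&#39; actual logic differs in detail):</span><br />
<br />
<pre>
```perl
use v5.38;

# Per-second rate limiting with state variables. %count is reset
# whenever a new second begins, so it only ever holds the counts
# for the current second.
sub too_many_requests ($ip_hash, $time) {
    state $last_time = '';
    state %count;
    if ($time ne $last_time) {
        $last_time = $time;
        %count     = ();
    }
    return ++$count{$ip_hash} > 5;    # invented threshold
}

say too_many_requests('abc', '20251101120000') ? 'block' : 'ok' for 1 .. 6;
# ok (five times), then block
```
</pre>
<br />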
<h3 style='display: inline' id='de-duplicated-logging'>De-duplicated logging</h3><br />
<br />
<span><span class='inlinecode'>state %dedup</span> keeps the log output for suspicious requests to one warning per URI. Early versions used global hashes for the same tasks, which produced inconsistent results during tests. Switching to <span class='inlinecode'>state</span> removed those edge cases.</span><br />
<br />
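<span>The de-duplication pattern looks roughly like this (a simplified sketch with a made-up sub name, not the actual Foostats code):</span><br />
<br />
<pre>
```perl
use v5.38;

# Warn about a URI only the first time it is seen. The state hash
# persists between calls, so repeat offenders stay silent. Returns
# true only on the first call for a given URI.
sub warn_once ($uri) {
    state %dedup;
    return if $dedup{$uri}++;
    say "Suspicious request: $uri";
}

warn_once('/wp-login.php');   # prints once
warn_once('/wp-login.php');   # silent
```
</pre>
<br />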
<h2 style='display: inline' id='subroutine-signatures'>Subroutine signatures</h2><br />
<br />
<span>Perl now supports subroutine signatures like other modern languages do. Foostats uses them everywhere. Examples:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Old way</font></i>
<b><u><font color="#000000">sub</font></u></b> greet_old { <b><u><font color="#000000">my</font></u></b> $name = <b><u><font color="#000000">shift</font></u></b>; <b><u><font color="#000000">print</font></u></b> <font color="#808080">"Hello, $name!\n"</font> }

<i><font color="silver"># Another old way</font></i>
<b><u><font color="#000000">sub</font></u></b> greet_old2 ($) { <b><u><font color="#000000">my</font></u></b> $name = <b><u><font color="#000000">shift</font></u></b>; <b><u><font color="#000000">print</font></u></b> <font color="#808080">"Hello, $name!\n"</font> }

<i><font color="silver"># New way</font></i>
<b><u><font color="#000000">sub</font></u></b> greet ($name) { say <font color="#808080">"Hello, $name!"</font>; }

greet(<font color="#808080">"Alice"</font>); <i><font color="silver"># prints "Hello, Alice!"</font></i>
</pre>
<br />
<span>In Foostats, constructors declare <span class='inlinecode'>sub new ($class, $odds_file, $log_path)</span>, anonymous callbacks expose <span class='inlinecode'>sub ($event)</span>, and helper subs list the values they expect, e.g.:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">my</font></u></b> $anon = <b><u><font color="#000000">sub</font></u></b> ($name) {
    say <font color="#808080">"Hello, $name!"</font>;
};

$anon-&gt;(<font color="#808080">"World"</font>); <i><font color="silver"># prints "Hello, World!"</font></i>
</pre>
<br />
<h2 style='display: inline' id='defined-or-assignment-for-defaults-without-boilerplate'>Defined-or assignment for defaults without boilerplate</h2><br />
<br />
<span>The operator <span class='inlinecode'>//=</span> keeps configuration and counters simple. Environment variables may be missing when CRON runs the script, so <span class='inlinecode'>//=</span>, combined with signatures, sets defaults without warnings. Example use of that operator:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">my</font></u></b> $foo;
$foo <font color="#808080">//</font>= <font color="#000000">42</font>;
say $foo; <i><font color="silver"># prints 42</font></i>

$foo <font color="#808080">//</font>= <font color="#000000">99</font>;
say $foo; <i><font color="silver"># still prints 42, because $foo was already defined</font></i>
</pre>
<br />
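<span>Since Perl 5.38, <span class='inlinecode'>//=</span> also works directly inside subroutine signatures, so a parameter can declare its own fallback (a generic example, not taken from Foostats):</span><br />
<br />
<pre>
```perl
use v5.38;

# Signature defaults: $greeting falls back to 'Hello' when the
# caller omits it or passes undef (the //= form treats undef like
# a missing argument).
sub greet ($name, $greeting //= 'Hello') {
    return "$greeting, $name!";
}

say greet('Alice');            # Hello, Alice!
say greet('Bob', 'Howdy');     # Howdy, Bob!
say greet('Carol', undef);     # Hello, Carol!
```
</pre>
<br />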
<h2 style='display: inline' id='cleanup-with-defer'>Cleanup with <span class='inlinecode'>defer</span></h2><br />
<br />
<span>Even though not used in Foostats, this feature (similar to Go&#39;s defer) is neat to have in Perl now.</span><br />
<br />
<span>The <span class='inlinecode'>defer</span> block (<span class='inlinecode'>use feature &#39;defer&#39;</span>) schedules a piece of code to run when the current scope exits, regardless of how it exits (e.g. normal return, exception). This is perfect for ensuring resources, such as file handles, are closed.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">use</font></u></b> feature <b><u><font color="#000000">qw</font></u></b>(defer);

<b><u><font color="#000000">sub</font></u></b> parse_log_file ($path) {
    <b><u><font color="#000000">open</font></u></b> <b><u><font color="#000000">my</font></u></b> $fh, <font color="#808080">'&lt;'</font>, $path or <b><u><font color="#000000">die</font></u></b> <font color="#808080">"Cannot open $path: $!"</font>;
    defer { <b><u><font color="#000000">close</font></u></b> $fh };

    <b><u><font color="#000000">while</font></u></b> (<b><u><font color="#000000">my</font></u></b> $line = <font color="#808080">&lt;$fh&gt;</font>) {
        <i><font color="silver"># ... parsing logic that might throw an exception ...</font></i>
    }
    <i><font color="silver"># $fh is automatically closed here</font></i>
}
</pre>
<br />
<span>This pattern replaces manual <span class='inlinecode'>close</span> calls in every exit path of the subroutine and is more robust than relying solely on object destructors.</span><br />
<br />
<h2 style='display: inline' id='builtins-and-booleans'>Builtins and booleans</h2><br />
<br />
<span>The script also uses other modern additions that often go unnoticed. <span class='inlinecode'>use builtin qw(true false);</span> provides real boolean values; on Perls where <span class='inlinecode'>builtin</span> is still experimental, the <span class='inlinecode'>experimental::builtin</span> warnings category needs silencing.</span><br />
<br />
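<span>A small example of the boolean builtins (generic, not from Foostats):</span><br />
<br />
<pre>
```perl
use v5.38;
no warnings 'experimental::builtin';    # builtin was experimental pre-5.40
use builtin qw(true false);

# true and false behave like 1 and '' in boolean context, but
# builtin::is_bool can tell them apart from plain numbers.
my $enabled = true;
say $enabled ? 'yes' : 'no';                        # yes
say builtin::is_bool($enabled) ? 'bool' : 'plain';  # bool
say builtin::is_bool(1)        ? 'bool' : 'plain';  # plain
```
</pre>
<br />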
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>I want to code more in Perl again. The newer features make it a joy to write small scripts like Foostats. If you haven&#39;t looked at Perl in a while, give it another try! The main thing holding me back from writing more Perl is the lack of good tooling; for example, there is no LSP or Tree-sitter support that works as well as what&#39;s available for Go and Ruby.</span><br />
<br />
<span class='quote'>A reader pointed out that there&#39;s now a third-party Perl Tree-sitter implementation one could use:</span><br />
<br />
<a class='textlink' href='https://github.com/tree-sitter-perl/tree-sitter-perl'>https://github.com/tree-sitter-perl/tree-sitter-perl</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2025-11-02-perl-new-features-and-foostats.html'>2025-11-02 Perl New Features and Foostats (You are currently reading this)</a><br />
<a class='textlink' href='./2023-05-01-unveiling-guprecords:-uptime-records-with-raku.html'>2023-05-01 Unveiling <span class='inlinecode'>guprecords.raku</span>: Global Uptime Records with Raku</a><br />
<a class='textlink' href='./2022-05-27-perl-is-still-a-great-choice.html'>2022-05-27 Perl is still a great choice</a><br />
<a class='textlink' href='./2011-05-07-perl-daemon-service-framework.html'>2011-05-07 Perl Daemon (Service Framework)</a><br />
<a class='textlink' href='./2008-06-26-perl-poetry.html'>2008-06-26 Perl Poetry</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Key Takeaways from The Well-Grounded Rubyist</title>
        <link href="https://foo.zone/gemfeed/2025-10-11-key-takeaways-from-the-well-grounded-rubyist.html" />
        <id>https://foo.zone/gemfeed/2025-10-11-key-takeaways-from-the-well-grounded-rubyist.html</id>
        <updated>2025-10-11T15:25:14+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Some time ago, I wrote about my journey into Ruby and how 'The Well-Grounded Rubyist' helped me to get a better understanding of the language. I took a lot of notes while reading the book, and I think it's time to share some of them. This is not a comprehensive review, but rather a collection of interesting tidbits and concepts that stuck with me.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='key-takeaways-from-the-well-grounded-rubyist'>Key Takeaways from The Well-Grounded Rubyist</h1><br />
<br />
<span class='quote'>Published at 2025-10-11T15:25:14+03:00</span><br />
<br />
<span>Some time ago, I wrote about my journey into Ruby and how "The Well-Grounded Rubyist" helped me to get a better understanding of the language. I took a lot of notes while reading the book, and I think it&#39;s time to share some of them. This is not a comprehensive review, but rather a collection of interesting tidbits and concepts that stuck with me.</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#key-takeaways-from-the-well-grounded-rubyist'>Key Takeaways from The Well-Grounded Rubyist</a></li>
<li>⇢ <a href='#the-object-model'>The Object Model</a></li>
<li>⇢ ⇢ <a href='#everything-is-an-object-almost'>Everything is an object (almost)</a></li>
<li>⇢ ⇢ <a href='#the-self-keyword'>The <span class='inlinecode'>self</span> keyword</a></li>
<li>⇢ ⇢ <a href='#singleton-methods'>Singleton Methods</a></li>
<li>⇢ ⇢ <a href='#classes-are-objects'>Classes are Objects</a></li>
<li>⇢ <a href='#control-flow-and-methods'>Control Flow and Methods</a></li>
<li>⇢ ⇢ <a href='#case-and-the--operator'><span class='inlinecode'>case</span> and the <span class='inlinecode'>===</span> operator</a></li>
<li>⇢ ⇢ <a href='#blocks-and-yield'>Blocks and <span class='inlinecode'>yield</span></a></li>
<li>⇢ <a href='#fun-with-data-types'>Fun with Data Types</a></li>
<li>⇢ ⇢ <a href='#symbols'>Symbols</a></li>
<li>⇢ ⇢ <a href='#arrays-and-hashes'>Arrays and Hashes</a></li>
<li>⇢ <a href='#final-thoughts'>Final Thoughts</a></li>
</ul><br />
<a class='textlink' href='./2021-07-04-the-well-grounded-rubyist.html'>My first post about the book.</a><br />
<br />
<a href='./the-well-grounded-rubyist/book-cover.jpg'><img alt='The Well-Grounded Rubyist book cover' title='The Well-Grounded Rubyist book cover' src='./the-well-grounded-rubyist/book-cover.jpg' /></a><br />
<br />
<h2 style='display: inline' id='the-object-model'>The Object Model</h2><br />
<br />
<span>One of the most fascinating aspects of Ruby is its object model. The book does a great job of explaining the details.</span><br />
<br />
<h3 style='display: inline' id='everything-is-an-object-almost'>Everything is an object (almost)</h3><br />
<br />
<span>In Ruby, most things are objects. This includes numbers, strings, and even classes themselves. This has some interesting consequences. For example, you can&#39;t use <span class='inlinecode'>i++</span> like in C or Java. Integers are immutable objects. <span class='inlinecode'>1</span> is always the same object. <span class='inlinecode'>1 + 1</span> returns a new object, <span class='inlinecode'>2</span>.</span><br />
<br />
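<span>A quick sketch of my own (not from the book) showing what immutability means in practice: <span class='inlinecode'>i += 1</span> rebinds the variable to a new object instead of mutating the number.</span><br />
<br />
```ruby
i = 1
id_before = i.object_id

# i += 1 is shorthand for i = i + 1: the variable is rebound to a
# different Integer object; the object 1 itself is never mutated
i += 1

p i                            # => 2
p 1.object_id == 1.object_id   # => true, 1 is always the same object
p i.object_id == id_before     # => false, i now points at another object
```
<br />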
<h3 style='display: inline' id='the-self-keyword'>The <span class='inlinecode'>self</span> keyword</h3><br />
<br />
<span>There is always a current object, <span class='inlinecode'>self</span>. If you call a method without an explicit receiver, it&#39;s called on <span class='inlinecode'>self</span>. For example, <span class='inlinecode'>puts "hello"</span> is really a call on <span class='inlinecode'>self</span>; <span class='inlinecode'>puts</span> is a private method mixed in from the <span class='inlinecode'>Kernel</span> module, which is why it is normally written without an explicit receiver.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># At the top level, self is the main object</font></i>
p <b><u><font color="#000000">self</font></u></b>
<i><font color="silver"># =&gt; main</font></i>
p <b><u><font color="#000000">self</font></u></b>.<b><u><font color="#000000">class</font></u></b>
<i><font color="silver"># =&gt; Object</font></i>

<b><u><font color="#000000">def</font></u></b> foo
  <i><font color="silver"># Inside a method, self is the object that received the call</font></i>
  p <b><u><font color="#000000">self</font></u></b>
<b><u><font color="#000000">end</font></u></b>

foo
<i><font color="silver"># =&gt; main</font></i>
</pre>
<br />
<span>This code demonstrates how <span class='inlinecode'>self</span> changes depending on the context. At the top level, it&#39;s <span class='inlinecode'>main</span>, an instance of <span class='inlinecode'>Object</span>. When <span class='inlinecode'>foo</span> is called without a receiver, it&#39;s called on <span class='inlinecode'>main</span>.</span><br />
<br />
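<span>Another sketch of mine (the <span class='inlinecode'>Widget</span> class is made up for illustration): inside a class body, <span class='inlinecode'>self</span> is the class object itself, and inside an instance method it is the receiving instance.</span><br />
<br />
```ruby
class Widget
  # In the body of a class definition, self is the class object itself
  p self               # => Widget

  def whoami
    # In an instance method, self is the object the method was called on
    self
  end
end

w = Widget.new
p w.whoami.equal?(w)   # => true
```
<br />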
<h3 style='display: inline' id='singleton-methods'>Singleton Methods</h3><br />
<br />
<span>You can add methods to individual objects. These are called singleton methods.</span><br />
<br />
<pre>obj = <font color="#808080">"a string"</font>

<b><u><font color="#000000">def</font></u></b> obj.shout
  <b><u><font color="#000000">self</font></u></b>.upcase + <font color="#808080">"!"</font>
<b><u><font color="#000000">end</font></u></b>

p obj.shout
<i><font color="silver"># =&gt; "A STRING!"</font></i>

obj2 = <font color="#808080">"another string"</font>
<i><font color="silver"># obj2.shout would raise a NoMethodError</font></i>
</pre>
<br />
<span>Here, the <span class='inlinecode'>shout</span> method is only available on the <span class='inlinecode'>obj</span> object. This is a powerful feature for adding behavior to specific instances.</span><br />
<br />
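<span>You can also verify this yourself (my example, not the book&#39;s): the method lives in the object&#39;s singleton class, so <span class='inlinecode'>String</span> itself is untouched.</span><br />
<br />
```ruby
obj = "a string"

def obj.shout
  upcase + "!"
end

# The method is defined in obj's singleton class, not in String
p obj.shout                       # => "A STRING!"
p obj.singleton_methods           # => [:shout]
p String.method_defined?(:shout)  # => false
```
<br />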
<h3 style='display: inline' id='classes-are-objects'>Classes are Objects</h3><br />
<br />
<span>Classes themselves are objects, instances of the <span class='inlinecode'>Class</span> class. This means you can create classes dynamically.</span><br />
<br />
<pre>MyClass = Class.new <b><u><font color="#000000">do</font></u></b>
  <b><u><font color="#000000">def</font></u></b> say_hello
    puts <font color="#808080">"Hello from a dynamically created class!"</font>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>

instance = MyClass.new
instance.say_hello
<i><font color="silver"># =&gt; Hello from a dynamically created class!</font></i>
</pre>
<br />
<span>This shows how to create a new class and assign it to a constant. This is what happens behind the scenes when you use the <span class='inlinecode'>class</span> keyword.</span><br />
<br />
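<span><span class='inlinecode'>Class.new</span> also accepts a superclass argument, mirroring subclassing with the <span class='inlinecode'>class</span> keyword. A small sketch of mine (<span class='inlinecode'>Base</span> and <span class='inlinecode'>Child</span> are made-up names):</span><br />
<br />
```ruby
Base = Class.new

# Passing a superclass to Class.new is the dynamic equivalent of
# subclassing Base with the class keyword
Child = Class.new(Base) do
  def greet
    "hello"
  end
end

p Child.superclass           # => Base
p Child.instance_of?(Class)  # => true
p Child.new.greet            # => "hello"
```
<br />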
<h2 style='display: inline' id='control-flow-and-methods'>Control Flow and Methods</h2><br />
<br />
<span>The book clarified many things about how methods and control flow work in Ruby.</span><br />
<br />
<h3 style='display: inline' id='case-and-the--operator'><span class='inlinecode'>case</span> and the <span class='inlinecode'>===</span> operator</h3><br />
<br />
<span>The <span class='inlinecode'>case</span> statement is more powerful than I thought. It uses the <span class='inlinecode'>===</span> (threequals or case equality) operator for comparison, not <span class='inlinecode'>==</span>. Different classes can implement <span class='inlinecode'>===</span> in their own way.</span><br />
<br />
<pre><i><font color="silver"># For ranges, it checks for inclusion</font></i>
p (<font color="#000000">1</font>..<font color="#000000">5</font>) === <font color="#000000">3</font> <i><font color="silver"># =&gt; true</font></i>

<i><font color="silver"># For classes, it checks if the object is an instance of the class</font></i>
p String === <font color="#808080">"hello"</font> <i><font color="silver"># =&gt; true</font></i>

<i><font color="silver"># For regexes, it checks for a match</font></i>
p /llo/ === <font color="#808080">"hello"</font> <i><font color="silver"># =&gt; true</font></i>

<b><u><font color="#000000">def</font></u></b> check(value)
  <b><u><font color="#000000">case</font></u></b> value
  <b><u><font color="#000000">when</font></u></b> String
    <font color="#808080">"It's a string"</font>
  <b><u><font color="#000000">when</font></u></b> (<font color="#000000">1</font>..<font color="#000000">10</font>)
    <font color="#808080">"It's a number between 1 and 10"</font>
  <b><u><font color="#000000">else</font></u></b>
    <font color="#808080">"Something else"</font>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>

p check(<font color="#000000">5</font>) <i><font color="silver"># =&gt; "It's a number between 1 and 10"</font></i>
</pre>
<br />
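<span>Because <span class='inlinecode'>case</span> dispatches through <span class='inlinecode'>===</span>, your own classes can hook into it by implementing that operator. A sketch of mine (the <span class='inlinecode'>EvenNumber</span> class is invented for illustration):</span><br />
<br />
```ruby
class EvenNumber
  # case/when calls EvenNumber === value; the branch is taken if truthy
  def self.===(value)
    value.is_a?(Integer) and value.even?
  end
end

def describe(value)
  case value
  when EvenNumber
    "an even number"
  else
    "something else"
  end
end

p describe(4)      # => "an even number"
p describe(3)      # => "something else"
p describe("hi")   # => "something else"
```
<br />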
<h3 style='display: inline' id='blocks-and-yield'>Blocks and <span class='inlinecode'>yield</span></h3><br />
<br />
<span>Blocks are a cornerstone of Ruby. You can pass them to methods to customize their behavior. The <span class='inlinecode'>yield</span> keyword is used to call the block.</span><br />
<br />
<pre><b><u><font color="#000000">def</font></u></b> my_iterator
  puts <font color="#808080">"Entering the method"</font>
  <b><u><font color="#000000">yield</font></u></b>
  puts <font color="#808080">"Back in the method"</font>
  <b><u><font color="#000000">yield</font></u></b>
<b><u><font color="#000000">end</font></u></b>

my_iterator { puts <font color="#808080">"Inside the block"</font> }
<i><font color="silver"># Entering the method</font></i>
<i><font color="silver"># Inside the block</font></i>
<i><font color="silver"># Back in the method</font></i>
<i><font color="silver"># Inside the block</font></i>
</pre>
<br />
<span>This simple iterator shows how <span class='inlinecode'>yield</span> transfers control to the block. You can also pass arguments to <span class='inlinecode'>yield</span> and get a return value from the block.</span><br />
<br />
<pre><b><u><font color="#000000">def</font></u></b> with_return
  result = <b><u><font color="#000000">yield</font></u></b>(<font color="#000000">5</font>)
  puts <font color="#808080">"The block returned #{result}"</font>
<b><u><font color="#000000">end</font></u></b>

with_return { |n| n * <font color="#000000">2</font> }
<i><font color="silver"># =&gt; The block returned 10</font></i>
</pre>
<br />
<span>This demonstrates passing an argument to the block and using its return value.</span><br />
<br />
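<span>A related detail worth adding (my own note): calling <span class='inlinecode'>yield</span> when no block was passed raises a <span class='inlinecode'>LocalJumpError</span>, so methods that treat the block as optional guard it with <span class='inlinecode'>block_given?</span>.</span><br />
<br />
```ruby
def double_or_default(n)
  # yield without a block raises LocalJumpError, so guard with block_given?
  return n unless block_given?
  yield(n)
end

p double_or_default(5)                 # => 5
p double_or_default(5) { |n| n * 2 }   # => 10
```
<br />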
<h2 style='display: inline' id='fun-with-data-types'>Fun with Data Types</h2><br />
<br />
<span>Ruby&#39;s core data types are full of nice little features.</span><br />
<br />
<h3 style='display: inline' id='symbols'>Symbols</h3><br />
<br />
<span>Symbols are like immutable strings. They are great for keys in hashes because they are unique and memory-efficient.</span><br />
<br />
<pre><i><font color="silver"># Two strings with the same content are different objects</font></i>
p <font color="#808080">"foo"</font>.object_id
p <font color="#808080">"foo"</font>.object_id

<i><font color="silver"># Two symbols with the same content are the same object</font></i>
p :foo.object_id
p :foo.object_id

<i><font color="silver"># Modern hash syntax uses symbols as keys</font></i>
my_hash = { name: <font color="#808080">"Paul"</font>, language: <font color="#808080">"Ruby"</font> }
p my_hash[:name] <i><font color="silver"># =&gt; "Paul"</font></i>
</pre>
<br />
<span>This code highlights the difference between strings and symbols and shows the convenient hash syntax.</span><br />
<br />
<h3 style='display: inline' id='arrays-and-hashes'>Arrays and Hashes</h3><br />
<br />
<span>Arrays and hashes have a rich API. The <span class='inlinecode'>%w</span> and <span class='inlinecode'>%i</span> shortcuts for creating arrays of strings and symbols are very handy.</span><br />
<br />
<pre><i><font color="silver"># Array of strings</font></i>
p %w[one two three]
<i><font color="silver"># =&gt; ["one", "two", "three"]</font></i>

<i><font color="silver"># Array of symbols</font></i>
p %i[one two three]
<i><font color="silver"># =&gt; [:one, :two, :three]</font></i>
</pre>
<br />
<span>A quick way to create arrays. You can also retrieve multiple values at once.</span><br />
<br />
<pre>arr = [<font color="#000000">10</font>, <font color="#000000">20</font>, <font color="#000000">30</font>, <font color="#000000">40</font>, <font color="#000000">50</font>]
p arr.values_at(<font color="#000000">0</font>, <font color="#000000">2</font>, <font color="#000000">4</font>)
<i><font color="silver"># =&gt; [10, 30, 50]</font></i>

hash = { a: <font color="#000000">1</font>, b: <font color="#000000">2</font>, c: <font color="#000000">3</font> }
p hash.values_at(:a, :c)
<i><font color="silver"># =&gt; [1, 3]</font></i>
</pre>
<br />
<span>The <span class='inlinecode'>values_at</span> method is a concise way to get multiple elements.</span><br />
<br />
<h2 style='display: inline' id='final-thoughts'>Final Thoughts</h2><br />
<br />
<span>These are just a few of the many things I learned from "The Well-Grounded Rubyist". The book gave me a much deeper appreciation for the language and its design. If you are a Ruby programmer, I highly recommend it. Meanwhile, I have also read "Programming Ruby 3.3", but I haven&#39;t had time to process my notes from that one yet.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other Ruby-related posts:</span><br />
<br />
<a class='textlink' href='./2026-03-02-rcm-ruby-configuration-management-dsl.html'>2026-03-02 RCM: The Ruby Configuration Management DSL</a><br />
<a class='textlink' href='./2025-10-11-key-takeaways-from-the-well-grounded-rubyist.html'>2025-10-11 Key Takeaways from The Well-Grounded Rubyist (You are currently reading this)</a><br />
<a class='textlink' href='./2021-07-04-the-well-grounded-rubyist.html'>2021-07-04 The Well-Grounded Rubyist</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</title>
        <link href="https://foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html" />
        <id>https://foo.zone/gemfeed/2025-10-02-f3s-kubernetes-with-freebsd-part-7.html</id>
        <updated>2025-12-30T10:11:58+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the seventh post in the f3s series about my self-hosting home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</h1><br />
<br />
<span class='quote'>Published at 2025-10-02T11:27:19+03:00, last updated at 2025-12-30T10:11:58+02:00</span><br />
<br />
<span>This is the seventh post in the f3s series about my self-hosting home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#important-note-gitops-migration'>Important Note: GitOps Migration</a></li>
<li>⇢ <a href='#updating'>Updating</a></li>
<li>⇢ <a href='#installing-k3s'>Installing k3s</a></li>
<li>⇢ ⇢ <a href='#generating-k3stoken-and-starting-the-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting the first k3s node</a></li>
<li>⇢ ⇢ <a href='#adding-the-remaining-nodes-to-the-cluster'>Adding the remaining nodes to the cluster</a></li>
<li>⇢ <a href='#test-deployments'>Test deployments</a></li>
<li>⇢ ⇢ <a href='#test-deployment-to-kubernetes'>Test deployment to Kubernetes</a></li>
<li>⇢ ⇢ <a href='#test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</a></li>
<li>⇢ ⇢ <a href='#scaling-traefik-for-faster-failover'>Scaling Traefik for faster failover</a></li>
<li>⇢ <a href='#make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</a></li>
<li>⇢ ⇢ <a href='#openbsd-relayd-configuration'>OpenBSD relayd configuration</a></li>
<li>⇢ ⇢ <a href='#automatic-failover-when-f3s-cluster-is-down'>Automatic failover when f3s cluster is down</a></li>
<li>⇢ ⇢ <a href='#openbsd-httpd-fallback-configuration'>OpenBSD httpd fallback configuration</a></li>
<li>⇢ <a href='#exposing-services-via-lan-ingress'>Exposing services via LAN ingress</a></li>
<li>⇢ ⇢ <a href='#architecture-overview'>Architecture overview</a></li>
<li>⇢ ⇢ <a href='#installing-cert-manager'>Installing cert-manager</a></li>
<li>⇢ ⇢ <a href='#configuring-freebsd-relayd-for-lan-access'>Configuring FreeBSD relayd for LAN access</a></li>
<li>⇢ ⇢ <a href='#adding-lan-ingress-to-services'>Adding LAN ingress to services</a></li>
<li>⇢ ⇢ <a href='#client-side-dns-and-ca-setup'>Client-side DNS and CA setup</a></li>
<li>⇢ ⇢ <a href='#scaling-to-other-services'>Scaling to other services</a></li>
<li>⇢ ⇢ <a href='#tls-offloaders-summary'>TLS offloaders summary</a></li>
<li>⇢ <a href='#deploying-the-private-docker-image-registry'>Deploying the private Docker image registry</a></li>
<li>⇢ ⇢ <a href='#prepare-the-nfs-backed-storage'>Prepare the NFS-backed storage</a></li>
<li>⇢ ⇢ <a href='#install-or-upgrade-the-chart'>Install (or upgrade) the chart</a></li>
<li>⇢ ⇢ <a href='#allow-nodes-and-workstations-to-trust-the-registry'>Allow nodes and workstations to trust the registry</a></li>
<li>⇢ ⇢ <a href='#pushing-and-pulling-images'>Pushing and pulling images</a></li>
<li>⇢ <a href='#example-anki-sync-server-from-the-private-registry'>Example: Anki Sync Server from the private registry</a></li>
<li>⇢ ⇢ <a href='#build-and-push-the-image'>Build and push the image</a></li>
<li>⇢ ⇢ <a href='#create-the-anki-secret-and-storage-on-the-cluster'>Create the Anki secret and storage on the cluster</a></li>
<li>⇢ ⇢ <a href='#deploy-the-chart'>Deploy the chart</a></li>
<li>⇢ <a href='#nfsv4-uid-mapping-for-postgres-backed-and-other-apps'>NFSv4 UID mapping for Postgres-backed (and other) apps</a></li>
<li>⇢ ⇢ <a href='#helm-charts-currently-in-service'>Helm charts currently in service</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>In this blog post, I am finally going to install k3s (the Kubernetes distribution I use) on the whole setup and deploy the first workloads (Helm charts and a private Docker registry) to it.</span><br />
<br />
<a class='textlink' href='https://k3s.io'>https://k3s.io</a><br />
<br />
<h2 style='display: inline' id='important-note-gitops-migration'>Important Note: GitOps Migration</h2><br />
<br />
<span><strong>Note:</strong> After publishing this blog post, the f3s cluster was migrated from imperative Helm deployments to declarative GitOps using ArgoCD. The Kubernetes manifests and Helm charts in the repository have been reorganized for ArgoCD-based continuous deployment.</span><br />
<br />
<span><strong>To view the exact manifests and charts as they existed when this blog post was written</strong> (before the ArgoCD migration), check out the pre-ArgoCD revision:</span><br />
<br />
<pre>$ git clone https://codeberg.org/snonux/conf.git
$ cd conf
$ git checkout 15a86f3  <i><font color="silver"># Last commit before ArgoCD migration</font></i>
$ cd f3s/
</pre>
<br />
<span><strong>Current master branch</strong> contains the ArgoCD-managed versions with:</span><br />
<ul>
<li>Application manifests organized under <span class='inlinecode'>argocd-apps/{monitoring,services,infra,test}/</span></li>
<li>Additional resources under <span class='inlinecode'>*/manifests/</span> directories (e.g., <span class='inlinecode'>prometheus/manifests/</span>)</li>
<li>Justfiles updated to trigger ArgoCD syncs instead of direct Helm commands</li>
</ul><br />
<span>The deployment concepts and architecture remain the same—only the deployment method changed from imperative (<span class='inlinecode'>helm install/upgrade</span>) to declarative (GitOps with ArgoCD).</span><br />
<br />
<h2 style='display: inline' id='updating'>Updating</h2><br />
<br />
<span>Before proceeding, I brought all systems involved up to date. On all three Rocky Linux 9 boxes <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span>:</span><br />
<br />
<pre>dnf update -y
reboot
</pre>
<br />
<span>On the FreeBSD hosts, I upgraded from FreeBSD 14.2 to 14.3-RELEASE, running this on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>:</span><br />
<br />
<pre>paul@f0:~ % doas freebsd-update fetch
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas reboot
.
.
.
paul@f0:~ % doas freebsd-update -r <font color="#000000">14.3</font>-RELEASE upgrade
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas reboot
.
.
.
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas pkg update
paul@f0:~ % doas pkg upgrade
paul@f0:~ % doas reboot
.
.
.
paul@f0:~ % uname -a
FreeBSD f0.lan.buetow.org <font color="#000000">14.3</font>-RELEASE FreeBSD <font color="#000000">14.3</font>-RELEASE
        releng/<font color="#000000">14.3</font>-n<font color="#000000">271432</font>-8c9ce319fef7 GENERIC amd64
</pre>
<br />
<h2 style='display: inline' id='installing-k3s'>Installing k3s</h2><br />
<br />
<h3 style='display: inline' id='generating-k3stoken-and-starting-the-first-k3s-node'>Generating <span class='inlinecode'>K3S_TOKEN</span> and starting the first k3s node</h3><br />
<br />
<span>I generated the k3s token on my Fedora laptop with <span class='inlinecode'>pwgen -n 32</span> and selected one of the results. Then, on all three <span class='inlinecode'>r</span> hosts, I ran the following (replace SECRET_TOKEN with the actual secret):</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># echo -n SECRET_TOKEN &gt; ~/.k3s_token</font></i>
</pre>
<br />
<span>The following steps are also documented on the k3s website:</span><br />
<br />
<a class='textlink' href='https://docs.k3s.io/datastore/ha-embedded'>https://docs.k3s.io/datastore/ha-embedded</a><br />
<br />
<span>To bootstrap k3s on the first node, I ran this on <span class='inlinecode'>r0</span>:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i>
        sh -s - server --cluster-init \
        --node-ip=<font color="#000000">192.168</font>.<font color="#000000">2.120</font> \
        --advertise-address=<font color="#000000">192.168</font>.<font color="#000000">2.120</font> \
        --tls-san=r0.wg0.wan.buetow.org
[INFO]  Finding release <b><u><font color="#000000">for</font></u></b> channel stable
[INFO]  Using v1.<font color="#000000">32.6</font>+k3s1 as release
.
.
.
[INFO]  systemd: Starting k3s
</pre>
<br />
<span>Note: The <span class='inlinecode'>--node-ip</span> and <span class='inlinecode'>--advertise-address</span> flags are important to ensure that the embedded etcd cluster communicates over the WireGuard interface (192.168.2.x) rather than the LAN interface (192.168.1.x). This ensures that all control plane traffic is encrypted via WireGuard.</span><br />
<br />
<h3 style='display: inline' id='adding-the-remaining-nodes-to-the-cluster'>Adding the remaining nodes to the cluster</h3><br />
<br />
<span>Then I ran on the other two nodes <span class='inlinecode'>r1</span> and <span class='inlinecode'>r2</span>:</span><br />
<br />
<pre>[root@r1 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i>
        sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \
        --node-ip=<font color="#000000">192.168</font>.<font color="#000000">2.121</font> \
        --advertise-address=<font color="#000000">192.168</font>.<font color="#000000">2.121</font> \
        --tls-san=r1.wg0.wan.buetow.org

[root@r2 ~]<i><font color="silver"># curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \</font></i>
        sh -s - server --server https://r<font color="#000000">0</font>.wg0.wan.buetow.org:<font color="#000000">6443</font> \
        --node-ip=<font color="#000000">192.168</font>.<font color="#000000">2.122</font> \
        --advertise-address=<font color="#000000">192.168</font>.<font color="#000000">2.122</font> \
        --tls-san=r2.wg0.wan.buetow.org
.
.
.

</pre>
<br />
<span>Once done, I had a three-node Kubernetes cluster control plane:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># kubectl get nodes</font></i>
NAME                STATUS   ROLES                       AGE     VERSION
r0.lan.buetow.org   Ready    control-plane,etcd,master   4m44s   v1.<font color="#000000">32.6</font>+k3s1
r1.lan.buetow.org   Ready    control-plane,etcd,master   3m13s   v1.<font color="#000000">32.6</font>+k3s1
r2.lan.buetow.org   Ready    control-plane,etcd,master   30s     v1.<font color="#000000">32.6</font>+k3s1

[root@r0 ~]<i><font color="silver"># kubectl get pods --all-namespaces</font></i>
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-5688667fd4-fs2jj                  <font color="#000000">1</font>/<font color="#000000">1</font>     Running     <font color="#000000">0</font>          5m27s
kube-system   helm-install-traefik-crd-f9hgd            <font color="#000000">0</font>/<font color="#000000">1</font>     Completed   <font color="#000000">0</font>          5m27s
kube-system   helm-install-traefik-zqqqk                <font color="#000000">0</font>/<font color="#000000">1</font>     Completed   <font color="#000000">2</font>          5m27s
kube-system   local-path-provisioner-774c6665dc-jqlnc   <font color="#000000">1</font>/<font color="#000000">1</font>     Running     <font color="#000000">0</font>          5m27s
kube-system   metrics-server-6f4c6675d5-5xpmp           <font color="#000000">1</font>/<font color="#000000">1</font>     Running     <font color="#000000">0</font>          5m27s
kube-system   svclb-traefik-411cec5b-cdp2l              <font color="#000000">2</font>/<font color="#000000">2</font>     Running     <font color="#000000">0</font>          78s
kube-system   svclb-traefik-411cec5b-f625r              <font color="#000000">2</font>/<font color="#000000">2</font>     Running     <font color="#000000">0</font>          4m58s
kube-system   svclb-traefik-411cec5b-twrd<font color="#000000">7</font>              <font color="#000000">2</font>/<font color="#000000">2</font>     Running     <font color="#000000">0</font>          4m2s
kube-system   traefik-c98fdf6fb-lt6fx                   <font color="#000000">1</font>/<font color="#000000">1</font>     Running     <font color="#000000">0</font>          4m58s
</pre>
<br />
<span>In order to connect with <span class='inlinecode'>kubectl</span> from my Fedora laptop, I had to copy <span class='inlinecode'>/etc/rancher/k3s/k3s.yaml</span> from <span class='inlinecode'>r0</span> to <span class='inlinecode'>~/.kube/config</span> and then replace the value of the server field with <span class='inlinecode'>r0.lan.buetow.org</span>. kubectl can now manage the cluster. Note that this step has to be repeated when I want to connect to another node of the cluster (e.g. when <span class='inlinecode'>r0</span> is down).</span><br />
<br />
<h2 style='display: inline' id='test-deployments'>Test deployments</h2><br />
<br />
<h3 style='display: inline' id='test-deployment-to-kubernetes'>Test deployment to Kubernetes</h3><br />
<br />
<span>Let&#39;s create a test namespace:</span><br />
<br />
<pre>&gt; ~ kubectl create namespace <b><u><font color="#000000">test</font></u></b>
namespace/test created

&gt; ~ kubectl get namespaces
NAME              STATUS   AGE
default           Active   6h11m
kube-node-lease   Active   6h11m
kube-public       Active   6h11m
kube-system       Active   6h11m
<b><u><font color="#000000">test</font></u></b>              Active   5s

&gt; ~ kubectl config set-context --current --namespace=<b><u><font color="#000000">test</font></u></b>
Context <font color="#808080">"default"</font> modified.
</pre>
<br />
<span>And let&#39;s also create an Apache test pod:</span><br />
<br />
<pre>&gt; ~ cat &lt;&lt;END &gt; apache-deployment.yaml
<i><font color="silver"># Apache HTTP Server Deployment</font></i>
apiVersion: apps/v<font color="#000000">1</font>
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: <font color="#000000">1</font>
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:latest
        ports:
        <i><font color="silver"># Container port where Apache listens</font></i>
        - containerPort: <font color="#000000">80</font>
END

&gt; ~ kubectl apply -f apache-deployment.yaml
deployment.apps/apache-deployment created

&gt; ~ kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/apache-deployment-5fd955856f-4pjmf   <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">0</font>          7s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apache-deployment   <font color="#000000">1</font>/<font color="#000000">1</font>     <font color="#000000">1</font>            <font color="#000000">1</font>           7s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/apache-deployment-5fd955856f   <font color="#000000">1</font>         <font color="#000000">1</font>         <font color="#000000">1</font>       7s
</pre>
<br />
<span>Let&#39;s also create a service: </span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ cat &lt;&lt;END &gt; apache-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apache
  name: apache-service
spec:
  ports:
    - name: web
      port: <font color="#000000">80</font>
      protocol: TCP
      <i><font color="silver"># Expose port 80 on the service</font></i>
      targetPort: <font color="#000000">80</font>
  selector:
  <i><font color="silver"># Link this service to pods with the label app=apache</font></i>
    app: apache
END

&gt; ~ kubectl apply -f apache-service.yaml
service/apache-service created

&gt; ~ kubectl get service
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
apache-service   ClusterIP   <font color="#000000">10.43</font>.<font color="#000000">249.165</font>   &lt;none&gt;        <font color="#000000">80</font>/TCP    4s
</pre>
<br />
<span>Now let&#39;s create an ingress:</span><br />
<br />
<span class='quote'>Note: I&#39;ve modified the hosts listed in this example after I published this blog post to ensure that there aren&#39;t any bots scraping it.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ cat &lt;&lt;END &gt; apache-ingress.yaml

apiVersion: networking.k8s.io/v<font color="#000000">1</font>
kind: Ingress
metadata:
  name: apache-ingress
  namespace: <b><u><font color="#000000">test</font></u></b>
  annotations:
    spec.ingressClassName: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: <font color="#000000">80</font>
    - host: standby.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: <font color="#000000">80</font>
    - host: www.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: <font color="#000000">80</font>
END

&gt; ~ kubectl apply -f apache-ingress.yaml
ingress.networking.k8s.io/apache-ingress created

&gt; ~ kubectl describe ingress
Name:             apache-ingress
Labels:           &lt;none&gt;
Namespace:        <b><u><font color="#000000">test</font></u></b>
Address:          <font color="#000000">192.168</font>.<font color="#000000">2.120</font>,<font color="#000000">192.168</font>.<font color="#000000">2.121</font>,<font color="#000000">192.168</font>.<font color="#000000">2.122</font>
Ingress Class:    traefik
Default backend:  &lt;default&gt;
Rules:
  Host                    Path  Backends
  ----                    ----  --------
  f3s.foo.zone
                          /   apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>)
  standby.f3s.foo.zone
                          /   apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>)
  www.f3s.foo.zone
                          /   apache-service:<font color="#000000">80</font> (<font color="#000000">10.42</font>.<font color="#000000">1.11</font>:<font color="#000000">80</font>)
Annotations:              spec.ingressClassName: traefik
                          traefik.ingress.kubernetes.io/router.entrypoints: web
Events:                   &lt;none&gt;
</pre>
<br />
<span>Note:</span><br />
<br />
<ul>
<li>In the ingress, I use plain HTTP (web) for the Traefik rule, as all the "production" traffic will be routed through a WireGuard tunnel anyway, as I will show later.</li>
</ul><br />
<span>So I tested the Apache web server through the ingress rule:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ curl -H <font color="#808080">"Host: www.f3s.foo.zone"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font>
&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h<font color="#000000">1</font>&gt;&lt;/body&gt;&lt;/html&gt;
</pre>
<br />
<h3 style='display: inline' id='test-deployment-with-persistent-volume-claim'>Test deployment with persistent volume claim</h3><br />
<br />
<span>Next, I modified the Apache example to serve the <span class='inlinecode'>htdocs</span> directory from the NFS share I created in the previous blog post. I used the following manifests. Most of them are the same as before, except for the persistent volume claim and the volume mount in the Apache deployment.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ cat &lt;&lt;END &gt; apache-deployment.yaml
<i><font color="silver"># Apache HTTP Server Deployment</font></i>
apiVersion: apps/v<font color="#000000">1</font>
kind: Deployment
metadata:
  name: apache-deployment
  namespace: <b><u><font color="#000000">test</font></u></b>
spec:
  replicas: <font color="#000000">2</font>
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:latest
        ports:
        <i><font color="silver"># Container port where Apache listens</font></i>
        - containerPort: <font color="#000000">80</font>
        readinessProbe:
          httpGet:
            path: /
            port: <font color="#000000">80</font>
          initialDelaySeconds: <font color="#000000">5</font>
          periodSeconds: <font color="#000000">10</font>
        livenessProbe:
          httpGet:
            path: /
            port: <font color="#000000">80</font>
          initialDelaySeconds: <font color="#000000">15</font>
          periodSeconds: <font color="#000000">10</font>
        volumeMounts:
        - name: apache-htdocs
          mountPath: /usr/local/apache<font color="#000000">2</font>/htdocs/
      volumes:
      - name: apache-htdocs
        persistentVolumeClaim:
          claimName: example-apache-pvc
END

&gt; ~ cat &lt;&lt;END &gt; apache-ingress.yaml
apiVersion: networking.k8s.io/v<font color="#000000">1</font>
kind: Ingress
metadata:
  name: apache-ingress
  namespace: <b><u><font color="#000000">test</font></u></b>
  annotations:
    spec.ingressClassName: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: <font color="#000000">80</font>
    - host: standby.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: <font color="#000000">80</font>
    - host: www.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: <font color="#000000">80</font>
END

&gt; ~ cat &lt;&lt;END &gt; apache-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-apache-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/nfs/k3svolumes/example-apache-volume-claim
    <b><u><font color="#000000">type</font></u></b>: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-apache-pvc
  namespace: <b><u><font color="#000000">test</font></u></b>
spec:
  storageClassName: <font color="#808080">""</font>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
END

&gt; ~ cat &lt;&lt;END &gt; apache-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apache
  name: apache-service
  namespace: <b><u><font color="#000000">test</font></u></b>
spec:
  ports:
    - name: web
      port: <font color="#000000">80</font>
      protocol: TCP
      <i><font color="silver"># Expose port 80 on the service</font></i>
      targetPort: <font color="#000000">80</font>
  selector:
  <i><font color="silver"># Link this service to pods with the label app=apache</font></i>
    app: apache
END
</pre>
<br />
<span>I applied the manifests:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ kubectl apply -f apache-persistent-volume.yaml
&gt; ~ kubectl apply -f apache-service.yaml
&gt; ~ kubectl apply -f apache-deployment.yaml
&gt; ~ kubectl apply -f apache-ingress.yaml
</pre>
<br />
<span>Looking at the deployment, I could see that it failed because the directory didn&#39;t exist yet on the NFS share (note that I also increased the replica count to 2, so that if one node goes down, there&#39;s already a replica running on another node for faster failover):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ kubectl get pods
NAME                                 READY   STATUS              RESTARTS   AGE
apache-deployment-5b96bd6b6b-fv2jx   <font color="#000000">0</font>/<font color="#000000">1</font>     ContainerCreating   <font color="#000000">0</font>          9m15s
apache-deployment-5b96bd6b6b-ax2ji   <font color="#000000">0</font>/<font color="#000000">1</font>     ContainerCreating   <font color="#000000">0</font>          9m15s

&gt; ~ kubectl describe pod apache-deployment-5b96bd6b6b-fv2jx | tail -n <font color="#000000">5</font>
Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    9m34s                 default-scheduler  Successfully
    assigned test/apache-deployment-5b96bd6b6b-fv2jx to r2.lan.buetow.org
  Warning  FailedMount  80s (x12 over 9m34s)  kubelet            MountVolume.SetUp
    failed <b><u><font color="#000000">for</font></u></b> volume <font color="#808080">"example-apache-pv"</font> : hostPath <b><u><font color="#000000">type</font></u></b> check failed:
    /data/nfs/k3svolumes/example-apache-volume-claim is not a directory
</pre>
<br />
<span>That&#39;s expected: I needed to create the directory on the NFS share first, so I did that (e.g. on <span class='inlinecode'>r0</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~]<i><font color="silver"># mkdir /data/nfs/k3svolumes/example-apache-volume-claim/</font></i>

[root@r0 ~]<i><font color="silver"># cat &lt;&lt;END &gt; /data/nfs/k3svolumes/example-apache-volume-claim/index.html</font></i>
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
  &lt;title&gt;Hello, it works&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Hello, it works!&lt;/h<font color="#000000">1</font>&gt;
  &lt;p&gt;This site is served via a PVC!&lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;
END
</pre>
<br />
<span>The <span class='inlinecode'>index.html</span> file gives us some actual content to serve. After I deleted the pod, the deployment recreated it and the volume mounted correctly:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ kubectl delete pod apache-deployment-5b96bd6b6b-fv2jx

&gt; ~ curl -H <font color="#808080">"Host: www.f3s.foo.zone"</font> http://r<font color="#000000">0</font>.lan.buetow.org:<font color="#000000">80</font>
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
  &lt;title&gt;Hello, it works&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Hello, it works!&lt;/h<font color="#000000">1</font>&gt;
  &lt;p&gt;This site is served via a PVC!&lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;
</pre>
<br />
<h3 style='display: inline' id='scaling-traefik-for-faster-failover'>Scaling Traefik for faster failover</h3><br />
<br />
<span>Traefik (used for ingress on k3s) ships with a single replica by default, but for faster failover I bumped it to two replicas so that the pods run on different nodes. That way, if a node disappears, the service stays up while Kubernetes schedules a replacement. Here&#39;s the command I used:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ kubectl -n kube-system scale deployment traefik --replicas=<font color="#000000">2</font>
</pre>
<br />
<span>And the result:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik
kube-system   traefik-c98fdf6fb-97kqk   <font color="#000000">1</font>/<font color="#000000">1</font>   Running   <font color="#000000">19</font> (53d ago)   64d
kube-system   traefik-c98fdf6fb-9npg2   <font color="#000000">1</font>/<font color="#000000">1</font>   Running   <font color="#000000">11</font> (53d ago)   61d
</pre>
<br />
<h2 style='display: inline' id='make-it-accessible-from-the-public-internet'>Make it accessible from the public internet</h2><br />
<br />
<span>Next, I made this accessible from the public internet via hostnames such as <span class='inlinecode'>www.f3s.foo.zone</span>. As a reminder, here is the relevant section from part 1 of this series, "OpenBSD/relayd to the rescue for external connectivity":</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<br />
<span class='quote'>All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I&#39;ve got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let&#39;s Encrypt certificates.</span><br />
<br />
<span class='quote'>All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).</span><br />
<br />
<span class='quote'>So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). The DNS will resolve to the master OpenBSD VM (see my KISS highly-available with OpenBSD blog post), and from there, the relayd process (with a Let&#39;s Encrypt certificate—see my Let&#39;s Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ curl https://f3s.foo.zone
&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h<font color="#000000">1</font>&gt;&lt;/body&gt;&lt;/html&gt;

&gt; ~ curl https://www.f3s.foo.zone
&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h<font color="#000000">1</font>&gt;&lt;/body&gt;&lt;/html&gt;

&gt; ~ curl https://standby.f3s.foo.zone
&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h<font color="#000000">1</font>&gt;&lt;/body&gt;&lt;/html&gt;
</pre>
<br />
<span>This is how it works in <span class='inlinecode'>relayd.conf</span> on OpenBSD:</span><br />
<br />
<h3 style='display: inline' id='openbsd-relayd-configuration'>OpenBSD relayd configuration</h3><br />
<br />
<span>The OpenBSD edge relays keep the Kubernetes-facing addresses for the f3s ingress endpoints in a shared backend table, so TLS traffic for every <span class='inlinecode'>f3s</span> hostname lands on the same pool of k3s nodes. The table points to the WireGuard IP addresses of those nodes - remember, they run locally in my LAN, whereas the OpenBSD edge relays operate in the public internet:</span><br />
<br />
<pre>
table &lt;f3s&gt; {
  192.168.2.120
  192.168.2.121
  192.168.2.122
}
</pre>
<br />
<span>Inside the <span class='inlinecode'>http protocol "https"</span> block, each public hostname gets its Let&#39;s Encrypt certificate. The protocol configures TLS keypairs for all f3s services and other public endpoints. For the f3s hosts specifically, there are no explicit <span class='inlinecode'>forward to</span> rules in the protocol; they use the relay-level failover mechanism described later. Non-f3s hosts get explicit localhost routing to prevent them from trying the f3s backends:</span><br />
<br />
<pre>
http protocol "https" {
    # TLS certificates for all f3s services
    tls keypair f3s.foo.zone
    tls keypair www.f3s.foo.zone
    tls keypair standby.f3s.foo.zone
    tls keypair anki.f3s.foo.zone
    tls keypair www.anki.f3s.foo.zone
    tls keypair standby.anki.f3s.foo.zone
    tls keypair bag.f3s.foo.zone
    tls keypair www.bag.f3s.foo.zone
    tls keypair standby.bag.f3s.foo.zone
    tls keypair flux.f3s.foo.zone
    tls keypair www.flux.f3s.foo.zone
    tls keypair standby.flux.f3s.foo.zone
    tls keypair audiobookshelf.f3s.foo.zone
    tls keypair www.audiobookshelf.f3s.foo.zone
    tls keypair standby.audiobookshelf.f3s.foo.zone
    tls keypair gpodder.f3s.foo.zone
    tls keypair www.gpodder.f3s.foo.zone
    tls keypair standby.gpodder.f3s.foo.zone
    tls keypair radicale.f3s.foo.zone
    tls keypair www.radicale.f3s.foo.zone
    tls keypair standby.radicale.f3s.foo.zone
    tls keypair vault.f3s.foo.zone
    tls keypair www.vault.f3s.foo.zone
    tls keypair standby.vault.f3s.foo.zone
    tls keypair syncthing.f3s.foo.zone
    tls keypair www.syncthing.f3s.foo.zone
    tls keypair standby.syncthing.f3s.foo.zone
    tls keypair uprecords.f3s.foo.zone
    tls keypair www.uprecords.f3s.foo.zone
    tls keypair standby.uprecords.f3s.foo.zone

    # Explicitly route non-f3s hosts to localhost
    match request header "Host" value "foo.zone" forward to &lt;localhost&gt;
    match request header "Host" value "www.foo.zone" forward to &lt;localhost&gt;
    match request header "Host" value "dtail.dev" forward to &lt;localhost&gt;
    # ... other non-f3s hosts ...

    # NOTE: f3s hosts have NO match rules here!
    # They use relay-level failover (f3s -&gt; localhost backup)
    # See the relay configuration below for automatic failover details
}
</pre>
<br />
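<span>Maintaining that long keypair list by hand is error-prone. The three variants per hostname could be generated with a small loop; a sketch, with the service list taken from the keypairs above:</span><br />
<br />
```shell
# Emit the "tls keypair" lines for each service plus its www. and standby. variants
services="f3s anki.f3s bag.f3s flux.f3s audiobookshelf.f3s gpodder.f3s radicale.f3s vault.f3s syncthing.f3s uprecords.f3s"
for s in $services; do
    for prefix in "" "www." "standby."; do
        printf '    tls keypair %s%s.foo.zone\n' "$prefix" "$s"
    done
done
```
<span>The output can then be pasted into the <span class='inlinecode'>http protocol "https"</span> block.</span><br />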
<span>Both IPv4 and IPv6 listeners reuse the same protocol definition, making the relay transparent for dual-stack clients while still health-checking every k3s backend before forwarding traffic over WireGuard:</span><br />
<br />
<pre>
relay "https4" {
    listen on 46.23.94.99 port 443 tls
    protocol "https"
    # Primary: f3s cluster (with health checks) - Falls back to localhost when all hosts down
    forward to &lt;f3s&gt; port 80 check tcp
    forward to &lt;localhost&gt; port 8080
}

relay "https6" {
    listen on 2a03:6000:6f67:624::99 port 443 tls
    protocol "https"
    # Primary: f3s cluster (with health checks) - Falls back to localhost when all hosts down
    forward to &lt;f3s&gt; port 80 check tcp
    forward to &lt;localhost&gt; port 8080
}
</pre>
<br />
<span>In practice, that means relayd terminates TLS with the correct certificate, keeps the three WireGuard-connected backends in rotation, and ships each request to whichever bhyve VM answers first.</span><br />
<br />
<h3 style='display: inline' id='automatic-failover-when-f3s-cluster-is-down'>Automatic failover when f3s cluster is down</h3><br />
<br />
<span class='quote'>Update: This section was added at Tue 30 Dec 10:11:44 EET 2025</span><br />
<br />
<span>One important aspect of this setup is graceful degradation: when all three f3s nodes are unreachable (e.g., during maintenance or a power outage in my LAN), users should see a friendly status page instead of an error message.</span><br />
<br />
<span>OpenBSD&#39;s relayd supports automatic failover through its health check mechanism. According to the relayd.conf manual:</span><br />
<br />
<span class='quote'>This directive can be specified multiple times - subsequent entries will be used as the backup table if all hosts in the previous table are down.</span><br />
<br />
<span>The key is the order of <span class='inlinecode'>forward to</span> statements in the relay configuration. By placing the f3s table first with <span class='inlinecode'>check tcp</span> health checks, followed by localhost as a backup, relayd automatically routes traffic based on backend availability:</span><br />
<br />
<span>When f3s cluster is UP:</span><br />
<br />
<ul>
<li>Health checks on port 80 succeed for f3s nodes</li>
<li>All f3s traffic routes to the Kubernetes cluster</li>
<li>Localhost backup remains idle</li>
</ul><br />
<span>When f3s cluster is DOWN:</span><br />
<br />
<ul>
<li>All health checks fail (nodes unreachable)</li>
<li>The <span class='inlinecode'>&lt;f3s&gt;</span> table becomes unavailable</li>
<li>Traffic automatically falls back to <span class='inlinecode'>&lt;localhost&gt;</span> on port 8080</li>
<li>OpenBSD&#39;s httpd serves a static fallback page</li>
</ul><br />
<pre>
# NEW configuration - supports automatic failover
http protocol "https" {
    # Explicitly route non-f3s hosts to localhost
    match request header "Host" value "foo.zone" forward to &lt;localhost&gt;
    match request header "Host" value "dtail.dev" forward to &lt;localhost&gt;
    # ... other non-f3s hosts ...

    # f3s hosts have NO protocol rules - they use relay-level failover
    # (no match rules for f3s.foo.zone, anki.f3s.foo.zone, etc.)
}

relay "https4" {
    # f3s FIRST (with health checks), localhost as BACKUP
    forward to &lt;f3s&gt; port 80 check tcp
    forward to &lt;localhost&gt; port 8080
}
</pre>
<br />
<span>This way, f3s traffic uses the relay&#39;s default behavior: try the first table, fall back to the second when health checks fail.</span><br />
<br />
<h3 style='display: inline' id='openbsd-httpd-fallback-configuration'>OpenBSD httpd fallback configuration</h3><br />
<br />
<span>The localhost httpd service on port 8080 serves the fallback content from <span class='inlinecode'>/var/www/htdocs/f3s_fallback/</span>. This directory contains a simple HTML page explaining the situation.</span><br />
<br />
<span>The key configuration detail is using <span class='inlinecode'>request rewrite</span> to ensure the fallback page is served for ALL paths, not just the root. Without this, accessing paths like <span class='inlinecode'>/login?redirect=/files/</span> would return 404 instead of the fallback page:</span><br />
<br />
<pre>
# OpenBSD httpd.conf
# Fallback for f3s hosts - serve fallback page for ALL paths
server "f3s.foo.zone" {
  listen on * port 8080
  log style forwarded
  location * {
    # Rewrite all requests to /index.html to show fallback page regardless of path
    request rewrite "/index.html"
    root "/htdocs/f3s_fallback"
  }
}

server "anki.f3s.foo.zone" {
  listen on * port 8080
  log style forwarded
  location * {
    request rewrite "/index.html"
    root "/htdocs/f3s_fallback"
  }
}

# ... similar blocks for all f3s hostnames ...
</pre>
<br />
<span>The <span class='inlinecode'>request rewrite "/index.html"</span> directive ensures that whether someone accesses <span class='inlinecode'>/</span>, <span class='inlinecode'>/login</span>, <span class='inlinecode'>/api/status</span>, or any other path, they all receive the same fallback page. This prevents confusing 404 errors when users have bookmarked specific URLs or follow deep links while the cluster is down.</span><br />
<br />
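<span>Since the per-hostname server blocks differ only in the <span class='inlinecode'>server</span> name, they could be generated as well; a sketch with a shortened, assumed hostname list:</span><br />
<br />
```shell
# Emit one httpd.conf fallback server block per f3s hostname
for host in f3s.foo.zone anki.f3s.foo.zone bag.f3s.foo.zone; do
    cat <<EOF
server "$host" {
  listen on * port 8080
  log style forwarded
  location * {
    request rewrite "/index.html"
    root "/htdocs/f3s_fallback"
  }
}

EOF
done
```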
<span>The fallback page itself is straightforward:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">&lt;!DOCTYPE</font></u></b> <b><font color="#000000">html</font></b><b><u><font color="#000000">&gt;</font></u></b>
<b><u><font color="#000000">&lt;html&gt;</font></u></b>
<b><u><font color="#000000">&lt;head&gt;</font></u></b>
    <b><u><font color="#000000">&lt;title&gt;</font></u></b>Server turned off<b><u><font color="#000000">&lt;/title&gt;</font></u></b>
    <b><u><font color="#000000">&lt;style&gt;</font></u></b>
        body {
            font-family: <font color="#808080">sans-serif</font>;
            text-align: <font color="#808080">center</font>;
            padding-top: <font color="#808080">50px</font>;
        }
        .container {
            max-width: <font color="#808080">600px</font>;
            margin: <font color="#808080">0</font> <font color="#808080">auto</font>;
        }
    <b><u><font color="#000000">&lt;/style&gt;</font></u></b>
<b><u><font color="#000000">&lt;/head&gt;</font></u></b>
<b><u><font color="#000000">&lt;body&gt;</font></u></b>
    <b><u><font color="#000000">&lt;div</font></u></b> <b><font color="#000000">class</font></b>=<font color="#808080">"container"</font><b><u><font color="#000000">&gt;</font></u></b>
        <b><u><font color="#000000">&lt;h1&gt;</font></u></b>Server turned off<b><u><font color="#000000">&lt;/h1&gt;</font></u></b>
        <b><u><font color="#000000">&lt;p&gt;</font></u></b>The servers are all currently turned off.<b><u><font color="#000000">&lt;/p&gt;</font></u></b>
        <b><u><font color="#000000">&lt;p&gt;</font></u></b>Please try again later.<b><u><font color="#000000">&lt;/p&gt;</font></u></b>
        <b><u><font color="#000000">&lt;p&gt;</font></u></b>Or email <b><u><font color="#000000">&lt;a</font></u></b> <b><font color="#000000">href</font></b>=<font color="#808080">"mailto:paul@nospam.buetow.org"</font><b><u><font color="#000000">&gt;</font></u></b>paul@nospam.buetow.org<b><u><font color="#000000">&lt;/a&gt;</font></u></b>
           - so I can turn them back on for you!<b><u><font color="#000000">&lt;/p&gt;</font></u></b>
    <b><u><font color="#000000">&lt;/div&gt;</font></u></b>
<b><u><font color="#000000">&lt;/body&gt;</font></u></b>
<b><u><font color="#000000">&lt;/html&gt;</font></u></b>
</pre>
<br />
<span>This approach provides several benefits:</span><br />
<br />
<ul>
<li>Automatic detection: Health checks run continuously; no manual intervention needed</li>
<li>Instant fallback: When all f3s nodes go down, the next request automatically routes to localhost</li>
<li>Transparent recovery: When f3s comes back online, health checks pass and traffic resumes automatically</li>
<li>User experience: Visitors see a helpful message instead of connection errors</li>
<li>No DNS changes: The same hostnames work whether f3s is up or down</li>
</ul><br />
<span>This fallback mechanism has proven invaluable during maintenance windows and unexpected outages, ensuring that users always get a response even when the home lab is offline.</span><br />
<br />
<h2 style='display: inline' id='exposing-services-via-lan-ingress'>Exposing services via LAN ingress</h2><br />
<br />
<span>In addition to external access through the OpenBSD relays, services can also be exposed on the local network using LAN-specific ingresses. This is useful for accessing services from within the home network without going through the internet, reducing latency and providing an alternative path if the external relays are unavailable.</span><br />
<br />
<span>The LAN ingress architecture leverages the existing FreeBSD CARP (Common Address Redundancy Protocol) failover infrastructure that&#39;s already in place for NFS-over-TLS (see Part 5). Instead of deploying MetalLB or another LoadBalancer implementation, we reuse the CARP virtual IP (<span class='inlinecode'>192.168.1.138</span>) by adding HTTP/HTTPS forwarding alongside the existing stunnel service on port 2323.</span><br />
<br />
<h3 style='display: inline' id='architecture-overview'>Architecture overview</h3><br />
<br />
<span>The LAN access path differs from external access:</span><br />
<br />
<span>External access (*.f3s.foo.zone):</span><br />
<pre>
Internet → OpenBSD relayd (TLS termination, Let&#39;s Encrypt)
        → WireGuard tunnel
        → k3s Traefik :80 (HTTP)
        → Service
</pre>
<br />
<span>LAN access (*.f3s.lan.foo.zone):</span><br />
<pre>
LAN → FreeBSD CARP VIP (192.168.1.138)
    → FreeBSD relayd (TCP forwarding)
    → k3s Traefik :443 (TLS termination, cert-manager)
    → Service
</pre>
<br />
<span>The key architectural decisions:</span><br />
<br />
<ul>
<li>FreeBSD <span class='inlinecode'>relayd</span> performs pure TCP forwarding (Layer 4) for ports 80 and 443, not TLS termination</li>
<li>Traefik inside k3s handles TLS offloading using certificates from cert-manager</li>
<li>Self-signed CA for LAN domains (no external dependencies)</li>
<li>CARP provides automatic failover between f0 and f1</li>
<li>No code changes to applications—just add a LAN ingress resource</li>
</ul><br />
<h3 style='display: inline' id='installing-cert-manager'>Installing cert-manager</h3><br />
<br />
<span>First, install cert-manager to handle certificate lifecycle management for LAN services. The installation is automated with a Justfile:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s/cert-manager'>codeberg.org/snonux/conf/f3s/cert-manager</a><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ cd conf/f3s/cert-manager
$ just install
kubectl apply -f cert-manager.yaml
<i><font color="silver"># ... cert-manager CRDs and resources created ...</font></i>
kubectl apply -f self-signed-issuer.yaml
clusterissuer.cert-manager.io/selfsigned-issuer created
clusterissuer.cert-manager.io/selfsigned-ca-issuer created
kubectl apply -f ca-certificate.yaml
certificate.cert-manager.io/selfsigned-ca created
kubectl apply -f wildcard-certificate.yaml
certificate.cert-manager.io/f3s-lan-wildcard created
</pre>
<br />
<span>This creates:</span><br />
<br />
<ul>
<li>A self-signed ClusterIssuer</li>
<li>A CA certificate (<span class='inlinecode'>f3s-lan-ca</span>) valid for 10 years</li>
<li>A CA-signed ClusterIssuer</li>
<li>A wildcard certificate (<span class='inlinecode'>*.f3s.lan.foo.zone</span>) valid for 90 days with automatic renewal</li>
</ul><br />
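<span>For reference, the issuer chain from <span class='inlinecode'>self-signed-issuer.yaml</span> and <span class='inlinecode'>ca-certificate.yaml</span> boils down to roughly the following (a sketch reconstructed from the resource names above; the actual manifests live in the repository):</span><br />
<br />

```yaml
# Sketch of the issuer chain (field values other than the names above are assumptions)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: f3s-lan-ca
  secretName: selfsigned-ca-secret
  duration: 87600h          # 10 years
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-ca-issuer
spec:
  ca:
    secretName: selfsigned-ca-secret
```
<br />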
<span>Verify the certificates:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ kubectl get certificate -n cert-manager
NAME               READY   SECRET                 AGE
f3s-lan-wildcard   True    f3s-lan-tls            5m
selfsigned-ca      True    selfsigned-ca-secret   5m
</pre>
<br />
<span>The wildcard certificate (<span class='inlinecode'>f3s-lan-tls</span>) needs to be copied to any namespace that uses it:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ kubectl get secret f3s-lan-tls -n cert-manager -o yaml | \
    sed <font color="#808080">'s/namespace: cert-manager/namespace: services/'</font> | \
    kubectl apply -f -
</pre>
<br />
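<span>The <span class='inlinecode'>sed</span> step does nothing more than rewrite the <span class='inlinecode'>namespace</span> field of the exported manifest before re-applying it. As a standalone illustration:</span><br />
<br />

```shell
# Standalone illustration of the namespace rewrite done by the pipeline above
manifest='apiVersion: v1
kind: Secret
metadata:
  name: f3s-lan-tls
  namespace: cert-manager'

echo "$manifest" | sed 's/namespace: cert-manager/namespace: services/'
```
<br />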
<h3 style='display: inline' id='configuring-freebsd-relayd-for-lan-access'>Configuring FreeBSD relayd for LAN access</h3><br />
<br />
<span>On both FreeBSD hosts (f0, f1), install and configure <span class='inlinecode'>relayd</span> for TCP forwarding:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas pkg install -y relayd
</pre>
<br />
<span>Create <span class='inlinecode'>/usr/local/etc/relayd.conf</span>:</span><br />
<br />
<pre>
# k3s nodes backend table
table &lt;k3s_nodes&gt; { 192.168.1.120 192.168.1.121 192.168.1.122 }

# TCP forwarding to Traefik (no TLS termination)
relay "lan_http" {
    listen on 192.168.1.138 port 80
    forward to &lt;k3s_nodes&gt; port 80 check tcp
}

relay "lan_https" {
    listen on 192.168.1.138 port 443
    forward to &lt;k3s_nodes&gt; port 443 check tcp
}
</pre>
<br />
<span class='quote'>Note: The IP addresses <span class='inlinecode'>192.168.1.120-122</span> are the LAN IPs of the k3s nodes (r0, r1, r2), not their WireGuard IPs.</span><br />
<br />
<span>FreeBSD <span class='inlinecode'>relayd</span> requires PF (Packet Filter) to be enabled. Create a minimal <span class='inlinecode'>/etc/pf.conf</span>:</span><br />
<br />
<pre>
# Basic PF rules for relayd
set skip on lo0
pass in quick
pass out quick
</pre>
<br />
<span>Enable PF and relayd:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas sysrc pf_enable=YES pflog_enable=YES relayd_enable=YES
paul@f0:~ % doas service pf start
paul@f0:~ % doas service pflog start
paul@f0:~ % doas service relayd start
</pre>
<br />
<span>Verify <span class='inlinecode'>relayd</span> is listening on the CARP VIP:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas sockstat -<font color="#000000">4</font> -l | grep <font color="#000000">192.168</font>.<font color="#000000">1.138</font>
_relayd  relayd   <font color="#000000">2903</font>  <font color="#000000">11</font>  tcp4   <font color="#000000">192.168</font>.<font color="#000000">1.138</font>:<font color="#000000">80</font>      *:*
_relayd  relayd   <font color="#000000">2903</font>  <font color="#000000">12</font>  tcp4   <font color="#000000">192.168</font>.<font color="#000000">1.138</font>:<font color="#000000">443</font>     *:*
</pre>
<br />
<span>Repeat the same configuration on f1. Both hosts will run <span class='inlinecode'>relayd</span> listening on the CARP VIP, but only the CARP MASTER will respond to traffic. When failover occurs, the new MASTER takes over seamlessly.</span><br />
<br />
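<span>The CARP VIP itself was configured in an earlier part of this series. For context, the relevant <span class='inlinecode'>rc.conf</span> lines look roughly like this (a sketch; the interface name, vhid, and password are assumptions):</span><br />
<br />

```
# /etc/rc.conf on f0 (MASTER); f1 uses a higher advskew so it becomes BACKUP
kld_list="carp"
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass CARPSECRET alias 192.168.1.138/32"
```
<br />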
<h3 style='display: inline' id='adding-lan-ingress-to-services'>Adding LAN ingress to services</h3><br />
<br />
<span>To expose a service on the LAN, add a second Ingress resource to its Helm chart. Here&#39;s an example:</span><br />
<br />
<pre>
---
# LAN Ingress for f3s.lan.foo.zone
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-lan
  namespace: services
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - f3s.lan.foo.zone
      secretName: f3s-lan-tls
  rules:
    - host: f3s.lan.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service
                port:
                  number: 4533
</pre>
<br />
<span>Key points:</span><br />
<br />
<ul>
<li>Use <span class='inlinecode'>web,websecure</span> entrypoints (both HTTP and HTTPS)</li>
<li>Reference the <span class='inlinecode'>f3s-lan-tls</span> secret in the <span class='inlinecode'>tls</span> section</li>
<li>Use the <span class='inlinecode'>*.f3s.lan.foo.zone</span> subdomain pattern covered by the wildcard certificate</li>
<li>Same backend service as the external ingress</li>
</ul><br />
<span>Apply the ingress and test:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ kubectl apply -f ingress-lan.yaml
ingress.networking.k8s.io/ingress-lan created

$ curl -kI https://f3s.lan.foo.zone
HTTP/<font color="#000000">2</font> <font color="#000000">302</font> 
location: /app/
</pre>
<br />
<h3 style='display: inline' id='client-side-dns-and-ca-setup'>Client-side DNS and CA setup</h3><br />
<br />
<span>To access LAN services, clients need DNS entries and must trust the self-signed CA.</span><br />
<br />
<span>Add DNS entries to <span class='inlinecode'>/etc/hosts</span> on your laptop:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ sudo tee -a /etc/hosts &lt;&lt; <font color="#808080">'EOF'</font>
<i><font color="silver"># f3s LAN services</font></i>
<font color="#000000">192.168</font>.<font color="#000000">1.138</font>  f3s.lan.foo.zone
EOF
</pre>
<br />
<span>The CARP VIP <span class='inlinecode'>192.168.1.138</span> provides high availability—traffic automatically fails over to the backup host if the master goes down.</span><br />
<br />
<span>Export the self-signed CA certificate:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ kubectl get secret selfsigned-ca-secret -n cert-manager -o jsonpath=<font color="#808080">'{.data.ca</font>\.<font color="#808080">crt}'</font> | \
    base64 -d &gt; f3s-lan-ca.crt
</pre>
<br />
<span>Install the CA certificate on Linux (Fedora/Rocky):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ sudo cp f3s-lan-ca.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
</pre>
<br />
<span>After trusting the CA, browsers will accept the LAN certificates without warnings.</span><br />
<br />
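<span>To double-check the chain without a browser, <span class='inlinecode'>curl</span> can be pointed at the exported CA file directly (assuming <span class='inlinecode'>f3s-lan-ca.crt</span> from the export step is in the current directory); a successful request without <span class='inlinecode'>-k</span> means the CA is trusted for this host:</span><br />
<br />

```
$ curl --cacert f3s-lan-ca.crt -I https://f3s.lan.foo.zone
```
<br />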
<h3 style='display: inline' id='scaling-to-other-services'>Scaling to other services</h3><br />
<br />
<span>The same pattern can be applied to any service. To add LAN access:</span><br />
<br />
<span>1. Copy the <span class='inlinecode'>f3s-lan-tls</span> secret to the service&#39;s namespace (if not already there)</span><br />
<span>2. Add a LAN Ingress resource using the pattern above</span><br />
<span>3. Configure DNS: <span class='inlinecode'>192.168.1.138 service.f3s.lan.foo.zone</span></span><br />
<br />
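<span>Put together, onboarding a hypothetical new service (here called <span class='inlinecode'>navidrome</span> purely for illustration) looks like this:</span><br />
<br />

```shell
# Hypothetical service "navidrome" for illustration only
$ kubectl get secret f3s-lan-tls -n cert-manager -o yaml | \
    sed 's/namespace: cert-manager/namespace: services/' | \
    kubectl apply -f -
$ kubectl apply -f navidrome-ingress-lan.yaml
$ echo '192.168.1.138 navidrome.f3s.lan.foo.zone' | sudo tee -a /etc/hosts
```
<br />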
<span>No changes needed to:</span><br />
<br />
<ul>
<li>relayd configuration (forwards all traffic)</li>
<li>cert-manager (wildcard cert covers all <span class='inlinecode'>*.f3s.lan.foo.zone</span>)</li>
<li>CARP configuration (VIP shared by all services)</li>
</ul><br />
<h3 style='display: inline' id='tls-offloaders-summary'>TLS offloaders summary</h3><br />
<br />
<span>The f3s infrastructure now has three distinct TLS offloaders:</span><br />
<br />
<ul>
<li><strong>OpenBSD relayd</strong>: External internet traffic (<span class='inlinecode'>*.f3s.foo.zone</span>) using Let&#39;s Encrypt</li>
<li><strong>Traefik (k3s)</strong>: LAN HTTPS traffic (<span class='inlinecode'>*.f3s.lan.foo.zone</span>) using cert-manager</li>
<li><strong>stunnel</strong>: NFS-over-TLS (port 2323) using custom PKI</li>
</ul><br />
<span>Each serves a different purpose with appropriate certificate management for its use case.</span><br />
<br />
<h2 style='display: inline' id='deploying-the-private-docker-image-registry'>Deploying the private Docker image registry</h2><br />
<br />
<span>Not all of the Docker images I want to deploy are available on public registries, and I also build some of them myself, so I need a private registry.</span><br />
<br />
<span>All manifests for the f3s stack live in my configuration repository:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/conf/src/branch/master/f3s'>codeberg.org/snonux/conf/f3s</a><br />
<br />
<span>Within that repo, the <span class='inlinecode'>f3s/registry/</span> directory contains the Helm chart, a <span class='inlinecode'>Justfile</span>, and a detailed <span class='inlinecode'>README</span>. Here&#39;s the condensed walkthrough I used to roll out the registry with Helm.</span><br />
<br />
<h3 style='display: inline' id='prepare-the-nfs-backed-storage'>Prepare the NFS-backed storage</h3><br />
<br />
<span>Create the directory that will hold the registry blobs on the NFS share (I ran this on <span class='inlinecode'>r0</span>, but any node that exports <span class='inlinecode'>/data/nfs/k3svolumes</span> works):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~]<i><font color="silver"># mkdir -p /data/nfs/k3svolumes/registry</font></i>
</pre>
<br />
<h3 style='display: inline' id='install-or-upgrade-the-chart'>Install (or upgrade) the chart</h3><br />
<br />
<span>Clone the repo (or pull the latest changes) on a workstation that has <span class='inlinecode'>helm</span> configured for the cluster, then deploy the chart. The Justfile wraps the commands, but the raw Helm invocation looks like this:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ git clone https://codeberg.org/snonux/conf.git
$ cd conf/f3s/registry
$ helm upgrade --install registry ./helm-chart --namespace infra --create-namespace
</pre>
<br />
<span>Helm creates the <span class='inlinecode'>infra</span> namespace if it does not exist, provisions a <span class='inlinecode'>PersistentVolume</span>/<span class='inlinecode'>PersistentVolumeClaim</span> pair that points at <span class='inlinecode'>/data/nfs/k3svolumes/registry</span>, and spins up a single registry pod exposed via the <span class='inlinecode'>docker-registry-service</span> NodePort (<span class='inlinecode'>30001</span>). Verify everything is up before continuing:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ kubectl get pods --namespace infra
NAME                               READY   STATUS    RESTARTS      AGE
docker-registry-6bc9bb46bb-6grkr   <font color="#000000">1</font>/<font color="#000000">1</font>     Running   <font color="#000000">6</font> (53d ago)   54d

$ kubectl get svc docker-registry-service -n infra
NAME                      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
docker-registry-service   NodePort   <font color="#000000">10.43</font>.<font color="#000000">141.56</font>   &lt;none&gt;        <font color="#000000">5000</font>:<font color="#000000">30001</font>/TCP   54d
</pre>
<br />
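<span>For reference, the NFS-backed storage the chart provisions amounts to a <span class='inlinecode'>hostPath</span> PersistentVolume along these lines (a sketch; the name, size, and storage class are assumptions, the real values live in the chart templates):</span><br />
<br />

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv          # name is an assumption
spec:
  capacity:
    storage: 20Gi            # size is an assumption
  accessModes:
    - ReadWriteOnce
  storageClassName: manual   # assumption
  hostPath:
    path: /data/nfs/k3svolumes/registry
    type: Directory
```
<br />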
<h3 style='display: inline' id='allow-nodes-and-workstations-to-trust-the-registry'>Allow nodes and workstations to trust the registry</h3><br />
<br />
<span>The registry listens on plain HTTP, so both Docker daemons on workstations and the k3s nodes need to treat it as an insecure registry. That&#39;s fine for my personal needs, as:</span><br />
<br />
<ul>
<li>I don&#39;t store any secrets in the images</li>
<li>I only access the registry this way from within my LAN</li>
<li>I may change this later on</li>
</ul><br />
<span>On my Fedora workstation where I build images:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ cat &lt;&lt;<font color="#808080">"EOF"</font> | sudo tee /etc/docker/daemon.json &gt;/dev/null
{
  <font color="#808080">"insecure-registries"</font>: [
    <font color="#808080">"r0.lan.buetow.org:30001"</font>,
    <font color="#808080">"r1.lan.buetow.org:30001"</font>,
    <font color="#808080">"r2.lan.buetow.org:30001"</font>
  ]
}
EOF
$ sudo systemctl restart docker
</pre>
<br />
<span>On each k3s node, make <span class='inlinecode'>registry.lan.buetow.org</span> resolve locally and point k3s at the NodePort:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ for node in r0 r1 r2; do
&gt;   ssh root@$node "echo '127.0.0.1 registry.lan.buetow.org' &gt;&gt; /etc/hosts"
&gt; done

$ for node in r0 r1 r2; do
&gt;   ssh root@$node "cat &lt;&lt;'EOF' &gt; /etc/rancher/k3s/registries.yaml
mirrors:
  \"registry.lan.buetow.org:30001\":
    endpoint:
      - \"http://localhost:30001\"
EOF
systemctl restart k3s"
&gt; done
</pre>
<br />
<span>Thanks to the relayd configuration earlier in the post, the external hostnames (<span class='inlinecode'>f3s.foo.zone</span>, etc.) can already reach NodePort <span class='inlinecode'>30001</span>, so publishing the registry to the outside world later is just a matter of wiring up DNS the same way as for the ingress hosts. For security reasons, however, that is not enabled for now.</span><br />
<br />
<h3 style='display: inline' id='pushing-and-pulling-images'>Pushing and pulling images</h3><br />
<br />
<span>Tag any locally built image with one of the node hostnames on port <span class='inlinecode'>30001</span>, then push it. I usually target whichever node is closest to me, but any of the three will do:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ docker tag my-app:latest r0.lan.buetow.org:<font color="#000000">30001</font>/my-app:latest
$ docker push r0.lan.buetow.org:<font color="#000000">30001</font>/my-app:latest
</pre>
<br />
<span>Inside the cluster (or from other nodes), reference the image via the service name that Helm created:</span><br />
<br />
<pre>
image: docker-registry-service:5000/my-app:latest
</pre>
<br />
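<span>Note that this short form only resolves from pods in the same namespace as the registry service. From other namespaces, the image has to be referenced by the service's fully qualified in-cluster DNS name (assuming the <span class='inlinecode'>infra</span> namespace from above):</span><br />
<br />

```yaml
image: docker-registry-service.infra.svc.cluster.local:5000/my-app:latest
```
<br />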
<span>You can test the pull path straight away:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ kubectl run registry-test \
&gt;   --image=docker-registry-service:<font color="#000000">5000</font>/my-app:latest \
&gt;   --restart=Never -n <b><u><font color="#000000">test</font></u></b> --command -- sleep <font color="#000000">300</font>
</pre>
<br />
<span>If the pod pulls successfully, the private registry is ready for use by the rest of the workloads. Note that the commands above don&#39;t work verbatim; they are mentioned here for illustration purposes only.</span><br />
<br />
<h2 style='display: inline' id='example-anki-sync-server-from-the-private-registry'>Example: Anki Sync Server from the private registry</h2><br />
<br />
<span>One of the first workloads I migrated onto the k3s cluster after standing up the registry was my Anki sync server. The configuration repo ships everything in <span class='inlinecode'>f3s/anki-sync-server/</span>: a Docker build context plus a Helm chart that references the freshly built image.</span><br />
<br />
<h3 style='display: inline' id='build-and-push-the-image'>Build and push the image</h3><br />
<br />
<span>The Dockerfile lives under <span class='inlinecode'>docker-image/</span> and takes the Anki release to compile as an <span class='inlinecode'>ANKI_VERSION</span> build argument. The accompanying <span class='inlinecode'>Justfile</span> wraps the steps, but the raw commands look like this:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ cd conf/f3s/anki-sync-server/docker-image
$ docker build -t anki-sync-server:<font color="#000000">25.07</font>.5b --build-arg ANKI_VERSION=<font color="#000000">25.07</font>.<font color="#000000">5</font> .
$ docker tag anki-sync-server:<font color="#000000">25.07</font>.5b \
    r0.lan.buetow.org:<font color="#000000">30001</font>/anki-sync-server:<font color="#000000">25.07</font>.5b
$ docker push r0.lan.buetow.org:<font color="#000000">30001</font>/anki-sync-server:<font color="#000000">25.07</font>.5b
</pre>
<br />
<span>Because every k3s node treats <span class='inlinecode'>registry.lan.buetow.org:30001</span> as an insecure mirror (see above), the push succeeds regardless of which node answers. If you prefer the shortcut, <span class='inlinecode'>just f3s</span> in that directory performs the same build/tag/push sequence.</span><br />
<br />
<h3 style='display: inline' id='create-the-anki-secret-and-storage-on-the-cluster'>Create the Anki secret and storage on the cluster</h3><br />
<br />
<span>The Helm chart expects the <span class='inlinecode'>services</span> namespace, a pre-created NFS directory, and a Kubernetes secret that holds the credentials the upstream container understands:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ ssh root@r0 <font color="#808080">"mkdir -p /data/nfs/k3svolumes/anki-sync-server/anki_data"</font>
$ kubectl create namespace services
$ kubectl create secret generic anki-sync-server-secret \
    --from-literal=SYNC_USER1=<font color="#808080">'paul:SECRETPASSWORD'</font> \
    -n services
</pre>
<br />
<span>If the <span class='inlinecode'>services</span> namespace already exists, you can skip that line or let Kubernetes tell you the namespace is unchanged.</span><br />
<br />
<h3 style='display: inline' id='deploy-the-chart'>Deploy the chart</h3><br />
<br />
<span>With the prerequisites in place, install (or upgrade) the chart. It pins the container image to the tag we just pushed and mounts the NFS export via a <span class='inlinecode'>PersistentVolume/PersistentVolumeClaim</span> pair:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ cd ../helm-chart
$ helm upgrade --install anki-sync-server . -n services
</pre>
<br />
<span>Helm provisions everything referenced in the templates:</span><br />
<br />
<pre>
containers:
- name: anki-sync-server
  image: registry.lan.buetow.org:30001/anki-sync-server:25.07.5b
  volumeMounts:
  - name: anki-data
    mountPath: /anki_data
</pre>
<br />
<span>Once the release comes up, verify that the pod pulled the freshly pushed image and that the ingress we configured earlier resolves through relayd just like the Apache example.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ kubectl get pods -n services
$ kubectl get ingress anki-sync-server-ingress -n services
$ curl https://anki.f3s.foo.zone/health
</pre>
<br />
<span>All of this runs solely on first-party images that now live in the private registry, proving the full flow from local build to WireGuard-exposed service.</span><br />
<br />
<h2 style='display: inline' id='nfsv4-uid-mapping-for-postgres-backed-and-other-apps'>NFSv4 UID mapping for Postgres-backed (and other) apps</h2><br />
<br />
<span>NFSv4 only sees numeric user and group IDs, so the <span class='inlinecode'>postgres</span> account created inside the container must exist with the same UID/GID on the Kubernetes worker and on the FreeBSD NFS servers. Otherwise the pod starts with UID 999, the export sees it as an unknown anonymous user, and Postgres fails to initialise its data directory.</span><br />
<br />
<span>To verify things line up end-to-end I run <span class='inlinecode'>id</span> in the container and on the hosts:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; ~ kubectl <b><u><font color="#000000">exec</font></u></b> -n services deploy/miniflux-postgres -- id postgres
uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)

[root@r0 ~]<i><font color="silver"># id postgres</font></i>
uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)

paul@f0:~ % doas id postgres
uid=<font color="#000000">999</font>(postgres) gid=<font color="#000000">999</font>(postgres) groups=<font color="#000000">999</font>(postgres)
</pre>
<br />
<span>The Rocky Linux workers get their matching user with plain <span class='inlinecode'>useradd</span>/<span class='inlinecode'>groupadd</span> (repeat on <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~]<i><font color="silver"># groupadd --gid 999 postgres</font></i>
[root@r0 ~]<i><font color="silver"># useradd --uid 999 --gid 999 \</font></i>
                --home-dir /var/lib/pgsql \
                --shell /sbin/nologin postgres
</pre>
<br />
<span>FreeBSD uses <span class='inlinecode'>pw</span>, so on each NFS server (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>) I created the same account and disabled shell access:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas pw groupadd postgres -g <font color="#000000">999</font>
paul@f0:~ % doas pw useradd postgres -u <font color="#000000">999</font> -g postgres \
                -d /var/db/postgres -s /usr/sbin/nologin
</pre>
<br />
<span>Once the UID/GID exist everywhere, the Miniflux chart in <span class='inlinecode'>f3s/miniflux</span> deploys cleanly. The chart provisions both the application and its bundled Postgres database, mounts the exported directory, and builds the DSN at runtime. The important bits live in <span class='inlinecode'>helm-chart/templates/persistent-volumes.yaml</span> and <span class='inlinecode'>deployment.yaml</span>:</span><br />
<br />
<pre>
# Persistent volume lives on the NFS export
hostPath:
  path: /data/nfs/k3svolumes/miniflux/data
  type: Directory
...
containers:
- name: miniflux-postgres
  image: postgres:17
  volumeMounts:
  - name: miniflux-postgres-data
    mountPath: /var/lib/postgresql/data
</pre>
<br />
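<span>"Builds the DSN at runtime" means the password from the secret is injected as an environment variable and interpolated into the connection string. A sketch of how that can be wired up (variable names are assumptions; Miniflux itself reads <span class='inlinecode'>DATABASE_URL</span>):</span><br />
<br />

```yaml
# Sketch: the referenced variable must be defined before it is interpolated
env:
- name: FLUXDB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: miniflux-db-password
      key: fluxdb_password
- name: DATABASE_URL
  value: postgres://miniflux:$(FLUXDB_PASSWORD)@miniflux-postgres:5432/miniflux?sslmode=disable
```
<br />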
<span>Follow the <span class='inlinecode'>README</span> beside the chart to create the secrets and the target directory:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>$ cd conf/f3s/miniflux/helm-chart
$ ssh root@r0 "mkdir -p /data/nfs/k3svolumes/miniflux/data"
$ kubectl create secret generic miniflux-db-password \
    --from-literal=fluxdb_password=<font color="#808080">'YOUR_PASSWORD'</font> -n services
$ kubectl create secret generic miniflux-admin-password \
    --from-literal=admin_password=<font color="#808080">'YOUR_ADMIN_PASSWORD'</font> -n services
$ helm upgrade --install miniflux . -n services --create-namespace
</pre>
<br />
<span>And to verify it&#39;s all up:</span><br />
<br />
<pre>
$ kubectl get all --namespace=services | grep mini
pod/miniflux-postgres-556444cb8d-xvv2p   1/1     Running   0             54d
pod/miniflux-server-85d7c64664-stmt9     1/1     Running   0             54d
service/miniflux                   ClusterIP   10.43.47.80     &lt;none&gt;        8080/TCP             54d
service/miniflux-postgres          ClusterIP   10.43.139.50    &lt;none&gt;        5432/TCP             54d
deployment.apps/miniflux-postgres   1/1     1            1           54d
deployment.apps/miniflux-server     1/1     1            1           54d
replicaset.apps/miniflux-postgres-556444cb8d   1         1         1       54d
replicaset.apps/miniflux-server-85d7c64664     1         1         1       54d
</pre>
<br />
<h3 style='display: inline' id='helm-charts-currently-in-service'>Helm charts currently in service</h3><br />
<br />
<span>These are the charts that already live under <span class='inlinecode'>f3s/</span> and run on the cluster today (and I&#39;ll keep adding more as new services graduate into production):</span><br />
<br />
<ul>
<li><span class='inlinecode'>anki-sync-server</span> — custom-built image served from the private registry, stores decks on <span class='inlinecode'>/data/nfs/k3svolumes/anki-sync-server/anki_data</span>, and authenticates through the <span class='inlinecode'>anki-sync-server-secret</span>.</li>
<li><span class='inlinecode'>koreader-sync-server</span> — sync server for KOReader.</li>
<li><span class='inlinecode'>audiobookshelf</span> — media streaming stack with three hostPath mounts (<span class='inlinecode'>config</span>, <span class='inlinecode'>audiobooks</span>, <span class='inlinecode'>podcasts</span>) so the library survives node rebuilds.</li>
<li><span class='inlinecode'>example-apache</span> — minimal HTTP service I use for smoke-testing ingress and relayd rules.</li>
<li><span class='inlinecode'>example-apache-volume-claim</span> — Apache plus PVC variant that exercises NFS-backed storage for walkthroughs like the one earlier in this post.</li>
<li><span class='inlinecode'>miniflux</span> — the Postgres-backed feed reader described above, wired for NFSv4 UID mapping and per-release secrets.</li>
<li><span class='inlinecode'>opodsync</span> — oPodSync deployment with its data directory under <span class='inlinecode'>/data/nfs/k3svolumes/opodsync/data</span>.</li>
<li><span class='inlinecode'>radicale</span> — CalDAV/CardDAV (and gpodder) backend with separate <span class='inlinecode'>collections</span> and <span class='inlinecode'>auth</span> volumes.</li>
<li><span class='inlinecode'>registry</span> — the plain-HTTP Docker registry exposed on NodePort 30001 and mirrored internally as <span class='inlinecode'>registry.lan.buetow.org:30001</span>.</li>
<li><span class='inlinecode'>syncthing</span> — two-volume setup for config and shared data, fronted by the <span class='inlinecode'>syncthing.f3s.foo.zone</span> ingress.</li>
<li><span class='inlinecode'>wallabag</span> — read-it-later service with persistent <span class='inlinecode'>data</span> and <span class='inlinecode'>images</span> directories on the NFS export.</li>
</ul><br />
<span>I hope you enjoyed this walkthrough. Read the previous post of this series:</span><br />
<br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD (You are currently reading this)</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Bash Golf Part 4</title>
        <link href="https://foo.zone/gemfeed/2025-09-14-bash-golf-part-4.html" />
        <id>https://foo.zone/gemfeed/2025-09-14-bash-golf-part-4.html</id>
        <updated>2025-09-13T12:04:03+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the fourth post in my Bash Golf series, a collection of random Bash tips, tricks, and weirdnesses I have encountered over time.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='bash-golf-part-4'>Bash Golf Part 4</h1><br />
<br />
<span class='quote'>Published at 2025-09-13T12:04:03+03:00</span><br />
<br />
<span>This is the fourth post in my Bash Golf series, a collection of random Bash tips, tricks, and weirdnesses I have encountered over time.</span><br />
<br />
<a class='textlink' href='./2021-11-29-bash-golf-part-1.html'>2021-11-29 Bash Golf Part 1</a><br />
<a class='textlink' href='./2022-01-01-bash-golf-part-2.html'>2022-01-01 Bash Golf Part 2</a><br />
<a class='textlink' href='./2023-12-10-bash-golf-part-3.html'>2023-12-10 Bash Golf Part 3</a><br />
<a class='textlink' href='./2025-09-14-bash-golf-part-4.html'>2025-09-14 Bash Golf Part 4 (You are currently reading this)</a><br />
<br />
<pre>
    &#39;\       &#39;\        &#39;\        &#39;\                   .  .        |&gt;18&gt;&gt;
      \        \         \         \              .         &#39; .   |
     O&gt;&gt;      O&gt;&gt;       O&gt;&gt;       O&gt;&gt;         .                 &#39;o |
      \       .\. ..    .\. ..    .\. ..   .                      |
      /\    .  /\     .  /\     .  /\    . .                      |
     / /   .  / /  .&#39;.  / /  .&#39;.  / /  .&#39;    .                    |
jgs^^^^^^^`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                        Art by Joan Stark, mod. by Paul Buetow
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#bash-golf-part-4'>Bash Golf Part 4</a></li>
<li>⇢ <a href='#split-pipelines-with-tee--process-substitution'>Split pipelines with tee + process substitution</a></li>
<li>⇢ <a href='#heredocs-for-remote-sessions-and-their-gotchas'>Heredocs for remote sessions (and their gotchas)</a></li>
<li>⇢ <a href='#namespacing-and-dynamic-dispatch-with-'>Namespacing and dynamic dispatch with <span class='inlinecode'>::</span></a></li>
<li>⇢ <a href='#indirect-references-with-namerefs'>Indirect references with namerefs</a></li>
<li>⇢ <a href='#function-declaration-forms'>Function declaration forms</a></li>
<li>⇢ <a href='#chaining-function-calls-in-conditionals'>Chaining function calls in conditionals</a></li>
<li>⇢ <a href='#grep-sed-awk-quickies'>Grep, sed, awk quickies</a></li>
<li>⇢ <a href='#safe-xargs-with-nuls'>Safe xargs with NULs</a></li>
<li>⇢ <a href='#efficient-file-to-variable-and-arrays'>Efficient file-to-variable and arrays</a></li>
<li>⇢ <a href='#quick-password-generator'>Quick password generator</a></li>
<li>⇢ <a href='#yes-for-automation'><span class='inlinecode'>yes</span> for automation</a></li>
<li>⇢ <a href='#forcing-true-to-fail-and-vice-versa'>Forcing <span class='inlinecode'>true</span> to fail (and vice versa)</a></li>
<li>⇢ <a href='#restricted-bash'>Restricted Bash</a></li>
<li>⇢ <a href='#useless-use-of-cat-and-when-its-ok'>Useless use of cat (and when it’s ok)</a></li>
<li>⇢ <a href='#atomic-locking-with-mkdir'>Atomic locking with <span class='inlinecode'>mkdir</span></a></li>
<li>⇢ <a href='#smarter-globs-and-faster-find-exec'>Smarter globs and faster find-exec</a></li>
</ul><br />
<h2 style='display: inline' id='split-pipelines-with-tee--process-substitution'>Split pipelines with tee + process substitution</h2><br />
<br />
<span>Sometimes you want to fan out one stream to multiple consumers and still continue the original pipeline. <span class='inlinecode'>tee</span> plus process substitution does exactly that:</span><br />
<br />
<pre>
somecommand \
    | tee &gt;(command1) &gt;(command2) \
    | command3
</pre>
<br />
<span>All of <span class='inlinecode'>command1</span>, <span class='inlinecode'>command2</span>, and <span class='inlinecode'>command3</span> see the output of <span class='inlinecode'>somecommand</span>. Example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">printf</font></u></b> <font color="#808080">'a</font>\n<font color="#808080">b</font>\n<font color="#808080">'</font> \
    | tee &gt;(sed <font color="#808080">'s/.*/X:&amp;/; s/$/ :c1/'</font>) &gt;(tr a-z A-Z | sed <font color="#808080">'s/$/ :c2/'</font>) \
    | sed <font color="#808080">'s/$/ :c3/'</font>
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
a :c3
b :c3
A :c2 :c3
B :c2 :c3
X:a :c1 :c3
X:b :c1 :c3
</pre>
<br />
<span>This relies on Bash process substitution (<span class='inlinecode'>&gt;(...)</span>). Make sure your shell is Bash and not a POSIX <span class='inlinecode'>/bin/sh</span>.</span><br />
<br />
<span>Example (fails under <span class='inlinecode'>dash</span>/POSIX sh):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>/bin/sh -c <font color="#808080">'echo hi | tee &gt;(cat)'</font>
<i><font color="silver"># /bin/sh: 1: Syntax error: "(" unexpected</font></i>
</pre>
<br />
<span>Note that <span class='inlinecode'>set -o pipefail</span> does not catch failures in the side branches: a process substitution is not part of the pipeline, so its exit status is ignored.</span><br />
<br />
<span>Example:</span><br />
<br />
<pre>
set -o pipefail
printf 'ok\n' | tee &gt;(false) | cat &gt;/dev/null
echo $?   # 0 - the failing side branch is invisible to pipefail
</pre>
<br />
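<span>To actually observe a failing side branch, Bash ≥ 4.4 can <span class='inlinecode'>wait</span> for the most recent process substitution, whose PID lands in <span class='inlinecode'>$!</span>. A sketch (this only covers the last substitution):</span><br />
<br />
<pre>
printf 'ok\n' | tee &gt;(false) &gt;/dev/null
wait "$!" || echo "side branch failed" &gt;&amp;2
</pre>
<br />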
<span>Further reading:</span><br />
<br />
<a class='textlink' href='https://blogtitle.github.io/splitting-pipelines/'>Splitting pipelines with tee</a><br />
<br />
<h2 style='display: inline' id='heredocs-for-remote-sessions-and-their-gotchas'>Heredocs for remote sessions (and their gotchas)</h2><br />
<br />
<span>Heredocs are great to send multiple commands over SSH in a readable way:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>ssh <font color="#808080">"$SSH_USER@$SSH_HOST"</font> &lt;&lt;EOF
    <i><font color="silver"># Go to the work directory</font></i>
    cd <font color="#808080">"$WORK_DIR"</font>
  
    <i><font color="silver"># Make a git pull</font></i>
    git pull
  
    <i><font color="silver"># Export environment variables required for the service to run</font></i>
    <b><u><font color="#000000">export</font></u></b> AUTH_TOKEN=<font color="#808080">"$APP_AUTH_TOKEN"</font>
  
    <i><font color="silver"># Start the service</font></i>
    docker compose up -d --build
EOF
</pre>
<br />
<span>Tips:</span><br />
<br />
<span>Quoting the delimiter changes interpolation. Use <span class='inlinecode'>&lt;&lt;&#39;EOF&#39;</span> to avoid local expansion and send the content literally.</span><br />
<br />
<span>Example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>FOO=bar
cat &lt;&lt;<font color="#808080">'EOF'</font>
$FOO is not expanded here
EOF
</pre>
<br />
<span>Prefer explicit quoting for variables (as above) to avoid surprises. Example (spaces preserved only when quoted):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>WORK_DIR=<font color="#808080">"/tmp/my work"</font>
ssh host &lt;&lt;EOF
    cd $WORK_DIR      <i><font color="silver"># may break if unquoted</font></i>
    cd <font color="#808080">"$WORK_DIR"</font>   <i><font color="silver"># safe</font></i>
EOF
</pre>
<br />
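<span>To let a variable expand on the remote side of an unquoted heredoc, escape the dollar sign. A sketch (<span class='inlinecode'>host</span> is a placeholder):</span><br />
<br />
<pre>
ssh host &lt;&lt;EOF
    echo "expanded locally:  $USER"
    echo "expanded remotely: \$USER"
EOF
</pre>
<br />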
<span>Consider <span class='inlinecode'>set -euo pipefail</span> at the top of the remote block for stricter error handling. Example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>ssh host &lt;&lt;<font color="#808080">'EOF'</font>
    <b><u><font color="#000000">set</font></u></b> -euo pipefail
    <b><u><font color="#000000">false</font></u></b>   <i><font color="silver"># causes immediate failure</font></i>
    echo never
EOF
</pre>
<br />
<span>Indent-friendly variant: use a dash to strip leading tabs in the body:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>cat &lt;&lt;-EOF &gt; script.sh
	<i><font color="silver">#!/usr/bin/env bash</font></i>
	echo <font color="#808080">"tab-indented content is dedented"</font>
EOF
</pre>
<br />
<span>Further reading:</span><br />
<br />
<a class='textlink' href='https://rednafi.com/misc/heredoc_headache/'>Heredoc headaches and fixes</a><br />
<br />
<h2 style='display: inline' id='namespacing-and-dynamic-dispatch-with-'>Namespacing and dynamic dispatch with <span class='inlinecode'>::</span></h2><br />
<br />
<span>You can emulate simple namespacing by encoding hierarchy in function names. One neat pattern is pseudo-inheritance via a tiny <span class='inlinecode'>super</span> helper that maps <span class='inlinecode'>pkg::lang::action</span> to a <span class='inlinecode'>pkg::base::action</span> default.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver">#!/usr/bin/env bash</font></i>
<b><u><font color="#000000">set</font></u></b> -euo pipefail

super() {
    <b><u><font color="#000000">local</font></u></b> -r fn=${FUNCNAME[1]}
    <i><font color="silver"># Split name on :: and dispatch to base implementation</font></i>
    <b><u><font color="#000000">local</font></u></b> -a parts=( ${fn//::/ } )
    <font color="#808080">"${parts[0]}::base::${parts[2]}"</font> <font color="#808080">"$@"</font>
}

foo::base::greet() { echo <font color="#808080">"base: $@"</font>; }
foo::german::greet()  { super <font color="#808080">"Guten Tag, $@!"</font>; }
foo::english::greet() { super <font color="#808080">"Good day,  $@!"</font>; }

<b><u><font color="#000000">for</font></u></b> lang <b><u><font color="#000000">in</font></u></b> german english; <b><u><font color="#000000">do</font></u></b>
    foo::$lang::greet Paul
<b><u><font color="#000000">done</font></u></b>
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
base: Guten Tag, Paul!
base: Good day,  Paul!
</pre>
<br />
<h2 style='display: inline' id='indirect-references-with-namerefs'>Indirect references with namerefs</h2><br />
<br />
<span><span class='inlinecode'>declare -n</span> creates a name reference — a variable that points to another variable. It’s cleaner than <span class='inlinecode'>eval</span> for indirection:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>user_name=paul
<b><u><font color="#000000">declare</font></u></b> -n ref=user_name
echo <font color="#808080">"$ref"</font>       <i><font color="silver"># paul</font></i>
ref=julia
echo <font color="#808080">"$user_name"</font> <i><font color="silver"># julia</font></i>
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
paul
julia
</pre>
<br />
<span>Namerefs are local to functions when declared with <span class='inlinecode'>local -n</span>. Requires Bash ≥4.3.</span><br />
<br />
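<span>A minimal version-guard sketch for scripts that rely on namerefs (my own convention, not from the Bash manual):</span><br />
<br />
<pre>
# Namerefs need Bash &gt;= 4.3
if (( BASH_VERSINFO[0] &lt; 4 || ( BASH_VERSINFO[0] == 4 &amp;&amp; BASH_VERSINFO[1] &lt; 3 ) )); then
    echo "Bash &gt;= 4.3 required for namerefs" &gt;&amp;2
    exit 1
fi
</pre>
<br />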
<span>You can also construct the target name dynamically:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>make_var() {
    <b><u><font color="#000000">local</font></u></b> idx=$1; <b><u><font color="#000000">shift</font></u></b>
    <b><u><font color="#000000">local</font></u></b> name=<font color="#808080">"slot_$idx"</font>
    <b><u><font color="#000000">printf</font></u></b> -v <font color="#808080">"$name"</font> <font color="#808080">'%s'</font> <font color="#808080">"$*"</font>   <i><font color="silver"># create variable slot_$idx</font></i>
}

get_var() {
    <b><u><font color="#000000">local</font></u></b> idx=$1
    <b><u><font color="#000000">local</font></u></b> -n ref=<font color="#808080">"slot_$idx"</font>      <i><font color="silver"># bind ref to slot_$idx</font></i>
    <b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s</font>\n<font color="#808080">'</font> <font color="#808080">"$ref"</font>
}

make_var <font color="#000000">7</font> <font color="#808080">"seven"</font>
get_var <font color="#000000">7</font>
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
seven
</pre>
<br />
<h2 style='display: inline' id='function-declaration-forms'>Function declaration forms</h2><br />
<br />
<span>All of these work in Bash, but only the first one is POSIX:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>foo() { echo foo; }
function foo { echo foo; }
function foo() { echo foo; }
</pre>
<br />
<span>Recommendation: prefer <span class='inlinecode'>name() { ... }</span> for portability and consistency.</span><br />
<br />
<h2 style='display: inline' id='chaining-function-calls-in-conditionals'>Chaining function calls in conditionals</h2><br />
<br />
<span>Functions return a status like commands. You can short-circuit them in conditionals:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>deploy_check() { <b><u><font color="#000000">test</font></u></b> -f deploy.yaml; }
smoke_test()   { curl -fsS http://localhost/healthz &gt;/dev/null; }

<b><u><font color="#000000">if</font></u></b> deploy_check || smoke_test; <b><u><font color="#000000">then</font></u></b>
    echo <font color="#808080">"All good."</font>
<b><u><font color="#000000">else</font></u></b>
    echo <font color="#808080">"Something failed."</font> &gt;&amp;<font color="#000000">2</font>
<b><u><font color="#000000">fi</font></u></b>
</pre>
<br />
<span>You can also compress it golf-style:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>deploy_check || smoke_test &amp;&amp; echo ok || echo fail &gt;&amp;<font color="#000000">2</font>
</pre>
<br />
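<span>One caveat with the golf version: <span class='inlinecode'>&amp;&amp;</span> and <span class='inlinecode'>||</span> have equal precedence and associate left to right, so <span class='inlinecode'>a &amp;&amp; b || c</span> is not a true if/else; <span class='inlinecode'>c</span> also runs when <span class='inlinecode'>b</span> fails:</span><br />
<br />
<pre>
ok() { return 0; }
ok &amp;&amp; false || echo "fallback runs although ok succeeded"
</pre>
<br />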
<h2 style='display: inline' id='grep-sed-awk-quickies'>Grep, sed, awk quickies</h2><br />
<br />
<span>Match whole words with <span class='inlinecode'>grep -w word file</span>; add context lines with <span class='inlinecode'>grep -C3 foo file</span> (same as <span class='inlinecode'>-A3 -B3</span>). Example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>cat &gt; /tmp/ctx.txt &lt;&lt;EOF
one
foo
two
three
bar
EOF
grep -C<font color="#000000">1</font> foo /tmp/ctx.txt
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
one
foo
two
</pre>
<br />
<span>Skip a directory while recursing: <span class='inlinecode'>grep -R --exclude-dir=foo &#39;bar&#39; /path</span>. Example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>mkdir -p /tmp/golf/foo /tmp/golf/src
<b><u><font color="#000000">printf</font></u></b> <font color="#808080">'bar</font>\n<font color="#808080">'</font> &gt; /tmp/golf/src/a.txt
<b><u><font color="#000000">printf</font></u></b> <font color="#808080">'bar</font>\n<font color="#808080">'</font> &gt; /tmp/golf/foo/skip.txt
grep -R --exclude-dir=foo <font color="#808080">'bar'</font> /tmp/golf
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
/tmp/golf/src/a.txt:bar
</pre>
<br />
<span>Insert lines with sed: <span class='inlinecode'>sed -e &#39;1isomething&#39; -e &#39;3isomething&#39; file</span>. Example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">printf</font></u></b> <font color="#808080">'A</font>\n<font color="#808080">B</font>\n<font color="#808080">C</font>\n<font color="#808080">'</font> &gt; /tmp/s.txt
sed -e <font color="#808080">'1iHEAD'</font> -e <font color="#808080">'3iMID'</font> /tmp/s.txt
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
HEAD
A
B
MID
C
</pre>
<br />
<span>Drop last column with awk: <span class='inlinecode'>awk &#39;NF{NF-=1};1&#39; file</span>. Example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">printf</font></u></b> <font color="#808080">'a b c</font>\n<font color="#808080">x y z</font>\n<font color="#808080">'</font> &gt; /tmp/t.txt
cat /tmp/t.txt
echo
awk <font color="#808080">'NF{NF-=1};1'</font> /tmp/t.txt
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
a b c
x y z

a b
x y
</pre>
<br />
<h2 style='display: inline' id='safe-xargs-with-nuls'>Safe xargs with NULs</h2><br />
<br />
<span>Avoid breaking on spaces/newlines by pairing <span class='inlinecode'>find -print0</span> with <span class='inlinecode'>xargs -0</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>find . -type f -name <font color="#808080">'*.log'</font> -print<font color="#000000">0</font> | xargs -<font color="#000000">0</font> rm -f
</pre>
<br />
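<span>With GNU <span class='inlinecode'>xargs</span> you can additionally pass <span class='inlinecode'>-r</span> (<span class='inlinecode'>--no-run-if-empty</span>) so the command is not run at all when the input is empty (a GNU-specific flag, sketched here):</span><br />
<br />
<pre>
find . -type f -name '*.log' -print0 | xargs -0 -r rm -f
</pre>
<br />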
<span>Example with NUL-delimited input whose items contain spaces:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">printf</font></u></b> <font color="#808080">'a</font>\0<font color="#808080">b c</font>\0<font color="#808080">'</font> | xargs -<font color="#000000">0</font> -I{} <b><u><font color="#000000">printf</font></u></b> <font color="#808080">'&lt;%s&gt;</font>\n<font color="#808080">'</font> {}
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
&lt;a&gt;
&lt;b c&gt;
</pre>
<br />
<h2 style='display: inline' id='efficient-file-to-variable-and-arrays'>Efficient file-to-variable and arrays</h2><br />
<br />
<span>Read a whole file into a variable without spawning <span class='inlinecode'>cat</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>cfg=$(&lt;config.ini)
</pre>
<br />
<span>Read lines into an array safely with <span class='inlinecode'>mapfile</span> (aka <span class='inlinecode'>readarray</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>mapfile -t lines &lt; &lt;(grep -v <font color="#808080">'^#'</font> config.ini)
<b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s</font>\n<font color="#808080">'</font> <font color="#808080">"${lines[@]}"</font>
</pre>
<br />
<span>Assign formatted strings without a subshell using <span class='inlinecode'>printf -v</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">printf</font></u></b> -v msg <font color="#808080">'Hello %s, id=%04d'</font> <font color="#808080">"$USER"</font> <font color="#000000">42</font>
echo <font color="#808080">"$msg"</font>
</pre>
<br />
<span>Output:</span><br />
<br />
<pre>
Hello paul, id=0042
</pre>
<br />
<span>Read NUL-delimited data (pairs well with <span class='inlinecode'>-print0</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>mapfile -d <font color="#808080">''</font> -t files &lt; &lt;(find . -type f -print<font color="#000000">0</font>)
<b><u><font color="#000000">printf</font></u></b> <font color="#808080">'%s</font>\n<font color="#808080">'</font> <font color="#808080">"${files[@]}"</font>
</pre>
<br />
<h2 style='display: inline' id='quick-password-generator'>Quick password generator</h2><br />
<br />
<span>A quick one-liner reading from <span class='inlinecode'>/dev/urandom</span> (via <span class='inlinecode'>tr</span> and <span class='inlinecode'>head</span>, so not pure Bash):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>LC_ALL=C tr -dc <font color="#808080">'A-Za-z0-9_'</font> &lt;/dev/urandom | head -c <font color="#000000">16</font>; echo
</pre>
<br />
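<span>Gotcha: under <span class='inlinecode'>set -o pipefail</span> this one-liner reports failure even though it produced a password, because <span class='inlinecode'>head</span> closes the pipe early and <span class='inlinecode'>tr</span> typically dies with SIGPIPE (exit 141):</span><br />
<br />
<pre>
set -o pipefail
LC_ALL=C tr -dc 'A-Za-z0-9_' &lt;/dev/urandom | head -c 16 &gt;/dev/null
echo $?   # typically 141 (128 + SIGPIPE)
</pre>
<br />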
<span>Alternative using <span class='inlinecode'>openssl</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>openssl rand -base<font color="#000000">64</font> <font color="#000000">16</font> | tr -d <font color="#808080">'</font>\n<font color="#808080">'</font> | cut -c<font color="#000000">1</font>-<font color="#000000">22</font>
</pre>
<br />
<h2 style='display: inline' id='yes-for-automation'><span class='inlinecode'>yes</span> for automation</h2><br />
<br />
<span><span class='inlinecode'>yes</span> streams a string repeatedly; handy for feeding interactive commands or quick load generation:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>yes | rm -r large_directory        <i><font color="silver"># auto-confirm</font></i>
yes n | dangerous-command          <i><font color="silver"># auto-decline</font></i>
yes anything | head -n<font color="#000000">1</font>            <i><font color="silver"># prints one line: anything</font></i>
</pre>
<br />
<h2 style='display: inline' id='forcing-true-to-fail-and-vice-versa'>Forcing <span class='inlinecode'>true</span> to fail (and vice versa)</h2><br />
<br />
<span>You can shadow builtins with functions:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>true()  { <b><u><font color="#000000">return</font></u></b> <font color="#000000">1</font>; }
false() { <b><u><font color="#000000">return</font></u></b> <font color="#000000">0</font>; }

<b><u><font color="#000000">true</font></u></b>  || echo <font color="#808080">'true failed'</font>
<b><u><font color="#000000">false</font></u></b> &amp;&amp; echo <font color="#808080">'false succeeded'</font>

<i><font color="silver"># Bypass function with builtin/command</font></i>
<b><u><font color="#000000">builtin</font></u></b> <b><u><font color="#000000">true</font></u></b> <i><font color="silver"># returns 0</font></i>
<b><u><font color="#000000">command</font></u></b> <b><u><font color="#000000">true</font></u></b> <i><font color="silver"># returns 0</font></i>
</pre>
<br />
<span>To disable a builtin entirely: <span class='inlinecode'>enable -n true</span> (re-enable with <span class='inlinecode'>enable true</span>).</span><br />
<br />
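<span>A quick demonstration (the file that <span class='inlinecode'>true</span> then resolves to depends on your <span class='inlinecode'>PATH</span>):</span><br />
<br />
<pre>
type -t true     # builtin
enable -n true
type -t true     # file (now resolves via PATH, e.g. /bin/true)
enable true      # restore the builtin
</pre>
<br />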
<span>Further reading:</span><br />
<br />
<a class='textlink' href='https://blog.robertelder.org/force-true-command-to-return-false/'>Force true to return false</a><br />
<br />
<h2 style='display: inline' id='restricted-bash'>Restricted Bash</h2><br />
<br />
<span><span class='inlinecode'>bash -r</span> (or <span class='inlinecode'>rbash</span>) starts a restricted shell that limits potentially dangerous actions, for example:</span><br />
<br />
<ul>
<li>Changing directories (<span class='inlinecode'>cd</span>).</li>
<li>Modifying <span class='inlinecode'>PATH</span>, <span class='inlinecode'>SHELL</span>, <span class='inlinecode'>BASH_ENV</span>, or <span class='inlinecode'>ENV</span>.</li>
<li>Redirecting output.</li>
<li>Running commands with <span class='inlinecode'>/</span> in the name.</li>
<li>Using <span class='inlinecode'>exec</span>.</li>
</ul><br />
<span>It’s a coarse sandbox for highly constrained shells; read <span class='inlinecode'>man bash</span> (RESTRICTED SHELL) for details and caveats.</span><br />
<br />
<span>Example session:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>rbash -c <font color="#808080">'cd /'</font>            <i><font color="silver"># cd: restricted</font></i>
rbash -c <font color="#808080">'PATH=/tmp'</font>       <i><font color="silver"># PATH: restricted</font></i>
rbash -c <font color="#808080">'echo hi &gt; out'</font>   <i><font color="silver"># redirection: restricted</font></i>
rbash -c <font color="#808080">'/bin/echo hi'</font>    <i><font color="silver"># commands with /: restricted</font></i>
rbash -c <font color="#808080">'exec ls'</font>         <i><font color="silver"># exec: restricted</font></i>
</pre>
<br />
<h2 style='display: inline' id='useless-use-of-cat-and-when-its-ok'>Useless use of cat (and when it’s ok)</h2><br />
<br />
<span>Avoid the extra process if a command already reads files or <span class='inlinecode'>STDIN</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Prefer</font></i>
grep -i foo file
&lt;file grep -i foo        <i><font color="silver"># or feed via redirection</font></i>

<i><font color="silver"># Over</font></i>
cat file | grep -i foo
</pre>
<br />
<span>But for interactive composition, or when you genuinely need to concatenate multiple sources into a single stream, <span class='inlinecode'>cat</span> is fine. At the prompt you often think "first I need the content, then I do X", and rewriting such a one-off command just to remove a "useless use of cat" is a waste of time:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>cat file1 file2 | grep -i foo
</pre>
<br />
<span>In short: good for interactivity, useless in scripts; use your judgment.</span><br />
<br />
<h2 style='display: inline' id='atomic-locking-with-mkdir'>Atomic locking with <span class='inlinecode'>mkdir</span></h2><br />
<br />
<span>Portable advisory locks can be emulated with <span class='inlinecode'>mkdir</span> because it’s atomic:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>lockdir=/tmp/myjob.lock
<b><u><font color="#000000">if</font></u></b> mkdir <font color="#808080">"$lockdir"</font> <font color="#000000">2</font>&gt;/dev/null; <b><u><font color="#000000">then</font></u></b>
    <b><u><font color="#000000">trap</font></u></b> <font color="#808080">'rmdir "$lockdir"'</font> EXIT INT TERM
    <i><font color="silver"># critical section</font></i>
    do_work
<b><u><font color="#000000">else</font></u></b>
    echo <font color="#808080">"Another instance is running"</font> &gt;&amp;<font color="#000000">2</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
<b><u><font color="#000000">fi</font></u></b>
</pre>
<br />
<span><span class='inlinecode'>mkdir</span> is atomic on POSIX filesystems (be careful with some network filesystems). Remove the lock in the <span class='inlinecode'>trap</span> so crashes don’t leave stale locks.</span><br />
<br />
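<span>On Linux, <span class='inlinecode'>flock(1)</span> from util-linux is a common alternative: the kernel releases the lock automatically when the file descriptor closes, even after a crash. A sketch (not portable everywhere; <span class='inlinecode'>do_work</span> is a placeholder as above):</span><br />
<br />
<pre>
exec 9&gt;/tmp/myjob.lockfile
if ! flock -n 9; then
    echo "Another instance is running" &gt;&amp;2
    exit 1
fi
# critical section; the lock is released when fd 9 closes on exit
do_work
</pre>
<br />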
<h2 style='display: inline' id='smarter-globs-and-faster-find-exec'>Smarter globs and faster find-exec</h2><br />
<br />
<ul>
<li>Enable extended globs when useful: <span class='inlinecode'>shopt -s extglob</span>; then patterns like <span class='inlinecode'>!(tmp|cache)</span> work.</li>
<li>Use <span class='inlinecode'>-exec ... {} +</span> to batch many paths in fewer process invocations:</li>
</ul><br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>find . -name <font color="#808080">'*.log'</font> -exec gzip -<font color="#000000">9</font> {} +
</pre>
<br />
<span>Example for extglob (exclude two dirs from listing):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">shopt</font></u></b> -s extglob
ls -d -- !(.git|node_modules) <font color="#000000">2</font>&gt;/dev/null
</pre>
<br />
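<span>In the same spirit as <span class='inlinecode'>extglob</span>: <span class='inlinecode'>shopt -s globstar</span> (Bash ≥ 4.0) makes <span class='inlinecode'>**</span> match recursively:</span><br />
<br />
<pre>
shopt -s globstar
printf '%s\n' **/*.log   # recurses into subdirectories, pure Bash
</pre>
<br />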
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2025-09-14-bash-golf-part-4.html'>2025-09-14 Bash Golf Part 4 (You are currently reading this)</a><br />
<a class='textlink' href='./2023-12-10-bash-golf-part-3.html'>2023-12-10 Bash Golf Part 3</a><br />
<a class='textlink' href='./2022-01-01-bash-golf-part-2.html'>2022-01-01 Bash Golf Part 2</a><br />
<a class='textlink' href='./2021-11-29-bash-golf-part-1.html'>2021-11-29 Bash Golf Part 1</a><br />
<a class='textlink' href='./2021-06-05-gemtexter-one-bash-script-to-rule-it-all.html'>2021-06-05 Gemtexter - One Bash script to rule it all</a><br />
<a class='textlink' href='./2021-05-16-personal-bash-coding-style-guide.html'>2021-05-16 Personal Bash coding style guide</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Random Weird Things - Part Ⅲ</title>
        <link href="https://foo.zone/gemfeed/2025-08-15-random-weird-things-iii.html" />
        <id>https://foo.zone/gemfeed/2025-08-15-random-weird-things-iii.html</id>
        <updated>2025-08-14T23:21:32+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Every so often, I come across random, weird, and unexpected things on the internet. It would be neat to share them here from time to time. This is the third run.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='random-weird-things---part-'>Random Weird Things - Part Ⅲ</h1><br />
<br />
<span class='quote'>Published at 2025-08-14T23:21:32+03:00</span><br />
<br />
<span>Every so often, I come across random, weird, and unexpected things on the internet. It would be neat to share them here from time to time. This is the third run.</span><br />
<br />
<a class='textlink' href='./2024-07-05-random-weird-things.html'>2024-07-05 Random Weird Things - Part Ⅰ</a><br />
<a class='textlink' href='./2025-02-08-random-weird-things-ii.html'>2025-02-08 Random Weird Things - Part Ⅱ</a><br />
<a class='textlink' href='./2025-08-15-random-weird-things-iii.html'>2025-08-15 Random Weird Things - Part Ⅲ (You are currently reading this)</a><br />
<br />
<pre>
 /\_/\        /\_/\        /\_/\
( o.o ) WHOA!( o.o ) WHOA!( o.o )
 &gt; ^ &lt;        &gt; ^ &lt;        &gt; ^ &lt;
 /   \  MEOW! /   \  MOEEW!/   \
/_____\      /_____\      /_____\
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#random-weird-things---part-'>Random Weird Things - Part Ⅲ</a></li>
<li>⇢ <a href='#21-doom-in-typescripts-type-system'>21. Doom in TypeScript’s type system</a></li>
<li>⇢ <a href='#run-it-in-a-pdf'>Run it in a PDF</a></li>
<li>⇢ ⇢ <a href='#22-doom-inside-a-pdf'>22. Doom inside a PDF</a></li>
<li>⇢ ⇢ <a href='#23-linux-inside-a-pdf'>23. Linux inside a PDF</a></li>
<li>⇢ <a href='#24-sqlite-loves-tcl'>24. SQLite loves Tcl</a></li>
<li>⇢ <a href='#25-fossil-e-and-a-tcltk-chat'>25. Fossil, “e”, and a Tcl/Tk chat</a></li>
<li>⇢ <a href='#26-kubernetes-from-an-excel-spreadsheet'>26. Kubernetes from an Excel spreadsheet</a></li>
<li>⇢ <a href='#27-sre-means-sorry'>27. SRE means “Sorry…”</a></li>
<li>⇢ <a href='#28-touch-grass-the-app'>28. Touch Grass, the app</a></li>
<li>⇢ <a href='#29-blogging-with-the-c-preprocessor'>29. Blogging with the C preprocessor</a></li>
<li>⇢ <a href='#30-accidentally-turing-complete'>30. Accidentally Turing-complete</a></li>
</ul><br />
<h2 style='display: inline' id='21-doom-in-typescripts-type-system'>21. Doom in TypeScript’s type system</h2><br />
<br />
<span>Yes, really. Someone has implemented Doom to run within the TypeScript type system—compile-time madness, but fun to watch.</span><br />
<br />
<a class='textlink' href='https://www.youtube.com/watch?v=0mCsluv5FXA'>Doom in the TS type system</a><br />
<br />
<span>TypeScript’s type checker is surprisingly expressive: conditional types, recursion, and template literal types let you encode nontrivial logic that “executes” during compilation. The demo exploits this to build a tiny ray-caster that renders as compiler errors or types. It’s wildly impractical, but a great reminder that enough expressiveness plus recursion tends to drift toward Turing completeness.</span><br />
<br />
<h2 style='display: inline' id='run-it-in-a-pdf'>Run it in a PDF</h2><br />
<br />
<h3 style='display: inline' id='22-doom-inside-a-pdf'>22. Doom inside a PDF</h3><br />
<br />
<span>Running Doom embedded in a PDF file. No separate binary—just a cursed document.</span><br />
<br />
<a class='textlink' href='https://github.com/ading2210/doompdf'>doompdf</a><br />
<br />
<span>This relies on features like PDF JavaScript and interactive objects, which some viewers still support. Expect mixed results: many modern readers sandbox or disable scripting by default for security. If you try it, use a compatible desktop viewer and be prepared for portability quirks.</span><br />
<br />
<h3 style='display: inline' id='23-linux-inside-a-pdf'>23. Linux inside a PDF</h3><br />
<br />
<span>Boot a tiny Linux inside a PDF. This rabbit hole goes deep.</span><br />
<br />
<a class='textlink' href='https://github.com/ading2210/linuxpdf'>linuxpdf</a><br />
<br />
<span>Like the Doom-in-PDF trick, this leans on the PDF runtime to host unconventional logic and rendering. It’s more of an art piece than a daily driver, but it shows how “document” formats can accidentally become platforms. The security posture of PDF viewers varies significantly, so expect inconsistent behaviour across different apps.</span><br />
<br />
<h2 style='display: inline' id='24-sqlite-loves-tcl'>24. SQLite loves Tcl</h2><br />
<br />
<span>SQLite was initially designed as a Tcl extension and still relies heavily on Tcl today: the amalgamated C source is generated by <span class='inlinecode'>mksqlite3c.tcl</span>, tests are written in Tcl, and even the documentation is built with it.</span><br />
<br />
<a class='textlink' href='https://www.tcl-lang.org/community/tcl2017/assets/talk93/Paper.html'>Tcl 2017 paper</a><br />
<br />
<span>The famous single-file <span class='inlinecode'>sqlite3.c</span> is not hand-edited—developers maintain sources, plus build scripts that knit everything together deterministically. Their Tcl-centric tooling provides them with reproducible builds and a very opinionated workflow. It’s a great counterexample to the idea that “serious” projects must standardise on the most popular build stacks.</span><br />
<br />
<h2 style='display: inline' id='25-fossil-e-and-a-tcltk-chat'>25. Fossil, “e”, and a Tcl/Tk chat</h2><br />
<br />
<span>The SQLite folks use a custom Tcl/Tk editor called “e”, a homegrown VCS (Fossil), and even a Tcl/Tk chat room for development—peak bespoke tooling.</span><br />
<br />
<a class='textlink' href='https://www.tcl-lang.org/community/tcl2017/assets/talk93/Paper.html'>More details in the paper</a><br />
<br />
<span>Fossil bundles source control, tickets, wiki, and a web UI into a single portable binary—no external services required. The “e” editor and chat complete a tight, integrated loop tailored to their team’s needs and constraints. It’s delightfully “boring tech” that has produced one of the most reliable databases on earth.</span><br />
<br />
<h2 style='display: inline' id='26-kubernetes-from-an-excel-spreadsheet'>26. Kubernetes from an Excel spreadsheet</h2><br />
<br />
<span>Drive <span class='inlinecode'>kubectl</span> from an <span class='inlinecode'>.xlsx</span> file because clusters belong in spreadsheets, apparently.</span><br />
<br />
<a class='textlink' href='https://github.com/learnk8s/xlskubectl'>xlskubectl</a><br />
<br />
<span>Resources are rows; columns map to fields; the tool renders YAML and applies it for you. It’s oddly ergonomic for demos, audits, or letting non‑YAML‑native teammates propose changes. Obviously, be careful—permissions and review gates still matter even if your “IDE” is Excel.</span><br />
<br />
<h2 style='display: inline' id='27-sre-means-sorry'>27. SRE means “Sorry…”</h2><br />
<br />
<span>An industry joke (or truth?) that SRE (short for Site Reliability Engineer) stands for “Sorry…”. </span><br />
<br />
<span>Anecdotes are a good reminder that failure is inevitable and empathy is essential. The best takeaways are about clear communication, graceful degradation, and blameless postmortems. Laughing helps, but guardrails and good on‑call hygiene help even more.</span><br />
<br />
<h2 style='display: inline' id='28-touch-grass-the-app'>28. Touch Grass, the app</h2><br />
<br />
<span>When screens consume too much, this site/app nudges you to go outside.</span><br />
<br />
<a class='textlink' href='https://touchgrass.now/'>Touch grass</a><br />
<br />
<span>It’s simple and playful—sometimes that’s the nudge you need to break doomscroll loops. Treat it like a micro‑ritual: set a reminder, step outside, reset. Your eyes (and nervous system) will thank you.</span><br />
<br />
<h2 style='display: inline' id='29-blogging-with-the-c-preprocessor'>29. Blogging with the C preprocessor</h2><br />
<br />
<span>Use the C preprocessor to assemble a blog. It shouldn’t work this well—and yet.</span><br />
<br />
<a class='textlink' href='https://wheybags.com/blog/macroblog.html'>Macroblog with cpp</a><br />
<br />
<span>Posts are stitched together with <span class='inlinecode'>#include</span>s and macros, giving you DRY content blocks and repeatable builds. It’s hacky, fast, and delightfully text‑only—perfect for people who think makefiles are a UI. Would I recommend it for everyone? No. Is it charming and effective? Absolutely.</span><br />
<br />
<h2 style='display: inline' id='30-accidentally-turing-complete'>30. Accidentally Turing-complete</h2><br />
<br />
<span>A delightful catalogue of systems that unintentionally become Turing-complete.</span><br />
<br />
<a class='textlink' href='https://beza1e1.tuxen.de/articles/accidentally_turing_complete.html'>Accidentally Turing-complete</a><br />
<br />
<span>Give a system conditionals, state, and unbounded composition, and it often crosses the threshold into general computation—whether that was the goal or not. The list includes items such as CSS, regular expression dialects, and even card games. It’s a fun lens for understanding why “just a configuration language” can get complicated fast.</span><br />
<br />
<span>I hope you had some fun. E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Local LLM for Coding with Ollama on macOS</title>
        <link href="https://foo.zone/gemfeed/2025-08-05-local-coding-llm-with-ollama.html" />
        <id>https://foo.zone/gemfeed/2025-08-05-local-coding-llm-with-ollama.html</id>
        <updated>2025-08-04T16:43:39+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>With all the AI buzz around coding assistants, and being a bit concerned about being dependent on third-party cloud providers here, I decided to explore the capabilities of local large language models (LLMs) using Ollama. </summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='local-llm-for-coding-with-ollama-on-macos'>Local LLM for Coding with Ollama on macOS</h1><br />
<br />
<span class='quote'>Published at 2025-08-04T16:43:39+03:00</span><br />
<br />
<pre>
      [::]
     _|  |_
   /  o  o  \                       |
  |    ∆    |  &lt;-- Ollama          / \
  |  \___/  |                     /   \
   \_______/             LLM --&gt; / 30B \
    |     |                     / Qwen3 \
   /|     |\                   /  Coder  \
  /_|     |_\_________________/ quantised \
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#local-llm-for-coding-with-ollama-on-macos'>Local LLM for Coding with Ollama on macOS</a></li>
<li>⇢ <a href='#why-local-llms'>Why Local LLMs?</a></li>
<li>⇢ <a href='#hardware-considerations'>Hardware Considerations</a></li>
<li>⇢ <a href='#basic-setup-and-manual-code-prompting'>Basic Setup and Manual Code Prompting</a></li>
<li>⇢ ⇢ <a href='#installing-ollama-and-a-model'>Installing Ollama and a Model</a></li>
<li>⇢ ⇢ <a href='#example-usage'>Example Usage</a></li>
<li>⇢ <a href='#agentic-coding-with-aider'>Agentic Coding with Aider</a></li>
<li>⇢ ⇢ <a href='#installation'>Installation</a></li>
<li>⇢ ⇢ <a href='#agentic-coding-prompt'>Agentic coding prompt</a></li>
<li>⇢ ⇢ <a href='#compilation--execution'>Compilation &amp; Execution</a></li>
<li>⇢ ⇢ <a href='#the-code'>The code</a></li>
<li>⇢ <a href='#in-editor-code-completion'>In-Editor Code Completion</a></li>
<li>⇢ ⇢ <a href='#installation-of-lsp-ai'>Installation of <span class='inlinecode'>lsp-ai</span></a></li>
<li>⇢ ⇢ <a href='#helix-configuration'>Helix Configuration</a></li>
<li>⇢ ⇢ <a href='#code-completion-in-action'>Code completion in action</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<span>With all the AI buzz around coding assistants, and being a bit concerned about being dependent on third-party cloud providers here, I decided to explore the capabilities of local large language models (LLMs) using Ollama. </span><br />
<br />
<span>Ollama is a powerful tool that brings local AI capabilities directly to your local hardware. By running AI models locally, you can enjoy the benefits of intelligent assistance without relying on cloud services. This document outlines my initial setup and experiences with Ollama, with a focus on coding tasks and agentic coding.</span><br />
<br />
<a class='textlink' href='https://ollama.com/'>https://ollama.com/</a><br />
<br />
<h2 style='display: inline' id='why-local-llms'>Why Local LLMs?</h2><br />
<br />
<span>Using local AI models through Ollama offers several advantages:</span><br />
<br />
<ul>
<li>Data Privacy: Keep your code and data completely private by processing everything locally.</li>
<li>Cost-Effective: Reduce reliance on expensive cloud API calls.</li>
<li>Reliability: Works seamlessly even with spotty internet or offline.</li>
<li>Speed: Avoid network latency and enjoy instant responses while coding. That said, I mostly found Ollama slower than commercial LLM providers, though that may change as models and hardware evolve.</li>
</ul><br />
<h2 style='display: inline' id='hardware-considerations'>Hardware Considerations</h2><br />
<br />
<span>Running large language models locally is currently limited by consumer hardware capabilities:</span><br />
<br />
<ul>
<li>GPU Memory: Most consumer-grade GPUs (even in 2025) top out at 16–24GB of VRAM, making it challenging to run larger models such as 30B (30 billion) parameter LLMs (and models go up to 100 billion parameters and more).</li>
<li>RAM Constraints: On my MacBook Pro with M3 CPU and 36GB RAM, I chose a 14B model (<span class='inlinecode'>qwen2.5-coder:14b-instruct</span>) as it represents a practical balance between capability and resource requirements.</li>
</ul><br />
<span>For reference, here are some key points about running large LLMs locally:</span><br />
<br />
<ul>
<li>Models larger than 30B: I don&#39;t even think about running them locally. One (e.g. from Qwen, DeepSeek or Kimi K2) with several hundred billion parameters could match the "performance" of commercial LLMs (Claude Sonnet 4, etc.). Still, for personal use, the hardware demands are just too high (unless you temporarily "rent" the hardware via the public cloud).</li>
<li>30B models: Require at least 48GB of GPU VRAM for full inference without quantisation. Currently only feasible on high-end professional GPUs (or an Apple-silicon Mac with enough unified RAM).</li>
<li>14B models: Can run with 16-24GB of GPU memory (VRAM), making them suitable for consumer-grade hardware (alternatively, use a quantised larger model).</li>
<li>7B-13B models: Best fit for mainstream consumer hardware, requiring minimal VRAM and running smoothly on mid-range GPUs, but with more limited capabilities than larger models and more hallucinations.</li>
</ul><br />
<span>The model I&#39;ll be mainly using in this blog post (<span class='inlinecode'>qwen2.5-coder:14b-instruct</span>) is particularly interesting as:</span><br />
<br />
<ul>
<li><span class='inlinecode'>instruct</span>: Indicates this is the instruction-tuned variant, optimised for diverse tasks including coding</li>
<li><span class='inlinecode'>coder</span>: Tells me that this model was trained on a mix of code and text data, making it especially effective for programming assistance</li>
</ul><br />
<a class='textlink' href='https://ollama.com/library/qwen2.5-coder'>https://ollama.com/library/qwen2.5-coder</a><br />
<a class='textlink' href='https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct'>https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct</a><br />
<br />
<span>For general thinking tasks, I found <span class='inlinecode'>deepseek-r1:14b</span> to be useful (in the future, I also want to try other <span class='inlinecode'>qwen</span> models here). For instance, I utilised <span class='inlinecode'>deepseek-r1:14b</span> to format this blog post and correct some English errors, demonstrating its effectiveness in natural language processing tasks. Additionally, it has proven invaluable for adding context and enhancing clarity in technical explanations, all while running locally on the MacBook Pro. Admittedly, it was a lot slower than "just using ChatGPT", but it still finished within a minute or so.</span><br />
<br />
<a class='textlink' href='https://ollama.com/library/deepseek-r1:14b'>https://ollama.com/library/deepseek-r1:14b</a><br />
<a class='textlink' href='https://huggingface.co/deepseek-ai/DeepSeek-R1'>https://huggingface.co/deepseek-ai/DeepSeek-R1</a><br />
<br />
<span>A quantised LLM (as mentioned above) is one whose weights have been converted from high-precision representations (typically 16- or 32-bit floating point) to lower-precision formats, such as 8-bit integers. This reduces the overall memory footprint of the model, making it significantly smaller and enabling it to run more efficiently on hardware with limited resources, or allowing higher throughput on GPUs and CPUs. The benefits of quantisation are reduced storage and faster inference times due to simpler computations and better memory bandwidth utilisation. However, quantisation can introduce a drop in model accuracy, because the lower numerical precision means the model cannot represent parameter values as precisely. In some cases, it may lead to instability or unexpected outputs in specific tasks or edge cases.</span><br />
<br />
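<span>As a rule of thumb, the weight-only footprint is simply the parameter count times the bytes per weight (runtime overhead for activations and context comes on top). Here is a minimal Go sketch of that arithmetic; the numbers are illustrative estimates, not exact figures for any specific model file:</span><br />
<br />

```go
package main

import "fmt"

// footprintGB returns the approximate weight-only memory footprint in GB
// for a model with the given parameter count and bits per weight.
func footprintGB(params, bitsPerWeight float64) float64 {
	return params * bitsPerWeight / 8 / 1e9
}

func main() {
	// A 14B model at different precisions:
	fmt.Printf("fp16: %.0f GB\n", footprintGB(14e9, 16)) // 28 GB
	fmt.Printf("int8: %.0f GB\n", footprintGB(14e9, 8))  // 14 GB
	fmt.Printf("q4:   %.0f GB\n", footprintGB(14e9, 4))  // 7 GB
}
```

<br />
<span>This is also why a 4-bit quantised 30B model (roughly 15 GB of weights) can fit into memory where an unquantised one cannot.</span><br />
<br />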
<h2 style='display: inline' id='basic-setup-and-manual-code-prompting'>Basic Setup and Manual Code Prompting</h2><br />
<br />
<h3 style='display: inline' id='installing-ollama-and-a-model'>Installing Ollama and a Model</h3><br />
<br />
<span>To install Ollama, I performed these steps (this assumes that you have already installed Homebrew on your macOS system):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>brew install ollama
rehash
ollama serve
</pre>
<br />
<span>This started up the Ollama server with output like the following (the screenshot already shows some requests that were made):</span><br />
<br />
<a href='./local-coding-LLM-with-ollama/ollama-serve.png'><img alt='Ollama serving' title='Ollama serving' src='./local-coding-LLM-with-ollama/ollama-serve.png' /></a><br />
<br />
<span>And then, in a new terminal, I pulled the model with:</span><br />
<br />
<pre>ollama pull qwen2.<font color="#000000">5</font>-coder:14b-instruct
</pre>
<br />
<span>Now, I was ready to go! It wasn&#39;t so difficult. Now, let&#39;s see how I used this model for coding tasks.</span><br />
<br />
<h3 style='display: inline' id='example-usage'>Example Usage</h3><br />
<br />
<span>I ran the following command to get a Go function for calculating Fibonacci numbers:</span><br />
<br />
<pre>time echo <font color="#808080">"Write a function in golang to print out the Nth fibonacci number, \</font>
<font color="#808080">  only the function without the boilerplate"</font> | ollama run qwen2.<font color="#000000">5</font>-coder:14b-instruct

Output:

func fibonacci(n int) int {
    <b><u><font color="#000000">if</font></u></b> n &lt;= <font color="#000000">1</font> {
        <b><u><font color="#000000">return</font></u></b> n
    }
    a, b := <font color="#000000">0</font>, <font color="#000000">1</font>
    <b><u><font color="#000000">for</font></u></b> i := <font color="#000000">2</font>; i &lt;= n; i++ {
        a, b = b, a+b
    }
    <b><u><font color="#000000">return</font></u></b> b
}

Execution Metrics:

Executed <b><u><font color="#000000">in</font></u></b>    <font color="#000000">4.90</font> secs      fish           external
   usr time   <font color="#000000">15.54</font> millis    <font color="#000000">0.31</font> millis   <font color="#000000">15.24</font> millis
   sys time   <font color="#000000">19.68</font> millis    <font color="#000000">1.02</font> millis   <font color="#000000">18.66</font> millis
</pre>
<br />
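<span>Under the hood, <span class='inlinecode'>ollama run</span> talks to the local server&#39;s HTTP API (by default on port 11434). The same one-shot prompt can be sent as a direct API call; here is a minimal Go sketch, assuming the default endpoint and a non-streaming request to <span class='inlinecode'>/api/generate</span>:</span><br />
<br />

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the JSON body of Ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// buildBody marshals the payload for a one-shot (non-streaming) completion.
func buildBody(model, prompt string) []byte {
	body, _ := json.Marshal(generateRequest{Model: model, Prompt: prompt})
	return body
}

func main() {
	body := buildBody("qwen2.5-coder:14b-instruct",
		"Write a function in golang to print out the Nth fibonacci number")
	// Requires a running `ollama serve` on the default port:
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("Ollama server not reachable:", err)
		return
	}
	defer resp.Body.Close()
	var result struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err == nil {
		fmt.Println(result.Response)
	}
}
```

<br />
<span>With <span class='inlinecode'>stream</span> set to false, the server returns the whole completion in a single JSON object instead of token-by-token chunks.</span><br />
<br />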
<span class='quote'>Note: after writing this blog post, I tried the same with the newer model <span class='inlinecode'>qwen3-coder:30b-a3b-q4_K_M</span> (which had just come out; it&#39;s a quantised 30B model), and it was much faster:</span><br />
<br />
<pre>
Executed in    1.83 secs      fish           external
   usr time   17.82 millis    4.40 millis   13.42 millis
   sys time   17.07 millis    1.57 millis   15.50 millis
</pre>
<br />
<a class='textlink' href='https://ollama.com/library/qwen3-coder:30b-a3b-q4_K_M'>https://ollama.com/library/qwen3-coder:30b-a3b-q4_K_M</a><br />
<br />
<h2 style='display: inline' id='agentic-coding-with-aider'>Agentic Coding with Aider</h2><br />
<br />
<h3 style='display: inline' id='installation'>Installation</h3><br />
<br />
<span>Aider is a tool that enables agentic coding by leveraging AI models (including local ones, as in our case). While setting up OpenAI Codex and OpenCode with Ollama proved challenging (those tools either didn&#39;t know how to use "tools", i.e. the capability to execute external commands or to edit files, or didn&#39;t connect to Ollama at all for some reason), Aider worked smoothly.</span><br />
<br />
<span>To get started, the only thing I had to do was to install it via Homebrew, initialise a Git repository, and then start Aider with the Ollama model <span class='inlinecode'>ollama_chat/qwen2.5-coder:14b-instruct</span>:</span><br />
<br />
<pre>brew install aider
mkdir -p ~/git/aitest &amp;&amp; cd ~/git/aitest &amp;&amp; git init
aider --model ollama_chat/qwen<font color="#000000">2.5</font>-coder:14b-instruct
</pre>
<br />
<a class='textlink' href='https://aider.chat'>https://aider.chat</a><br />
<a class='textlink' href='https://opencode.ai'>https://opencode.ai</a><br />
<a class='textlink' href='https://github.com/openai/codex'>https://github.com/openai/codex</a><br />
<br />
<h3 style='display: inline' id='agentic-coding-prompt'>Agentic coding prompt</h3><br />
<br />
<span>This is the prompt I gave:</span><br />
<br />
<pre>
Create a Go project with these files:

* `cmd/aitest/main.go`: CLI entry point
* `internal/version.go`: Version information (0.0.0), should be printed when the
   program was started with `-version` flag
* `internal/count.go`: File counting functionality, the program should print out
   the number of files in a given subdirectory (the directory is provided as a
   command line flag with `-dir`), if none flag is given, no counting should be
   done
* `README.md`: Installation and usage instructions
</pre>
<br />
<span>It then generated something, but it did not work out of the box, as it had some issues with the imports and package names. So I had to do some follow-up prompts to fix those issues, something like this:</span><br />
<br />
<pre>
* Update import paths to match module name, github.com/yourname/aitest should be
  aitest in main.go
* The package names of internal/count.go and internal/version.go should be
  internal, and not count and version.
</pre>
<br />
<a href='./local-coding-LLM-with-ollama/aider-fix-package.png'><img alt='Aider fixing the packages' title='Aider fixing the packages' src='./local-coding-LLM-with-ollama/aider-fix-package.png' /></a><br />
<br />
<h3 style='display: inline' id='compilation--execution'>Compilation &amp; Execution</h3><br />
<br />
<span>Once that was done, the project was ready and I could compile and run it:</span><br />
<br />
<pre>go build cmd/aitest/main.go
./main -v
<font color="#000000">0.0</font>.<font color="#000000">0</font>
./main -dir .
Number of files <b><u><font color="#000000">in</font></u></b> directory .: <font color="#000000">4</font>
</pre>
<br />
<h3 style='display: inline' id='the-code'>The code</h3><br />
<br />
<span>The code it generated was simple, but functional. The <span class='inlinecode'>./cmd/aitest/main.go</span> file:</span><br />
<br />
<pre><b><u><font color="#000000">package</font></u></b> main

<b><u><font color="#000000">import</font></u></b> (
	<font color="#808080">"flag"</font>
	<font color="#808080">"fmt"</font>
	<font color="#808080">"os"</font>

	<font color="#808080">"aitest/internal"</font>
)

<b><u><font color="#000000">func</font></u></b> main() {
	<b><u><font color="#000000">var</font></u></b> versionFlag <b><font color="#000000">bool</font></b>
	flag.BoolVar(&amp;versionFlag, <font color="#808080">"v"</font>, false, <font color="#808080">"print version"</font>)
	dir := flag.String(<font color="#808080">"dir"</font>, <font color="#808080">""</font>, <font color="#808080">"directory to count files in"</font>)
	flag.Parse()

	<b><u><font color="#000000">if</font></u></b> versionFlag {
		fmt.Println(internal.GetVersion())
		<b><u><font color="#000000">return</font></u></b>
	}

	<b><u><font color="#000000">if</font></u></b> *dir != <font color="#808080">""</font> {
		fileCount, err := internal.CountFiles(*dir)
		<b><u><font color="#000000">if</font></u></b> err != nil {
			fmt.Fprintf(os.Stderr, <font color="#808080">"Error counting files: %v\n"</font>, err)
			os.Exit(<font color="#000000">1</font>)
		}
		fmt.Printf(<font color="#808080">"Number of files in directory %s: %d\n"</font>, *dir, fileCount)
	} <b><u><font color="#000000">else</font></u></b> {
		fmt.Println(<font color="#808080">"No directory specified. No count given."</font>)
	}
}
</pre>
<br />
<span>The <span class='inlinecode'>./internal/version.go</span> file:</span><br />
<br />
<pre><b><u><font color="#000000">package</font></u></b> internal

<b><u><font color="#000000">var</font></u></b> Version = <font color="#808080">"0.0.0"</font>

<b><u><font color="#000000">func</font></u></b> GetVersion() <b><font color="#000000">string</font></b> {
	<b><u><font color="#000000">return</font></u></b> Version
}
</pre>
<br />
<span>The <span class='inlinecode'>./internal/count.go</span> file:</span><br />
<br />
<pre><b><u><font color="#000000">package</font></u></b> internal

<b><u><font color="#000000">import</font></u></b> (
	<font color="#808080">"os"</font>
)

<b><u><font color="#000000">func</font></u></b> CountFiles(dir <b><font color="#000000">string</font></b>) (int, error) {
	files, err := os.ReadDir(dir)
	<b><u><font color="#000000">if</font></u></b> err != nil {
		<b><u><font color="#000000">return</font></u></b> <font color="#000000">0</font>, err
	}

	count := <font color="#000000">0</font>
	<b><u><font color="#000000">for</font></u></b> _, file := <b><u><font color="#000000">range</font></u></b> files {
		<b><u><font color="#000000">if</font></u></b> !file.IsDir() {
			count++
		}
	}

	<b><u><font color="#000000">return</font></u></b> count, nil
}
</pre>
<br />
<span>The code is quite straightforward. Especially for generating boilerplate code like this, it will be useful in many cases!</span><br />
<br />
<h2 style='display: inline' id='in-editor-code-completion'>In-Editor Code Completion</h2><br />
<br />
<span>To leverage Ollama for real-time code completion in my editor, I have integrated it with Helix, my preferred text editor. Helix supports the LSP (Language Server Protocol), which enables advanced code completion features. The <span class='inlinecode'>lsp-ai</span> is an LSP server that can interface with Ollama models for code completion tasks.</span><br />
<br />
<a class='textlink' href='https://helix-editor.com'>https://helix-editor.com</a><br />
<a class='textlink' href='https://github.com/SilasMarvin/lsp-ai'>https://github.com/SilasMarvin/lsp-ai</a><br />
<br />
<h3 style='display: inline' id='installation-of-lsp-ai'>Installation of <span class='inlinecode'>lsp-ai</span></h3><br />
<br />
<span>I installed <span class='inlinecode'>lsp-ai</span> via Rust&#39;s Cargo package manager (if you don&#39;t have Rust installed, you can install it via Homebrew as well):</span><br />
<br />
<pre>cargo install lsp-ai
</pre>
<br />
<h3 style='display: inline' id='helix-configuration'>Helix Configuration</h3><br />
<br />
<span>I edited <span class='inlinecode'>~/.config/helix/languages.toml</span> to include:</span><br />
<br />
<pre>
[[language]]
name = "go"
auto-format= true
diagnostic-severity = "hint"
formatter = { command = "goimports" }
language-servers = [ "gopls", "golangci-lint-lsp", "lsp-ai", "gpt" ]
</pre>
<br />
<span>Note that there is also a <span class='inlinecode'>gpt</span> language server configured, which is for GitHub Copilot, but that is beyond the scope of this blog post. Let&#39;s also configure the <span class='inlinecode'>lsp-ai</span> settings in the same file:</span><br />
<br />
<pre>
[language-server.lsp-ai]
command = "lsp-ai"

[language-server.lsp-ai.config.memory]
file_store = { }

[language-server.lsp-ai.config.models.model1]
type = "ollama"
model =  "qwen2.5-coder"

[language-server.lsp-ai.config.models.model2]
type = "ollama"
model = "mistral-nemo:latest"

[language-server.lsp-ai.config.models.model3]
type = "ollama"
model = "deepseek-r1:14b"

[language-server.lsp-ai.config.completion]
model = "model1"

[language-server.lsp-ai.config.completion.parameters]
max_tokens = 64
max_context = 8096

## Configure the messages per your needs
[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "system"
content = "Instructions:\n- You are an AI programming assistant.\n- Given a
piece of code with the cursor location marked by \"&lt;CURSOR&gt;\", replace
\"&lt;CURSOR&gt;\" with the correct code or comment.\n- First, think step-by-step.\n
- Describe your plan for what to build in pseudocode, written out in great
detail.\n- Then output the code replacing the \"&lt;CURSOR&gt;\"\n- Ensure that your
completion fits within the language context of the provided code snippet (e.g.,
Go, Ruby, Bash, Java, Puppet DSL).\n\nRules:\n- Only respond with code or
comments.\n- Only replace \"&lt;CURSOR&gt;\"; do not include any previously written
code.\n- Never include \"&lt;CURSOR&gt;\" in your response\n- If the cursor is within
a comment, complete the comment meaningfully.\n- Handle ambiguous cases by
providing the most contextually appropriate completion.\n- Be consistent with
your responses."

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "user"
content = "func greet(name) {\n    print(f\"Hello, {&lt;CURSOR&gt;}\")\n}"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "assistant"
content = "name"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "user"
content = "func sum(a, b) {\n    return a + &lt;CURSOR&gt;\n}"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "assistant"
content = "b"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "user"
content = "func multiply(a, b int ) int {\n    a * &lt;CURSOR&gt;\n}"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "assistant"
content = "b"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "user"
content = "// &lt;CURSOR&gt;\nfunc add(a, b) {\n    return a + b\n}"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "assistant"
content = "Adds two numbers"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "user"
content = "// This function checks if a number is even\n&lt;CURSOR&gt;"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "assistant"
content = "func is_even(n) {\n    return n % 2 == 0\n}"

[[language-server.lsp-ai.config.completion.parameters.messages]]
role = "user"
content = "{CODE}"
</pre>
<br />
<span>As you can see, I have also added other models, such as Mistral Nemo and DeepSeek R1, so that I can switch between them in Helix. Other than that, the completion parameters are interesting: they define, via the given example messages, how the LLM should complete the text in the editor.</span><br />
<br />
<span>If you want to see more <span class='inlinecode'>lsp-ai</span> configuration examples, there are some for Vim and Helix in the <span class='inlinecode'>lsp-ai</span> Git repository!</span><br />
<br />
<h3 style='display: inline' id='code-completion-in-action'>Code completion in action</h3><br />
<br />
<span>The screenshot shows how Ollama&#39;s <span class='inlinecode'>qwen2.5-coder</span> model provides code completion suggestions within the Helix editor. Auto-completion is triggered by leaving the cursor idle for a short period (its position is sent to the model as <span class='inlinecode'>&lt;CURSOR&gt;</span>), and Ollama responds with relevant completions based on the context.</span><br />
<br />
<a href='./local-coding-LLM-with-ollama/helix-lsp-ai.png'><img alt='Completing the fib-function' title='Completing the fib-function' src='./local-coding-LLM-with-ollama/helix-lsp-ai.png' /></a><br />
<br />
<span>In the LSP auto-completion, the entry prefixed with <span class='inlinecode'>ai - </span> was generated by <span class='inlinecode'>qwen2.5-coder</span>; the others come from other LSP servers (GitHub Copilot, Go linter, Go language server, etc.).</span><br />
<br />
<span>I found GitHub Copilot to still be faster than <span class='inlinecode'>qwen2.5-coder:14b</span>, but the local LLM is already workable for me. And, as mentioned earlier, local LLMs will likely keep improving, so I am excited about the future of tools like Ollama and Helix.</span><br />
<br />
<span class='quote'>After trying <span class='inlinecode'>qwen3-coder:30b-a3b-q4_K_M</span> (following the publication of this blog post), I found it to be significantly faster and more capable than the previous model, making it a promising option for local coding tasks. Honestly, even my current local setup already handles routine coding stuff pretty well—better than I expected.</span><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>Will there ever be a time when we can run larger models (60B, 100B, ...and larger) on consumer hardware, or even on our phones? We are not quite there yet, but I am optimistic that we will see improvements in the next few years. As hardware becomes more capable and/or cheaper, and as more efficient models (or new techniques that make language models more effective) are developed, the landscape of local AI coding assistants will continue to evolve.</span><br />
<br />
<span>For now, even the models listed in this blog post are promising, and they run on consumer-grade hardware (at least for the initial tests I&#39;ve performed; the tests in this blog post are admittedly simplistic, but they were good for getting started with Ollama and for an initial demonstration). I will continue experimenting with Ollama and other local LLMs to see how they can enhance my coding experience. At some point, I may cancel my Copilot subscription, which I currently use only for in-editor auto-completion.</span><br />
<br />
<span>However, truth be told, I don&#39;t think the setup described in this blog post currently matches the performance of commercial models like Claude Code (Sonnet 4, Opus 4), Gemini 2.5 Pro, the OpenAI models and others. Maybe we could get close with the high-end hardware needed to run the largest Qwen Coder model available, but, as mentioned already, that is out of reach for occasional coders like me. Furthermore, I want to continue coding manually to some degree, as otherwise I will start to forget how to write for-loops, which would be awkward... Then again, do we always need the best model, when AI can help with boilerplate or repetitive tasks even with smaller models?</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS (You are currently reading this)</a><br />
<a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 6: Storage</title>
        <link href="https://foo.zone/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html" />
        <id>https://foo.zone/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html</id>
        <updated>2025-07-13T16:44:29+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the sixth blog post about the f3s series for self-hosting demands in a home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-6-storage'>f3s: Kubernetes with FreeBSD - Part 6: Storage</h1><br />
<br />
<span class='quote'>Published at 2025-07-13T16:44:29+03:00, last updated Wed 19 Mar 2026</span><br />
<br />
<span>This is the sixth blog post about the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-6-storage'>f3s: Kubernetes with FreeBSD - Part 6: Storage</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#additional-storage-capacity'>Additional storage capacity</a></li>
<li>⇢ <a href='#zfs-encryption-keys'>ZFS encryption keys</a></li>
<li>⇢ ⇢ <a href='#ufs-on-usb-keys'>UFS on USB keys</a></li>
<li>⇢ ⇢ <a href='#generating-encryption-keys'>Generating encryption keys</a></li>
<li>⇢ ⇢ <a href='#configuring-zdata-zfs-pool-encryption'>Configuring <span class='inlinecode'>zdata</span> ZFS pool encryption</a></li>
<li>⇢ ⇢ <a href='#migrating-bhyve-vms-to-an-encrypted-bhyve-zfs-volume'>Migrating Bhyve VMs to an encrypted <span class='inlinecode'>bhyve</span> ZFS volume</a></li>
<li>⇢ <a href='#zfs-replication-with-zrepl'>ZFS Replication with <span class='inlinecode'>zrepl</span></a></li>
<li>⇢ ⇢ <a href='#understanding-replication-requirements'>Understanding Replication Requirements</a></li>
<li>⇢ ⇢ <a href='#installing-zrepl'>Installing <span class='inlinecode'>zrepl</span></a></li>
<li>⇢ ⇢ <a href='#configuring-zrepl-on-f1-sink'>Configuring <span class='inlinecode'>zrepl</span> on <span class='inlinecode'>f1</span> (sink)</a></li>
<li>⇢ ⇢ <a href='#enabling-and-starting-zrepl-services'>Enabling and starting <span class='inlinecode'>zrepl</span> services</a></li>
<li>⇢ ⇢ <a href='#monitoring-replication'>Monitoring replication</a></li>
<li>⇢ ⇢ <a href='#verifying-replication-after-reboot'>Verifying replication after reboot</a></li>
<li>⇢ ⇢ <a href='#understanding-failover-limitations-and-design-decisions'>Understanding Failover Limitations and Design Decisions</a></li>
<li>⇢ ⇢ <a href='#mounting-the-nfs-datasets'>Mounting the NFS datasets</a></li>
<li>⇢ <a href='#troubleshooting-files-not-appearing-in-replication'>Troubleshooting: Files not appearing in replication</a></li>
<li>⇢ ⇢ <a href='#configuring-automatic-key-loading-on-boot'>Configuring automatic key loading on boot</a></li>
<li>⇢ ⇢ <a href='#troubleshooting-zrepl-replication-not-working'>Troubleshooting: zrepl Replication Not Working</a></li>
<li>⇢ ⇢ <a href='#check-if-zrepl-services-are-running'>Check if zrepl Services are Running</a></li>
<li>⇢ ⇢ <a href='#check-zrepl-status-for-errors'>Check zrepl Status for Errors</a></li>
<li>⇢ ⇢ <a href='#fixing-no-common-snapshot-errors'>Fixing "No Common Snapshot" Errors</a></li>
<li>⇢ ⇢ <a href='#network-connectivity-issues'>Network Connectivity Issues</a></li>
<li>⇢ ⇢ <a href='#encryption-key-issues'>Encryption Key Issues</a></li>
<li>⇢ ⇢ <a href='#monitoring-ongoing-replication'>Monitoring Ongoing Replication</a></li>
<li>⇢ <a href='#carp-common-address-redundancy-protocol'>CARP (Common Address Redundancy Protocol)</a></li>
<li>⇢ ⇢ <a href='#how-carp-works'>How CARP Works</a></li>
<li>⇢ ⇢ <a href='#configuring-carp'>Configuring CARP</a></li>
<li>⇢ ⇢ <a href='#carp-state-change-notifications'>CARP State Change Notifications</a></li>
<li>⇢ <a href='#nfs-server-configuration'>NFS Server Configuration</a></li>
<li>⇢ ⇢ <a href='#setting-up-nfs-on-f0-primary'>Setting up NFS on <span class='inlinecode'>f0</span> (Primary)</a></li>
<li>⇢ ⇢ <a href='#configuring-stunnel-for-nfs-encryption-with-carp-failover'>Configuring Stunnel for NFS Encryption with CARP Failover</a></li>
<li>⇢ ⇢ <a href='#creating-a-certificate-authority-for-client-authentication'>Creating a Certificate Authority for Client Authentication</a></li>
<li>⇢ ⇢ <a href='#install-and-configure-stunnel-on-f0'>Install and Configure Stunnel on <span class='inlinecode'>f0</span></a></li>
<li>⇢ ⇢ <a href='#setting-up-nfs-on-f1-standby'>Setting up NFS on <span class='inlinecode'>f1</span> (Standby)</a></li>
<li>⇢ ⇢ <a href='#carp-control-script-for-clean-failover'>CARP Control Script for Clean Failover</a></li>
<li>⇢ ⇢ <a href='#carp-management-script'>CARP Management Script</a></li>
<li>⇢ ⇢ <a href='#automatic-failback-after-reboot'>Automatic Failback After Reboot</a></li>
<li>⇢ <a href='#client-configuration-for-nfs-via-stunnel'>Client Configuration for NFS via Stunnel</a></li>
<li>⇢ ⇢ <a href='#configuring-rocky-linux-clients-r0-r1-r2'>Configuring Rocky Linux Clients (<span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, <span class='inlinecode'>r2</span>)</a></li>
<li>⇢ ⇢ <a href='#nfsv4-user-mapping-config-on-rocky'>NFSv4 user mapping config on Rocky</a></li>
<li>⇢ ⇢ <a href='#testing-nfs-mount-with-stunnel'>Testing NFS Mount with Stunnel</a></li>
<li>⇢ ⇢ <a href='#testing-carp-failover-with-mounted-clients-and-stale-file-handles'>Testing CARP Failover with mounted clients and stale file handles:</a></li>
<li>⇢ ⇢ <a href='#complete-failover-test'>Complete Failover Test</a></li>
<li>⇢ <a href='#update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</a></li>
<li>⇢ ⇢ <a href='#upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</a></li>
<li>⇢ ⇢ <a href='#upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
<li>⇢ <a href='#future-storage-explorations'>Future Storage Explorations</a></li>
<li>⇢ ⇢ <a href='#minio-for-s3-compatible-object-storage'>MinIO for S3-Compatible Object Storage</a></li>
<li>⇢ ⇢ <a href='#moosefs-for-distributed-high-availability'>MooseFS for Distributed High Availability</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>In the previous posts, we set up a WireGuard mesh network. In the future, we will also set up a Kubernetes cluster. Kubernetes workloads often require persistent storage for databases, configuration files, and application data. Local storage on each node has significant limitations:</span><br />
<br />
<ul>
<li>No data sharing: Pods (once we run Kubernetes) on different nodes can&#39;t access the same data</li>
<li>Pod mobility: If a pod moves to another node, it loses access to its data</li>
<li>No redundancy: Hardware failure means data loss</li>
</ul><br />
<span>This post implements a robust storage solution using:</span><br />
<br />
<ul>
<li>CARP: For high availability with automatic IP failover</li>
<li>NFS over stunnel: For secure, encrypted network storage</li>
<li>ZFS: For data integrity, encryption, and efficient snapshots</li>
<li><span class='inlinecode'>zrepl</span>: For continuous ZFS replication between nodes</li>
</ul><br />
<span>The result is a highly available, encrypted storage system that survives node failures while providing shared storage to all Kubernetes pods.</span><br />
<br />
<span>Contrary to what was mentioned in the first post of this blog series, we aren&#39;t using HAST but <span class='inlinecode'>zrepl</span> for data replication. Read more about it later in this blog post.</span><br />
<br />
<h2 style='display: inline' id='additional-storage-capacity'>Additional storage capacity</h2><br />
<br />
<span>We add 1 TB of additional storage to each of the nodes (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>) in the form of an SSD drive. The Beelink mini PCs have enough space in the chassis for the extra drive.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-6/drives.jpg'><img src='./f3s-kubernetes-with-freebsd-part-6/drives.jpg' /></a><br />
<br />
<span>Upgrading the storage was as easy as unscrewing the chassis, plugging the drive in, and screwing it back together. The procedure was uneventful! We&#39;re using two different SSD models (Samsung 870 EVO and Crucial BX500) to avoid simultaneous failures from the same manufacturing batch.</span><br />
<br />
<span>We then create the <span class='inlinecode'>zdata</span> ZFS pool on all three nodes:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas zpool create -m /data zdata /dev/ada<font color="#000000">1</font>
paul@f0:~ % zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zdata   928G  <font color="#000000">12</font>.1M   928G        -         -     <font color="#000000">0</font>%     <font color="#000000">0</font>%  <font color="#000000">1</font>.00x    ONLINE  -
zroot   472G  <font color="#000000">29</font>.0G   443G        -         -     <font color="#000000">0</font>%     <font color="#000000">6</font>%  <font color="#000000">1</font>.00x    ONLINE  -

paul@f0:/ % doas camcontrol devlist
&lt;512GB SSD D910R170&gt;               at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
&lt;Samsung SSD <font color="#000000">870</font> EVO 1TB SVT03B6Q&gt;  at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
paul@f0:/ %
</pre>
<br />
<span>To verify that we have a different SSD on the second node (the third node has the same drive as the first):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f1:/ % doas camcontrol devlist
&lt;512GB SSD D910R170&gt;               at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
&lt;CT1000BX500SSD1 M6CR072&gt;          at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
</pre>
<br />
<h2 style='display: inline' id='zfs-encryption-keys'>ZFS encryption keys</h2><br />
<br />
<span>ZFS native encryption requires encryption keys to unlock datasets. We need a secure method to store these keys that balances security with operational needs:</span><br />
<br />
<ul>
<li>Security: Keys must not be stored on the same disks they encrypt</li>
<li>Availability: Keys must be available at boot for automatic mounting</li>
<li>Portability: Keys should be easily moved between systems for recovery</li>
</ul><br />
<span>Using USB flash drives as hardware key storage provides a convenient and elegant solution. The encrypted data is unreadable without physical access to the USB key, protecting against disk theft or improper disposal. In production environments, you may use enterprise key management systems; however, for a home lab, USB keys offer good security with minimal complexity.</span><br />
<br />
<h3 style='display: inline' id='ufs-on-usb-keys'>UFS on USB keys</h3><br />
<br />
<span>We&#39;ll format the USB drives with UFS (Unix File System) rather than ZFS for simplicity; ZFS would be overkill for a small key store.</span><br />
<br />
<span>Let&#39;s see the USB keys:</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-6/usbkeys1.jpg'><img alt='USB keys' title='USB keys' src='./f3s-kubernetes-with-freebsd-part-6/usbkeys1.jpg' /></a><br />
<br />
<span>To verify that the USB key (flash disk) is there:</span><br />
<br />
<pre>
paul@f0:/ % doas camcontrol devlist
&lt;512GB SSD D910R170&gt;               at scbus0 target 0 lun 0 (pass0,ada0)
&lt;Samsung SSD 870 EVO 1TB SVT03B6Q&gt;  at scbus1 target 0 lun 0 (pass1,ada1)
&lt;Generic Flash Disk 8.07&gt;          at scbus2 target 0 lun 0 (da0,pass2)
paul@f0:/ %
</pre>
<br />
<span>Let&#39;s create the UFS file system and mount it (done on all three nodes <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/ % doas newfs /dev/da<font color="#000000">0</font>
/dev/da<font color="#000000">0</font>: <font color="#000000">15000</font>.0MB (<font color="#000000">30720000</font> sectors) block size <font color="#000000">32768</font>, fragment size <font color="#000000">4096</font>
        using <font color="#000000">24</font> cylinder groups of <font color="#000000">625</font>.22MB, <font color="#000000">20007</font> blks, <font color="#000000">80128</font> inodes.
        with soft updates
super-block backups (for fsck_ffs -b #) at:
 192, 1280640, 2561088, 3841536, 5121984, 6402432, 7682880, 8963328, 10243776,
11524224, 12804672, 14085120, 15365568, 16646016, 17926464, 19206912, 20487360,
...

paul@f0:/ % echo <font color="#808080">'/dev/da0 /keys ufs rw 0 2'</font> | doas tee -a /etc/fstab
/dev/da<font color="#000000">0</font> /keys ufs rw <font color="#000000">0</font> <font color="#000000">2</font>
paul@f0:/ % doas mkdir /keys
paul@f0:/ % doas mount /keys
paul@f0:/ % df | grep keys
/dev/da<font color="#000000">0</font>             <font color="#000000">14877596</font>       <font color="#000000">8</font>  <font color="#000000">13687384</font>     <font color="#000000">0</font>%    /keys
</pre>
<br />
<a href='./f3s-kubernetes-with-freebsd-part-6/usbkeys2.jpg'><img alt='USB keys stuck in' title='USB keys stuck in' src='./f3s-kubernetes-with-freebsd-part-6/usbkeys2.jpg' /></a><br />
<br />
<h3 style='display: inline' id='generating-encryption-keys'>Generating encryption keys</h3><br />
<br />
<span>The following keys will later be used to encrypt the ZFS file systems. They will be stored on all three nodes, serving as a backup in case one of the keys is lost or corrupted. When we later replicate encrypted ZFS volumes from one node to another, the keys must also be available on the destination node.</span><br />
<br />
<pre>
paul@f0:/keys % doas openssl rand -out /keys/f0.lan.buetow.org:bhyve.key 32
paul@f0:/keys % doas openssl rand -out /keys/f1.lan.buetow.org:bhyve.key 32
paul@f0:/keys % doas openssl rand -out /keys/f2.lan.buetow.org:bhyve.key 32
paul@f0:/keys % doas openssl rand -out /keys/f0.lan.buetow.org:zdata.key 32
paul@f0:/keys % doas openssl rand -out /keys/f1.lan.buetow.org:zdata.key 32
paul@f0:/keys % doas openssl rand -out /keys/f2.lan.buetow.org:zdata.key 32
paul@f0:/keys % doas chown root *
paul@f0:/keys % doas chmod 400 *

paul@f0:/keys % ls -l
total 20
-r--------  1 root wheel 32 May 25 13:07 f0.lan.buetow.org:bhyve.key
-r--------  1 root wheel 32 May 25 13:07 f1.lan.buetow.org:bhyve.key
-r--------  1 root wheel 32 May 25 13:07 f2.lan.buetow.org:bhyve.key
-r--------  1 root wheel 32 May 25 13:07 f0.lan.buetow.org:zdata.key
-r--------  1 root wheel 32 May 25 13:07 f1.lan.buetow.org:zdata.key
-r--------  1 root wheel 32 May 25 13:07 f2.lan.buetow.org:zdata.key
</pre>
<br />
<span>After creation, these are copied to the other two nodes, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>, into the <span class='inlinecode'>/keys</span> partition (I won&#39;t provide the commands here; create a tarball, copy it over, and extract it on the destination nodes).</span><br />
<br />
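<span>After copying, it may be worth verifying that the key files are byte-identical on every node. One quick check is to compare SHA-256 fingerprints; the sketch below uses a throwaway demo key (the path and filename are hypothetical), while on the real nodes you would digest <span class='inlinecode'>/keys/*</span> on each host and diff the output:</span><br />
<br />

```shell
# Sketch: fingerprint encryption keys so copies can be compared across nodes.
# Uses a throwaway demo key; on f0/f1/f2 you would digest /keys/* instead.
tmpdir=$(mktemp -d)
openssl rand -out "$tmpdir/demo.key" 32   # stand-in for a real 32-byte key file
openssl dgst -sha256 "$tmpdir/demo.key"   # identical hex on every node => identical copy
rm -r "$tmpdir"
```

<span>If the digests differ between nodes, re-copy the affected key before enabling encryption, as a mismatched key makes the replicated datasets unreadable on that node.</span><br />
<br />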
<h3 style='display: inline' id='configuring-zdata-zfs-pool-encryption'>Configuring <span class='inlinecode'>zdata</span> ZFS pool encryption</h3><br />
<br />
<span>Let&#39;s encrypt our <span class='inlinecode'>zdata</span> ZFS pool. We are not encrypting the whole pool, but everything within the <span class='inlinecode'>zdata/enc</span> data set:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/keys % doas zfs create -o encryption=on -o keyformat=raw -o \
  keylocation=file:///keys/`hostname`:zdata.key zdata/enc
paul@f0:/ % zfs list | grep zdata
zdata                                          836K   899G    96K  /data
zdata/enc                                      200K   899G   200K  /data/enc

paul@f0:/keys % zfs get all zdata/enc | grep -E -i <font color="#808080">'(encryption|key)'</font>
zdata/enc  encryption            aes-<font color="#000000">256</font>-gcm                               -
zdata/enc  keylocation           file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key  <b><u><font color="#000000">local</font></u></b>
zdata/enc  keyformat             raw                                       -
zdata/enc  encryptionroot        zdata/enc                                 -
zdata/enc  keystatus             available                                 -
</pre>
<br />
<span>All future data sets within <span class='inlinecode'>zdata/enc</span> will inherit the same encryption key.</span><br />
<br />
<h3 style='display: inline' id='migrating-bhyve-vms-to-an-encrypted-bhyve-zfs-volume'>Migrating Bhyve VMs to an encrypted <span class='inlinecode'>bhyve</span> ZFS volume</h3><br />
<br />
<span>We set up Bhyve VMs in a previous blog post. Their ZFS data sets rely on <span class='inlinecode'>zroot</span>, which is the default ZFS pool on the internal 512GB NVME drive. They aren&#39;t encrypted yet, so we encrypt the VM data sets as well now. To do so, we first shut down the VMs on all three nodes:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/keys % doas vm stop rocky
Sending ACPI shutdown to rocky

paul@f0:/keys % doas vm list
NAME     DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
rocky    default    uefi       <font color="#000000">4</font>    14G     -    Yes [<font color="#000000">1</font>]  Stopped
</pre>
<br />
<span>After this, we rename the unencrypted data set to <span class='inlinecode'>_old</span>, create a new encrypted data set, and also snapshot it as <span class='inlinecode'>@hamburger</span>.</span><br />
<span>  </span><br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/keys % doas zfs rename zroot/bhyve zroot/bhyve_old
paul@f0:/keys % doas zfs <b><u><font color="#000000">set</font></u></b> mountpoint=/mnt zroot/bhyve_old
paul@f0:/keys % doas zfs snapshot zroot/bhyve_old/rocky@hamburger

paul@f0:/keys % doas zfs create -o encryption=on -o keyformat=raw -o \
  keylocation=file:///keys/`hostname`:bhyve.key zroot/bhyve
paul@f0:/keys % doas zfs <b><u><font color="#000000">set</font></u></b> mountpoint=/zroot/bhyve zroot/bhyve
paul@f0:/keys % doas zfs <b><u><font color="#000000">set</font></u></b> mountpoint=/zroot/bhyve/rocky zroot/bhyve/rocky
</pre>
<br />
<span>Once done, we import the snapshot into the encrypted dataset and also copy some other metadata files from <span class='inlinecode'>vm-bhyve</span> back over.</span><br />
<br />
<pre>
paul@f0:/keys % doas zfs send zroot/bhyve_old/rocky@hamburger | \
  doas zfs recv zroot/bhyve/rocky
paul@f0:/keys % doas cp -Rp /mnt/.config /zroot/bhyve/
paul@f0:/keys % doas cp -Rp /mnt/.img /zroot/bhyve/
paul@f0:/keys % doas cp -Rp /mnt/.templates /zroot/bhyve/
paul@f0:/keys % doas cp -Rp /mnt/.iso /zroot/bhyve/
</pre>
<br />
<span>We also have to make encrypted ZFS data sets mount automatically on boot:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/keys % doas sysrc zfskeys_enable=YES
zfskeys_enable:  -&gt; YES
paul@f0:/keys % doas vm init
paul@f0:/keys % doas reboot
.
.
.
paul@f0:~ % doas vm list
NAME     DATASTORE  LOADER     CPU  MEMORY  VNC           AUTO     STATE
rocky    default    uefi       <font color="#000000">4</font>    14G     <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font>  Yes [<font color="#000000">1</font>]  Running (<font color="#000000">2265</font>)
</pre>
<br />
<span>As you can see, the VM is running. This means the encrypted <span class='inlinecode'>zroot/bhyve</span> was mounted successfully after the reboot! Now we can destroy the old, unencrypted, and now unused bhyve dataset:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas zfs destroy -R zroot/bhyve_old
</pre>
<br />
<span>To verify once again that <span class='inlinecode'>zroot/bhyve</span> and <span class='inlinecode'>zroot/bhyve/rocky</span> are now both encrypted, we run:</span><br />
<span>  </span><br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % zfs get all zroot/bhyve | grep -E <font color="#808080">'(encryption|key)'</font>
zroot/bhyve  encryption            aes-<font color="#000000">256</font>-gcm                               -
zroot/bhyve  keylocation           file:///keys/f<font color="#000000">0</font>.lan.buetow.org:bhyve.key  <b><u><font color="#000000">local</font></u></b>
zroot/bhyve  keyformat             raw                                       -
zroot/bhyve  encryptionroot        zroot/bhyve                               -
zroot/bhyve  keystatus             available                                 -

paul@f0:~ % zfs get all zroot/bhyve/rocky | grep -E <font color="#808080">'(encryption|key)'</font>
zroot/bhyve/rocky  encryption            aes-<font color="#000000">256</font>-gcm            -
zroot/bhyve/rocky  keylocation           none                   default
zroot/bhyve/rocky  keyformat             raw                    -
zroot/bhyve/rocky  encryptionroot        zroot/bhyve            -
zroot/bhyve/rocky  keystatus             available              -
</pre>
<br />
<h2 style='display: inline' id='zfs-replication-with-zrepl'>ZFS Replication with <span class='inlinecode'>zrepl</span></h2><br />
<br />
<span>Data replication is the cornerstone of high availability. While CARP handles IP failover (see later in this post), we need continuous data replication to ensure the backup server has current data when it becomes active. Without replication, failover would result in data loss or require shared storage (like iSCSI), which introduces a single point of failure.</span><br />
<br />
<h3 style='display: inline' id='understanding-replication-requirements'>Understanding Replication Requirements</h3><br />
<br />
<span>Our storage system has different replication needs:</span><br />
<br />
<ul>
<li>NFS data (<span class='inlinecode'>/data/nfs/k3svolumes</span>): Will soon contain active Kubernetes persistent volumes. Needs frequent replication (every minute) to minimise data loss during failover.</li>
<li>VM data (<span class='inlinecode'>/zroot/bhyve/freebsd</span>): Contains VM images that change less frequently. Can tolerate longer replication intervals (every 10 minutes).</li>
</ul><br />
<span>The 1-minute replication window is perfectly acceptable for my personal use cases. This isn&#39;t a high-frequency trading system or a real-time database—it&#39;s storage for personal projects, development work, and home lab experiments. Losing at most 1 minute of work in a disaster scenario is a reasonable trade-off for the reliability and simplicity of snapshot-based replication. Additionally, in the case of a "1 minute of data loss," I would likely still have the data available on the client side.</span><br />
<br />
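<span>The two intervals above map directly onto <span class='inlinecode'>zrepl</span>&#39;s snapshotting configuration. The following fragment is only a sketch of a push job on <span class='inlinecode'>f0</span> (the job name, dataset path, port and retention grid are assumptions, not the configuration used in this series):</span><br />
<br />

```yaml
# Sketch of a zrepl push job on f0 (names, port and retention are assumptions).
jobs:
  - name: nfs_to_f1
    type: push
    connect:
      type: tcp
      address: "f1.lan.buetow.org:8888"
    filesystems:
      "zdata/enc/nfs<": true    # replicate this dataset and all children
    send:
      encrypted: true           # send the raw, still-encrypted stream
    snapshotting:
      type: periodic
      interval: 1m              # NFS data: snapshot every minute
      prefix: zrepl_
    pruning:
      keep_sender:
        - type: last_n
          count: 60
      keep_receiver:
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 14x1d
          regex: "^zrepl_"
```

<span>A similar job with <span class='inlinecode'>interval: 10m</span> would cover the Bhyve VM datasets.</span><br />
<br />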
<span>Why use <span class='inlinecode'>zrepl</span> instead of HAST? While HAST (Highly Available Storage) is FreeBSD&#39;s native solution for high-availability storage and supports synchronous replication—thus eliminating the mentioned 1-minute window—I&#39;ve chosen <span class='inlinecode'>zrepl</span> for several important reasons:</span><br />
<br />
<ul>
<li>HAST can cause ZFS corruption: HAST operates at the block level and doesn&#39;t understand ZFS&#39;s transactional semantics. During failover, in-flight transactions can lead to corrupted zpools. I&#39;ve experienced this firsthand (though I may well have misconfigured something): the automatic failover would trigger while ZFS was still writing, resulting in an unmountable pool.</li>
<li>ZFS-aware replication: <span class='inlinecode'>zrepl</span> understands ZFS datasets and snapshots. It replicates at the dataset level, ensuring each snapshot is a consistent point-in-time copy. This is fundamentally safer than block-level replication.</li>
<li>Snapshot history: With <span class='inlinecode'>zrepl</span>, you get multiple recovery points (every minute for NFS data in our setup). If corruption occurs, you can roll back to any previous snapshot. HAST only gives you the current state.</li>
<li>Easier recovery: When something goes wrong with <span class='inlinecode'>zrepl</span>, you still have intact snapshots on both sides. With HAST, a corrupted primary often means a corrupted secondary as well.</li>
</ul><br />
<a class='textlink' href='https://wiki.freebsd.org/HighlyAvailableStorage'>FreeBSD HAST</a><br />
<br />
<h3 style='display: inline' id='installing-zrepl'>Installing <span class='inlinecode'>zrepl</span></h3><br />
<br />
<span>First, install <span class='inlinecode'>zrepl</span> on both hosts involved (we will replicate data from <span class='inlinecode'>f0</span> to <span class='inlinecode'>f1</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas pkg install -y zrepl
</pre>
<br />
<span>Then, we verify the pools and datasets on both hosts:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># On f0</font></i>
paul@f0:~ % doas zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zdata   928G  <font color="#000000">1</font>.03M   928G        -         -     <font color="#000000">0</font>%     <font color="#000000">0</font>%  <font color="#000000">1</font>.00x    ONLINE  -
zroot   472G  <font color="#000000">26</font>.7G   445G        -         -     <font color="#000000">0</font>%     <font color="#000000">5</font>%  <font color="#000000">1</font>.00x    ONLINE  -

paul@f0:~ % doas zfs list -r zdata/enc
NAME        USED  AVAIL  REFER  MOUNTPOINT
zdata/enc   200K   899G   200K  /data/enc

<i><font color="silver"># On f1</font></i>
paul@f1:~ % doas zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zdata   928G   956K   928G        -         -     <font color="#000000">0</font>%     <font color="#000000">0</font>%  <font color="#000000">1</font>.00x    ONLINE  -
zroot   472G  <font color="#000000">11</font>.7G   460G        -         -     <font color="#000000">0</font>%     <font color="#000000">2</font>%  <font color="#000000">1</font>.00x    ONLINE  -

paul@f1:~ % doas zfs list -r zdata/enc
NAME        USED  AVAIL  REFER  MOUNTPOINT
zdata/enc   200K   899G   200K  /data/enc
</pre>
<br />
<span>Since we have a WireGuard tunnel between <span class='inlinecode'>f0</span> and <span class='inlinecode'>f1</span>, we&#39;ll use TCP transport over the secure tunnel instead of SSH. First, check the WireGuard IP addresses:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Check WireGuard interface IPs</font></i>
paul@f0:~ % ifconfig wg0 | grep inet
	inet <font color="#000000">192.168</font>.<font color="#000000">2.130</font> netmask <font color="#000000">0xffffff00</font>

paul@f1:~ % ifconfig wg0 | grep inet
	inet <font color="#000000">192.168</font>.<font color="#000000">2.131</font> netmask <font color="#000000">0xffffff00</font>
</pre>
<br />
<span>Let&#39;s create a dedicated dataset for NFS data that will be replicated:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Create the nfsdata dataset that will hold all data exposed via NFS</font></i>
paul@f0:~ % doas zfs create zdata/enc/nfsdata
</pre>
<br />
<span>Afterwards, we create the <span class='inlinecode'>zrepl</span> configuration on <span class='inlinecode'>f0</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas tee /usr/local/etc/zrepl/zrepl.yml &lt;&lt;<font color="#808080">'EOF'</font>
global:
  logging:
    - <b><u><font color="#000000">type</font></u></b>: stdout
      level: info
      format: human

<b><u><font color="#000000">jobs</font></u></b>:
  - name: f0_to_f1_nfsdata
    <b><u><font color="#000000">type</font></u></b>: push
    connect:
      <b><u><font color="#000000">type</font></u></b>: tcp
      address: <font color="#808080">"192.168.2.131:8888"</font>
    filesystems:
      <font color="#808080">"zdata/enc/nfsdata"</font>: <b><u><font color="#000000">true</font></u></b>
    send:
      encrypted: <b><u><font color="#000000">true</font></u></b>
    snapshotting:
      <b><u><font color="#000000">type</font></u></b>: periodic
      prefix: zrepl_
      interval: 1m
    pruning:
      keep_sender:
        - <b><u><font color="#000000">type</font></u></b>: last_n
          count: <font color="#000000">10</font>
        - <b><u><font color="#000000">type</font></u></b>: grid
          grid: 4x7d | 6x30d
          regex: <font color="#808080">"^zrepl_.*"</font>
      keep_receiver:
        - <b><u><font color="#000000">type</font></u></b>: last_n
          count: <font color="#000000">10</font>
        - <b><u><font color="#000000">type</font></u></b>: grid
          grid: 4x7d | 6x30d
          regex: <font color="#808080">"^zrepl_.*"</font>

  - name: f0_to_f1_freebsd
    <b><u><font color="#000000">type</font></u></b>: push
    connect:
      <b><u><font color="#000000">type</font></u></b>: tcp
      address: <font color="#808080">"192.168.2.131:8888"</font>
    filesystems:
      <font color="#808080">"zroot/bhyve/freebsd"</font>: <b><u><font color="#000000">true</font></u></b>
    send:
      encrypted: <b><u><font color="#000000">true</font></u></b>
    snapshotting:
      <b><u><font color="#000000">type</font></u></b>: periodic
      prefix: zrepl_
      interval: 10m
    pruning:
      keep_sender:
        - <b><u><font color="#000000">type</font></u></b>: last_n
          count: <font color="#000000">10</font>
        - <b><u><font color="#000000">type</font></u></b>: grid
          grid: 4x7d
          regex: <font color="#808080">"^zrepl_.*"</font>
      keep_receiver:
        - <b><u><font color="#000000">type</font></u></b>: last_n
          count: <font color="#000000">10</font>
        - <b><u><font color="#000000">type</font></u></b>: grid
          grid: 4x7d
          regex: <font color="#808080">"^zrepl_.*"</font>
EOF
</pre>
<br />
<span>We&#39;re using two separate replication jobs with different intervals:</span><br />
<br />
<ul>
<li><span class='inlinecode'>f0_to_f1_nfsdata</span>: Replicates NFS data every minute for faster failover recovery</li>
<li><span class='inlinecode'>f0_to_f1_freebsd</span>: Replicates FreeBSD VM every ten minutes (less critical)</li>
</ul><br />
<span>The FreeBSD VM is only used for development purposes, so it doesn&#39;t require replication as frequent as the NFS data. It&#39;s off-topic for this blog series, but it showcases <span class='inlinecode'>zrepl</span>&#39;s flexibility in handling different datasets with varying replication needs.</span><br />
<br />
<span>Furthermore:</span><br />
<br />
<ul>
<li>We&#39;re specifically replicating <span class='inlinecode'>zdata/enc/nfsdata</span> instead of the entire <span class='inlinecode'>zdata/enc</span> dataset. This dedicated dataset will contain all the data we later want to expose via NFS, keeping a clear separation between replicated NFS data and other local encrypted data.</li>
<li>We use <span class='inlinecode'>send: encrypted: true</span> to keep the replication stream encrypted. While WireGuard already encrypts in transit, this provides additional protection. For reduced CPU overhead, you could set <span class='inlinecode'>encrypted: false</span> since the tunnel is secure.</li>
</ul><br />
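<span>If you do decide to rely on WireGuard alone, only the <span class='inlinecode'>send</span> section of each job changes. A sketch of that variant (the trade-off being that the replication stream itself then travels in plain form inside the tunnel):</span><br />
<br />
<pre>send:
  # Rely on the WireGuard tunnel for confidentiality in transit
  # instead of sending the dataset as a raw encrypted stream
  encrypted: false
</pre>
<br />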
<h3 style='display: inline' id='configuring-zrepl-on-f1-sink'>Configuring <span class='inlinecode'>zrepl</span> on <span class='inlinecode'>f1</span> (sink)</h3><br />
<br />
<span>On <span class='inlinecode'>f1</span> (the sink, meaning it&#39;s the node receiving the replication data), we configure <span class='inlinecode'>zrepl</span> to receive the data as follows:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># First, create a dedicated sink dataset</font></i>
paul@f1:~ % doas zfs create zdata/sink

paul@f1:~ % doas tee /usr/local/etc/zrepl/zrepl.yml &lt;&lt;<font color="#808080">'EOF'</font>
global:
  logging:
    - <b><u><font color="#000000">type</font></u></b>: stdout
      level: info
      format: human

<b><u><font color="#000000">jobs</font></u></b>:
  - name: sink
    <b><u><font color="#000000">type</font></u></b>: sink
    serve:
      <b><u><font color="#000000">type</font></u></b>: tcp
      listen: <font color="#808080">"192.168.2.131:8888"</font>
      clients:
        <font color="#808080">"192.168.2.130"</font>: <font color="#808080">"f0"</font>
    recv:
      placeholder:
        encryption: inherit
    root_fs: <font color="#808080">"zdata/sink"</font>
EOF
</pre>
<br />
<h3 style='display: inline' id='enabling-and-starting-zrepl-services'>Enabling and starting <span class='inlinecode'>zrepl</span> services</h3><br />
<br />
<span>We then enable and start <span class='inlinecode'>zrepl</span> on both hosts via:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># On f0</font></i>
paul@f0:~ % doas sysrc zrepl_enable=YES
zrepl_enable:  -&gt; YES
paul@f0:~ % doas service zrepl start
Starting zrepl.

<i><font color="silver"># On f1</font></i>
paul@f1:~ % doas sysrc zrepl_enable=YES
zrepl_enable:  -&gt; YES
paul@f1:~ % doas service zrepl start
Starting zrepl.
</pre>
<br />
<span>To check the replication status, we run:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># On f0, check `zrepl` status (use raw mode for non-tty)</font></i>
paul@f0:~ % doas pkg install jq
paul@f0:~ % doas zrepl status --mode raw | grep -A<font color="#000000">2</font> <font color="#808080">"Replication"</font> | jq .
<font color="#808080">"Replication"</font>:{<font color="#808080">"StartAt"</font>:<font color="#808080">"2025-07-01T22:31:48.712143123+03:00"</font>...

<i><font color="silver"># Check if services are running</font></i>
paul@f0:~ % doas service zrepl status
zrepl is running as pid <font color="#000000">2649</font>.

paul@f1:~ % doas service zrepl status
zrepl is running as pid <font color="#000000">2574</font>.

<i><font color="silver"># Check for `zrepl` snapshots on source</font></i>
paul@f0:~ % doas zfs list -t snapshot -r zdata/enc | grep zrepl
zdata/enc@zrepl_20250701_193148_000    0B      -   176K  -

<i><font color="silver"># On f1, verify the replicated datasets  </font></i>
paul@f1:~ % doas zfs list -r zdata | grep f0
zdata/f<font color="#000000">0</font>             576K   899G   200K  none
zdata/f<font color="#000000">0</font>/zdata       376K   899G   200K  none
zdata/f<font color="#000000">0</font>/zdata/enc   176K   899G   176K  none

<i><font color="silver"># Check replicated snapshots on f1</font></i>
paul@f1:~ % doas zfs list -t snapshot -r zdata | grep zrepl
zdata/f<font color="#000000">0</font>/zdata/enc@zrepl_20250701_193148_000     0B      -   176K  -
zdata/f<font color="#000000">0</font>/zdata/enc@zrepl_20250701_194148_000     0B      -   176K  -
.
.
.
</pre>
<br />
<h3 style='display: inline' id='monitoring-replication'>Monitoring replication</h3><br />
<br />
<span>You can monitor the replication progress with:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas zrepl status
</pre>
<br />
<a href='./f3s-kubernetes-with-freebsd-part-6/zrepl.png'><img alt='zrepl status' title='zrepl status' src='./f3s-kubernetes-with-freebsd-part-6/zrepl.png' /></a><br />
<br />
<span>With this setup, both <span class='inlinecode'>zdata/enc/nfsdata</span> and <span class='inlinecode'>zroot/bhyve/freebsd</span> on <span class='inlinecode'>f0</span> will be automatically replicated to <span class='inlinecode'>f1</span> every 1 minute (or 10 minutes in the case of the FreeBSD VM), with encrypted snapshots preserved on both sides. The pruning policy ensures that we keep the last 10 snapshots while managing disk space efficiently.</span><br />
<br />
<span>The replicated data appears on <span class='inlinecode'>f1</span> under <span class='inlinecode'>zdata/sink/</span> with the source host and dataset hierarchy preserved:</span><br />
<br />
<ul>
<li><span class='inlinecode'>zdata/enc/nfsdata</span> → <span class='inlinecode'>zdata/sink/f0/zdata/enc/nfsdata</span></li>
<li><span class='inlinecode'>zroot/bhyve/freebsd</span> → <span class='inlinecode'>zdata/sink/f0/zroot/bhyve/freebsd</span></li>
</ul><br />
<span>This is by design - <span class='inlinecode'>zrepl</span> preserves the complete path from the source to ensure there are no conflicts when replicating from multiple sources.</span><br />
<br />
<h3 style='display: inline' id='verifying-replication-after-reboot'>Verifying replication after reboot</h3><br />
<br />
<span>The <span class='inlinecode'>zrepl</span> service is configured to start automatically at boot. After rebooting both hosts:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % uptime
<font color="#000000">11</font>:17PM  up <font color="#000000">1</font> min, <font color="#000000">0</font> users, load averages: <font color="#000000">0.16</font>, <font color="#000000">0.06</font>, <font color="#000000">0.02</font>

paul@f0:~ % doas service zrepl status
zrepl is running as pid <font color="#000000">2366</font>.

paul@f1:~ % doas service zrepl status
zrepl is running as pid <font color="#000000">2309</font>.

<i><font color="silver"># Check that new snapshots are being created and replicated</font></i>
paul@f0:~ % doas zfs list -t snapshot | grep zrepl | tail -<font color="#000000">2</font>
zdata/enc/nfsdata@zrepl_20250701_202530_000                0B      -   200K  -
zroot/bhyve/freebsd@zrepl_20250701_202530_000               0B      -  <font color="#000000">2</font>.97G  -
.
.
.

paul@f1:~ % doas zfs list -t snapshot -r zdata/sink | grep <font color="#000000">202530</font>
zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata@zrepl_20250701_202530_000      0B      -   176K  -
zdata/sink/f<font color="#000000">0</font>/zroot/bhyve/freebsd@zrepl_20250701_202530_000     0B      -  <font color="#000000">2</font>.97G  -
.
.
.
</pre>
<br />
<span>The timestamps confirm that replication resumed automatically after the reboot, ensuring continuous data protection. We can also write a test file to the NFS data directory on <span class='inlinecode'>f0</span> and verify whether it appears on <span class='inlinecode'>f1</span> after a minute.</span><br />
<br />
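<span>Such a smoke test could look like this (the file name is arbitrary):</span><br />
<br />
<pre><i><font color="silver"># On f0, write a test file into the replicated dataset</font></i>
paul@f0:~ % echo hello | doas tee /data/nfs/replication-test.txt

<i><font color="silver"># On f1, after the next 1-minute replication cycle, the file should</font></i>
<i><font color="silver"># be visible in the (read-only) standby copy and its snapshots</font></i>
paul@f1:~ % cat /data/nfs/replication-test.txt
paul@f1:~ % ls /data/nfs/.zfs/snapshot/
</pre>
<br />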
<h3 style='display: inline' id='understanding-failover-limitations-and-design-decisions'>Understanding Failover Limitations and Design Decisions</h3><br />
<br />
<span>Our system intentionally fails over to a read-only copy of the replica if the primary fails. This is due to the nature of <span class='inlinecode'>zrepl</span>, which replicates data in one direction only. If we mounted the dataset read-write on the sink node, it would diverge from the original and replication would break. In case of a genuine issue on the primary node, it can still be mounted read-write on the sink node, but that step is intentionally manual, so we avoid having to repair the replication afterwards.</span><br />
<br />
<span>So in summary:</span><br />
<br />
<ul>
<li>Split-brain prevention: Automatic failover to a read-write copy can cause both nodes to become active simultaneously if network communication fails. This leads to data divergence that&#39;s extremely difficult to resolve.</li>
<li>False positive protection: Temporary network issues or high load can trigger unwanted failovers. Manual intervention ensures that failovers occur only when truly necessary.</li>
<li>Data integrity over availability: For storage systems, data consistency is paramount. A few minutes of downtime is preferable to data corruption in this specific use case.</li>
<li>Simplified recovery: With manual failover, you always know which dataset is authoritative, making recovery more straightforward.</li>
</ul><br />
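<span>For reference, a manual promotion of <span class='inlinecode'>f1</span> in a genuine disaster could look roughly like this (a sketch based on the dataset paths from this setup; only do this once <span class='inlinecode'>f0</span> is confirmed down):</span><br />
<br />
<pre><i><font color="silver"># On f1 - stop receiving first so no incoming stream interferes</font></i>
paul@f1:~ % doas service zrepl stop

<i><font color="silver"># Promote the replica by allowing writes</font></i>
paul@f1:~ % doas zfs set readonly=off zdata/sink/f0/zdata/enc/nfsdata

<i><font color="silver"># From now on, f1's copy is authoritative; replication back to f0</font></i>
<i><font color="silver"># must be re-established manually later</font></i>
</pre>
<br />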
<h3 style='display: inline' id='mounting-the-nfs-datasets'>Mounting the NFS datasets</h3><br />
<br />
<span>To make the NFS data accessible on both nodes, we need to mount it. On <span class='inlinecode'>f0</span>, this is straightforward:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># On f0 - set mountpoint for the primary nfsdata</font></i>
paul@f0:~ % doas zfs <b><u><font color="#000000">set</font></u></b> mountpoint=/data/nfs zdata/enc/nfsdata
paul@f0:~ % doas mkdir -p /data/nfs

<i><font color="silver"># Verify it's mounted</font></i>
paul@f0:~ % df -h /data/nfs
Filesystem           Size    Used   Avail Capacity  Mounted on
zdata/enc/nfsdata    899G    204K    899G     <font color="#000000">0</font>%    /data/nfs
</pre>
<br />
<span>On <span class='inlinecode'>f1</span>, we need to handle the encryption key and mount the standby copy:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># On f1 - first check encryption status</font></i>
paul@f1:~ % doas zfs get keystatus zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
NAME                             PROPERTY   VALUE        SOURCE
zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata  keystatus  unavailable  -

<i><font color="silver"># Load the encryption key (using f0's key stored on the USB)</font></i>
paul@f1:~ % doas zfs load-key -L file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key \
    zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata

<i><font color="silver"># Set mountpoint and mount (same path as f0 for easier failover)</font></i>
paul@f1:~ % doas mkdir -p /data/nfs
paul@f1:~ % doas zfs <b><u><font color="#000000">set</font></u></b> mountpoint=/data/nfs zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
paul@f1:~ % doas zfs mount zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata

<i><font color="silver"># Make it read-only to prevent accidental writes that would break replication</font></i>
paul@f1:~ % doas zfs <b><u><font color="#000000">set</font></u></b> <b><u><font color="#000000">readonly</font></u></b>=on zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata

<i><font color="silver"># Verify</font></i>
paul@f1:~ % df -h /data/nfs
Filesystem                         Size    Used   Avail Capacity  Mounted on
zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata    896G    204K    896G     <font color="#000000">0</font>%    /data/nfs
</pre>
<br />
<span>Note: The dataset is mounted at the same path (<span class='inlinecode'>/data/nfs</span>) on both hosts to simplify failover procedures. The dataset on <span class='inlinecode'>f1</span> is set to <span class='inlinecode'>readonly=on</span> to prevent accidental modifications, which, as mentioned earlier, would break replication. If we wrote to it anyway, replication from <span class='inlinecode'>f0</span> to <span class='inlinecode'>f1</span> would fail like this:</span><br />
<br />
<span class='quote'>cannot receive incremental stream: destination zdata/sink/f0/zdata/enc/nfsdata has been modified since most recent snapshot </span><br />
<br />
<span>To fix a broken replication after accidental writes, we can do:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Option 1: Rollback to the last common snapshot (loses local changes)</font></i>
paul@f1:~ % doas zfs rollback zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata@zrepl_20250701_204054_000

<i><font color="silver"># Option 2: Make it read-only to prevent accidents again</font></i>
paul@f1:~ % doas zfs <b><u><font color="#000000">set</font></u></b> <b><u><font color="#000000">readonly</font></u></b>=on zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
</pre>
<br />
<span>And replication should work again!</span><br />
<br />
<h2 style='display: inline' id='troubleshooting-files-not-appearing-in-replication'>Troubleshooting: Files not appearing in replication</h2><br />
<br />
<span>If you write files to <span class='inlinecode'>/data/nfs/</span> on <span class='inlinecode'>f0</span> but they don&#39;t appear on <span class='inlinecode'>f1</span>, first check whether the dataset is actually mounted on <span class='inlinecode'>f0</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas zfs list -o name,mountpoint,mounted | grep nfsdata
zdata/enc/nfsdata                             /data/nfs             yes
</pre>
<br />
<span>If it shows <span class='inlinecode'>no</span>, the dataset isn&#39;t mounted, and files are being written to the underlying directory on the root filesystem rather than to ZFS. Next, check whether the encryption key is loaded:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas zfs get keystatus zdata/enc/nfsdata
NAME               PROPERTY   VALUE        SOURCE
zdata/enc/nfsdata  keystatus  available    -
<i><font color="silver"># If "unavailable", load the key:</font></i>
paul@f0:~ % doas zfs load-key -L file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key zdata/enc/nfsdata
paul@f0:~ % doas zfs mount zdata/enc/nfsdata
</pre>
<br />
<span>You can also verify that files are in the snapshot (not just the directory):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % ls -la /data/nfs/.zfs/snapshot/zrepl_*/
</pre>
<br />
<span>This issue commonly occurs after a reboot if the encryption keys aren&#39;t configured to load automatically.</span><br />
<br />
<h3 style='display: inline' id='configuring-automatic-key-loading-on-boot'>Configuring automatic key loading on boot</h3><br />
<br />
<span>To ensure all additional encrypted datasets are mounted automatically after reboot as well, we do:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># On f0 - configure all encrypted datasets</font></i>
paul@f0:~ % doas sysrc zfskeys_enable=YES
zfskeys_enable: YES -&gt; YES
paul@f0:~ % doas sysrc zfskeys_datasets=<font color="#808080">"zdata/enc zdata/enc/nfsdata zroot/bhyve"</font>
zfskeys_datasets:  -&gt; zdata/enc zdata/enc/nfsdata zroot/bhyve

<i><font color="silver"># Set correct key locations for all datasets</font></i>
paul@f0:~ % doas zfs <b><u><font color="#000000">set</font></u></b> \
  keylocation=file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key zdata/enc/nfsdata

<i><font color="silver"># On f1 - include the replicated dataset</font></i>
paul@f1:~ % doas sysrc zfskeys_enable=YES
zfskeys_enable: YES -&gt; YES
paul@f1:~ % doas sysrc \
  zfskeys_datasets=<font color="#808080">"zdata/enc zroot/bhyve zdata/sink/f0/zdata/enc/nfsdata"</font>
zfskeys_datasets:  -&gt; zdata/enc zroot/bhyve zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata

<i><font color="silver"># Set key location for replicated dataset</font></i>
paul@f1:~ % doas zfs <b><u><font color="#000000">set</font></u></b> \
  keylocation=file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
</pre>
<br />
<span>Important notes:</span><br />
<br />
<ul>
<li>Each encryption root needs its own key load entry</li>
<li>The replicated dataset on <span class='inlinecode'>f1</span> uses the same encryption key as the source on <span class='inlinecode'>f0</span></li>
<li>Always verify datasets are mounted after reboot with <span class='inlinecode'>zfs list -o name,mounted</span></li>
<li>Critical: Always ensure the replicated dataset on <span class='inlinecode'>f1</span> remains read-only with <span class='inlinecode'>doas zfs set readonly=on zdata/sink/f0/zdata/enc/nfsdata</span></li>
</ul><br />
<h3 style='display: inline' id='troubleshooting-zrepl-replication-not-working'>Troubleshooting: zrepl Replication Not Working</h3><br />
<br />
<span>If <span class='inlinecode'>zrepl</span> replication is not working, here&#39;s a systematic approach to diagnose and fix common issues:</span><br />
<br />
<h3 style='display: inline' id='check-if-zrepl-services-are-running'>Check if zrepl Services are Running</h3><br />
<br />
<span>First, verify that <span class='inlinecode'>zrepl</span> is running on both nodes:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Check service status on both f0 and f1</font></i>
paul@f0:~ % doas service zrepl status
paul@f1:~ % doas service zrepl status

<i><font color="silver"># If not running, start the service</font></i>
paul@f0:~ % doas service zrepl start
paul@f1:~ % doas service zrepl start
</pre>
<br />
<h3 style='display: inline' id='check-zrepl-status-for-errors'>Check zrepl Status for Errors</h3><br />
<br />
<span>Use the status command to see detailed error information:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Check detailed status (use --mode raw for non-tty environments)</font></i>
paul@f0:~ % doas zrepl status --mode raw

<i><font color="silver"># Look for error messages in the replication section</font></i>
<i><font color="silver"># Common errors include "no common snapshot" or connection failures</font></i>
</pre>
<br />
<h3 style='display: inline' id='fixing-no-common-snapshot-errors'>Fixing "No Common Snapshot" Errors</h3><br />
<br />
<span>This is the most common replication issue, typically occurring when:</span><br />
<br />
<ul>
<li>The receiver has existing snapshots that don&#39;t match the sender</li>
<li>Different snapshot naming schemes are in use</li>
<li>The receiver dataset was created independently</li>
</ul><br />
<span>Error message example:</span><br />
<pre>
no common snapshot or suitable bookmark between sender and receiver
</pre>
<br />
<span>Solution: Clean up conflicting snapshots on the receiver</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># First, identify the destination dataset on f1</font></i>
paul@f1:~ % doas zfs list | grep sink

<i><font color="silver"># Check existing snapshots on the problematic dataset</font></i>
paul@f1:~ % doas zfs list -t snapshot | grep nfsdata

<i><font color="silver"># If you see snapshots with different naming (e.g., @daily-*, @weekly-*)</font></i>
<i><font color="silver"># these conflict with zrepl's @zrepl_* snapshots</font></i>

<i><font color="silver"># Destroy the entire destination dataset to allow clean replication</font></i>
paul@f1:~ % doas zfs destroy -r zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata

<i><font color="silver"># For VM replication, do the same for the freebsd dataset</font></i>
paul@f1:~ % doas zfs destroy -r zdata/sink/f<font color="#000000">0</font>/zroot/bhyve/freebsd

<i><font color="silver"># Wake up zrepl to start fresh replication</font></i>
paul@f0:~ % doas zrepl signal wakeup f0_to_f1_nfsdata
paul@f0:~ % doas zrepl signal wakeup f0_to_f1_freebsd

<i><font color="silver"># Check replication status</font></i>
paul@f0:~ % doas zrepl status --mode raw
</pre>
<br />
<span>Verification that replication is working:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Look for "stepping" state and active zfs send processes</font></i>
paul@f0:~ % doas zrepl status --mode raw | grep -A<font color="#000000">5</font> <font color="#808080">"State.*stepping"</font>

<i><font color="silver"># Check for active ZFS commands</font></i>
paul@f0:~ % doas zrepl status --mode raw | grep -A<font color="#000000">10</font> <font color="#808080">"ZFSCmds.*Active"</font>

<i><font color="silver"># Monitor progress - bytes replicated should be increasing</font></i>
paul@f0:~ % doas zrepl status --mode raw | grep BytesReplicated
</pre>
<br />
<h3 style='display: inline' id='network-connectivity-issues'>Network Connectivity Issues</h3><br />
<br />
<span>If replication fails to connect:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Test connectivity between nodes</font></i>
paul@f0:~ % nc -zv <font color="#000000">192.168</font>.<font color="#000000">2.131</font> <font color="#000000">8888</font>

<i><font color="silver"># Check if zrepl is listening on f1</font></i>
paul@f1:~ % doas netstat -an | grep <font color="#000000">8888</font>

<i><font color="silver"># Verify WireGuard tunnel is working</font></i>
paul@f0:~ % ping <font color="#000000">192.168</font>.<font color="#000000">2.131</font>
</pre>
<br />
<h3 style='display: inline' id='encryption-key-issues'>Encryption Key Issues</h3><br />
<br />
<span>If encrypted replication fails:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Verify encryption keys are available on both nodes</font></i>
paul@f0:~ % doas zfs get keystatus zdata/enc/nfsdata
paul@f1:~ % doas zfs get keystatus zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata

<i><font color="silver"># Load keys if unavailable</font></i>
paul@f1:~ % doas zfs load-key -L file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key \
    zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
</pre>
<br />
<h3 style='display: inline' id='monitoring-ongoing-replication'>Monitoring Ongoing Replication</h3><br />
<br />
<span>After fixing issues, monitor replication health:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Monitor replication progress (run repeatedly to check status)</font></i>
paul@f0:~ % doas zrepl status --mode raw | grep -A<font color="#000000">10</font> BytesReplicated

<i><font color="silver"># Or install watch from ports and use it</font></i>
paul@f0:~ % doas pkg install watch
paul@f0:~ % watch -n <font color="#000000">5</font> <font color="#808080">'doas zrepl status --mode raw | grep -A10 BytesReplicated'</font>

<i><font color="silver"># Check for new snapshots being created</font></i>
paul@f0:~ % doas zfs list -t snapshot | grep zrepl | tail -<font color="#000000">5</font>

<i><font color="silver"># Verify snapshots appear on receiver</font></i>
paul@f1:~ % doas zfs list -t snapshot -r zdata/sink | grep zrepl | tail -<font color="#000000">5</font>
</pre>
<br />
<span>This troubleshooting process resolves the most common <span class='inlinecode'>zrepl</span> issues and ensures continuous data replication between your storage nodes.</span><br />
<br />
<h2 style='display: inline' id='carp-common-address-redundancy-protocol'>CARP (Common Address Redundancy Protocol)</h2><br />
<br />
<span>High availability is crucial for storage systems: if the storage server goes down, all NFS clients (which, later in this series, will also be Kubernetes pods) lose access to their persistent data. CARP solves this by providing a virtual IP address (VIP) that automatically migrates to another server during a failure. Clients use the VIP for their NFS mounts, so they always reach whichever node is currently primary.</span><br />
<br />
<h3 style='display: inline' id='how-carp-works'>How CARP Works</h3><br />
<br />
<span>In our case, CARP allows two hosts (<span class='inlinecode'>f0</span> and <span class='inlinecode'>f1</span>) to share a virtual IP address (VIP). The hosts communicate via multicast to elect a MASTER, while the other remains BACKUP. When the MASTER fails, the BACKUP automatically promotes itself and takes over the VIP. This happens within seconds.</span><br />
<br />
<span>Key benefits for our storage system:</span><br />
<br />
<ul>
<li>Automatic failover: No manual intervention is required for basic failures, although there are limitations: as we have already learned, the backup only has read-only access to the replicated data by default.</li>
<li>Transparent to clients: Pods continue using the same IP address</li>
<li>Works with <span class='inlinecode'>stunnel</span>: Behind the VIP, there will be a <span class='inlinecode'>stunnel</span> process running, which ensures encrypted connections follow the active server.</li>
</ul><br />
<a class='textlink' href='https://docs-archive.freebsd.org/doc/13.0-RELEASE/usr/local/share/doc/freebsd/en/books/handbook/carp.html'>FreeBSD CARP</a><br />
<a class='textlink' href='https://www.stunnel.org/'>Stunnel</a><br />
<br />
<h3 style='display: inline' id='configuring-carp'>Configuring CARP</h3><br />
<br />
<span>First, we add the CARP configuration to <span class='inlinecode'>/etc/rc.conf</span> on both <span class='inlinecode'>f0</span> and <span class='inlinecode'>f1</span>:</span><br />
<br />
<span class='quote'>Update: Sun 4 Jan 00:17:00 EET 2026 - Added <span class='inlinecode'>advskew 100</span> to f1 so f0 always wins CARP elections when it comes back online after a reboot.</span><br />
<br />
<pre><i><font color="silver"># On f0 - The virtual IP 192.168.1.138 will float between f0 and f1</font></i>
ifconfig_re0_alias0=<font color="#808080">"inet vhid 1 pass testpass alias 192.168.1.138/32"</font>

<i><font color="silver"># On f1 - Higher advskew means lower priority, so f0 wins elections</font></i>
ifconfig_re0_alias0=<font color="#808080">"inet vhid 1 advskew 100 pass testpass alias 192.168.1.138/32"</font>
</pre>
<br />
<span>Where:</span><br />
<br />
<ul>
<li><span class='inlinecode'>vhid 1</span>: Virtual Host ID - must match on all CARP members</li>
<li><span class='inlinecode'>advskew</span>: Advertisement skew - higher value means lower priority (f1 uses 100, f0 uses default 0)</li>
<li><span class='inlinecode'>pass testpass</span>: Password for CARP authentication (if you follow this, use a different password!)</li>
<li><span class='inlinecode'>alias 192.168.1.138/32</span>: The virtual IP address with a /32 netmask</li>
</ul><br />
<span>Next, update <span class='inlinecode'>/etc/hosts</span> on all nodes (<span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, <span class='inlinecode'>f2</span>, <span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, <span class='inlinecode'>r2</span>) to resolve the VIP hostname:</span><br />
<br />
<pre>
192.168.2.138 f3s-storage-ha f3s-storage-ha.wg0 f3s-storage-ha.wg0.wan.buetow.org
fd42:beef:cafe:2::138 f3s-storage-ha f3s-storage-ha.wg0 f3s-storage-ha.wg0.wan.buetow.org
</pre>
<br />
<span>This allows clients to connect to <span class='inlinecode'>f3s-storage-ha</span> regardless of which physical server is currently the MASTER.</span><br />
<br />
<h3 style='display: inline' id='carp-state-change-notifications'>CARP State Change Notifications</h3><br />
<br />
<span>To correctly manage services during failover, we need to detect CARP state changes. FreeBSD&#39;s devd system can notify us when CARP transitions between MASTER and BACKUP states.</span><br />
<br />
<span>Add this to <span class='inlinecode'>/etc/devd.conf</span> on both <span class='inlinecode'>f0</span> and <span class='inlinecode'>f1</span>:</span><br />
<br />
<pre>paul@f0:~ % cat &lt;&lt;END | doas tee -a /etc/devd.conf
notify <font color="#000000">0</font> {
        match <font color="#808080">"system"</font>          <font color="#808080">"CARP"</font>;
        match <font color="#808080">"subsystem"</font>       <font color="#808080">"[0-9]+@[0-9a-z.]+"</font>;
        match <font color="#808080">"type"</font>            <font color="#808080">"(MASTER|BACKUP)"</font>;
        action <font color="#808080">"/usr/local/bin/carpcontrol.sh $subsystem $type"</font>;
};
END

paul@f0:~ % doas service devd restart
</pre>
<br />
<span>Next, we create the CARP control script that devd invokes whenever the CARP state changes:</span><br />
<br />
<span class='quote'>Update: Fixed the script at Sat 3 Jan 23:55:11 EET 2026 - changed <span class='inlinecode'>$1</span> to <span class='inlinecode'>$2</span> because devd passes <span class='inlinecode'>$subsystem $type</span>, so the state is in the second argument.</span><br />
<br />
<pre>paul@f0:~ % doas tee /usr/local/bin/carpcontrol.sh &lt;&lt;<font color="#808080">'EOF'</font>
<i><font color="silver">#!/bin/sh</font></i>
<i><font color="silver"># CARP state change control script</font></i>

<b><u><font color="#000000">case</font></u></b> <font color="#808080">"$2"</font> <b><u><font color="#000000">in</font></u></b>
    MASTER)
        logger <font color="#808080">"CARP state changed to MASTER, starting services"</font>
        ;;
    BACKUP)
        logger <font color="#808080">"CARP state changed to BACKUP, stopping services"</font>
        ;;
    *)
        logger <font color="#808080">"CARP state changed to $2 (unhandled)"</font>
        ;;
<b><u><font color="#000000">esac</font></u></b>
EOF

paul@f0:~ % doas chmod +x /usr/local/bin/carpcontrol.sh

<i><font color="silver"># Copy the same script to f1</font></i>
paul@f0:~ % scp /usr/local/bin/carpcontrol.sh f1:/tmp/
paul@f1:~ % doas mv /tmp/carpcontrol.sh /usr/local/bin/
paul@f1:~ % doas chmod +x /usr/local/bin/carpcontrol.sh
</pre>
<br />
<span>Note that <span class='inlinecode'>carpcontrol.sh</span> doesn&#39;t do anything useful yet. We will provide more details (including starting and stopping services upon failover) later in this blog post.</span><br />
<br />
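<span>Getting the argument positions right matters here: devd expands the action line to something like <span class='inlinecode'>carpcontrol.sh 1@re0 MASTER</span>, so the subsystem lands in <span class='inlinecode'>$1</span> and the state in <span class='inlinecode'>$2</span>. A minimal sketch simulating that invocation (the value <span class='inlinecode'>1@re0</span> is a hypothetical vhid@interface pair; runnable anywhere):</span><br />
<br />

```shell
#!/bin/sh
# Simulate how devd invokes the CARP control script. The devd action
#   "/usr/local/bin/carpcontrol.sh $subsystem $type"
# expands to e.g.: carpcontrol.sh 1@re0 MASTER
handle() {
    # $1 = subsystem (vhid@interface), $2 = CARP state
    case "$2" in
        MASTER|BACKUP) echo "subsystem=$1 state=$2" ;;
        *)             echo "subsystem=$1 state=$2 (unhandled)" ;;
    esac
}
handle 1@re0 MASTER   # prints: subsystem=1@re0 state=MASTER
handle 1@re0 INIT     # prints: subsystem=1@re0 state=INIT (unhandled)
```
<br />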
<span>To enable CARP in <span class='inlinecode'>/boot/loader.conf</span>, run:</span><br />
<br />
<pre>paul@f0:~ % echo <font color="#808080">'carp_load="YES"'</font> | doas tee -a /boot/loader.conf
carp_load=<font color="#808080">"YES"</font>
paul@f1:~ % echo <font color="#808080">'carp_load="YES"'</font> | doas tee -a /boot/loader.conf  
carp_load=<font color="#808080">"YES"</font>
</pre>
<br />
<span>Then reboot both hosts or run <span class='inlinecode'>doas kldload carp</span> to load the module immediately. </span><br />
<br />
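<span>After the reboot, <span class='inlinecode'>ifconfig re0</span> shows a <span class='inlinecode'>carp:</span> line with the current state. As a small sketch, the state and vhid can be extracted from that line like this (the sample line is hardcoded for illustration; on <span class='inlinecode'>f0</span> or <span class='inlinecode'>f1</span> you would pipe the real <span class='inlinecode'>ifconfig</span> output instead):</span><br />
<br />

```shell
#!/bin/sh
# Parse the CARP state and vhid out of an ifconfig-style line.
# Hardcoded sample for illustration; on a real host use:
#   ifconfig re0 | awk '/carp:/ {print $2}'
sample="	carp: MASTER vhid 1 advbase 1 advskew 0"
state=$(printf '%s\n' "$sample" | awk '/carp:/ {print $2}')
vhid=$(printf '%s\n' "$sample" | sed -n 's/.*vhid \([0-9][0-9]*\).*/\1/p')
echo "state=$state vhid=$vhid"   # prints: state=MASTER vhid=1
```
<br />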
<h2 style='display: inline' id='nfs-server-configuration'>NFS Server Configuration</h2><br />
<br />
<span>With ZFS replication in place, we can now set up NFS servers on both <span class='inlinecode'>f0</span> and <span class='inlinecode'>f1</span> to export the replicated data. Since native NFS over TLS (RFC 9289) has compatibility issues between Linux and FreeBSD (not digging into the details here, but I couldn&#39;t get it to work), we&#39;ll use stunnel to provide encryption.</span><br />
<br />
<h3 style='display: inline' id='setting-up-nfs-on-f0-primary'>Setting up NFS on <span class='inlinecode'>f0</span> (Primary)</h3><br />
<br />
<span>First, enable the NFS services in rc.conf:</span><br />
<br />
<pre>paul@f0:~ % doas sysrc nfs_server_enable=YES
nfs_server_enable: YES -&gt; YES
paul@f0:~ % doas sysrc nfsv4_server_enable=YES
nfsv4_server_enable: YES -&gt; YES
paul@f0:~ % doas sysrc nfsuserd_enable=YES
nfsuserd_enable: YES -&gt; YES
paul@f0:~ % doas sysrc nfsuserd_flags=<font color="#808080">"-domain lan.buetow.org"</font>
nfsuserd_flags: <font color="#808080">""</font> -&gt; <font color="#808080">"-domain lan.buetow.org"</font>
paul@f0:~ % doas sysrc mountd_enable=YES
mountd_enable: NO -&gt; YES
paul@f0:~ % doas sysrc rpcbind_enable=YES
rpcbind_enable: NO -&gt; YES
</pre>
<br />
<span class='quote'>Update: 08.08.2025: I&#39;ve added the domain to <span class='inlinecode'>nfsuserd_flags</span></span><br />
<br />
<span>We also create a dedicated directory for Kubernetes volumes:</span><br />
<br />
<pre><i><font color="silver"># First, ensure the dataset is mounted</font></i>
paul@f0:~ % doas zfs get mounted zdata/enc/nfsdata
NAME               PROPERTY  VALUE    SOURCE
zdata/enc/nfsdata  mounted   yes      -

<i><font color="silver"># Create the k3svolumes directory</font></i>
paul@f0:~ % doas mkdir -p /data/nfs/k3svolumes
paul@f0:~ % doas chmod <font color="#000000">755</font> /data/nfs/k3svolumes
</pre>
<br />
<span>We also create the <span class='inlinecode'>/etc/exports</span> file. Since we&#39;re using stunnel for encryption, ALL clients must connect through stunnel, which appears as localhost (<span class='inlinecode'>127.0.0.1</span>) to the NFS server:</span><br />
<br />
<pre>paul@f0:~ % doas tee /etc/exports &lt;&lt;<font color="#808080">'EOF'</font>
V4: /data/nfs -sec=sys
/data/nfs -alldirs -maproot=root -network <font color="#000000">127.0</font>.<font color="#000000">0.1</font> -mask <font color="#000000">255.255</font>.<font color="#000000">255.255</font>
EOF
</pre>
<br />
<span>The exports configuration:</span><br />
<br />
<ul>
<li><span class='inlinecode'>V4: /data/nfs -sec=sys</span>: Sets the NFSv4 root directory to /data/nfs, using standard Unix (AUTH_SYS) credentials</li>
<li><span class='inlinecode'>-alldirs</span>: Allows clients to mount any subdirectory of the export</li>
<li><span class='inlinecode'>-maproot=root</span>: Maps the client&#39;s root user to root on the server</li>
<li><span class='inlinecode'>-network 127.0.0.1 -mask 255.255.255.255</span>: Only accepts connections from localhost (<span class='inlinecode'>stunnel</span>)</li>
</ul><br />
<span>To start the NFS services, we run:</span><br />
<br />
<pre>paul@f0:~ % doas service rpcbind start
Starting rpcbind.
paul@f0:~ % doas service mountd start
Starting mountd.
paul@f0:~ % doas service nfsd start
Starting nfsd.
paul@f0:~ % doas service nfsuserd start
Starting nfsuserd.
</pre>
<br />
<h3 style='display: inline' id='configuring-stunnel-for-nfs-encryption-with-carp-failover'>Configuring Stunnel for NFS Encryption with CARP Failover</h3><br />
<br />
<span>Using stunnel with client certificate authentication for NFS encryption provides several advantages:</span><br />
<br />
<ul>
<li>Compatibility: Works with any NFS version and between different operating systems</li>
<li>Strong encryption: Uses TLS/SSL with configurable cipher suites</li>
<li>Transparent: Applications don&#39;t need modification, encryption happens at the transport layer</li>
<li>Performance: Minimal overhead (~2% in benchmarks)</li>
<li>Flexibility: Can encrypt any TCP-based protocol, not just NFS</li>
<li>Strong Authentication: Client certificates provide cryptographic proof of identity</li>
<li>Access Control: Only clients with valid certificates signed by your CA can connect</li>
<li>Certificate Revocation: You can revoke access by removing certificates from the CA</li>
</ul><br />
<span>Stunnel integrates seamlessly with our CARP setup:</span><br />
<br />
<pre>
                    CARP VIP (192.168.1.138)
                           |
    f0 (MASTER) ←---------→|←---------→ f1 (BACKUP)
    stunnel:2323           |           stunnel:stopped
    nfsd:2049              |           nfsd:stopped
                           |
                    Clients connect here
</pre>
<br />
<span>The key insight is that stunnel binds to the CARP VIP. When CARP fails over, the VIP moves to the new MASTER, where stunnel takes over accepting connections. Clients keep connecting to the same IP address throughout.</span><br />
<br />
<h3 style='display: inline' id='creating-a-certificate-authority-for-client-authentication'>Creating a Certificate Authority for Client Authentication</h3><br />
<br />
<span>First, create a CA to sign both server and client certificates:</span><br />
<br />
<pre><i><font color="silver"># On f0 - Create CA</font></i>
paul@f0:~ % doas mkdir -p /usr/local/etc/stunnel/ca
paul@f0:~ % cd /usr/local/etc/stunnel/ca
paul@f0:~ % doas openssl genrsa -out ca-key.pem <font color="#000000">4096</font>
paul@f0:~ % doas openssl req -new -x<font color="#000000">509</font> -days <font color="#000000">3650</font> -key ca-key.pem -out ca-cert.pem \
  -subj <font color="#808080">'/C=US/ST=State/L=City/O=F3S Storage/CN=F3S Stunnel CA'</font>

<i><font color="silver"># Create server certificate</font></i>
paul@f0:~ % cd /usr/local/etc/stunnel
paul@f0:~ % doas openssl genrsa -out server-key.pem <font color="#000000">4096</font>
paul@f0:~ % doas openssl req -new -key server-key.pem -out server.csr \
  -subj <font color="#808080">'/C=US/ST=State/L=City/O=F3S Storage/CN=f3s-storage-ha.lan'</font>
paul@f0:~ % doas openssl x509 -req -days <font color="#000000">3650</font> -in server.csr -CA ca/ca-cert.pem \
  -CAkey ca/ca-key.pem -CAcreateserial -out server-cert.pem

<i><font color="silver"># Create client certificates for authorised clients</font></i>
paul@f0:~ % cd /usr/local/etc/stunnel/ca
paul@f0:~ % doas sh -c <font color="#808080">'for client in r0 r1 r2 earth; do </font>
<font color="#808080">  openssl genrsa -out ${client}-key.pem 4096</font>
<font color="#808080">  openssl req -new -key ${client}-key.pem -out ${client}.csr \</font>
<font color="#808080">    -subj "/C=US/ST=State/L=City/O=F3S Storage/CN=${client}.lan.buetow.org"</font>
<font color="#808080">  openssl x509 -req -days 3650 -in ${client}.csr -CA ca-cert.pem \</font>
<font color="#808080">    -CAkey ca-key.pem -CAcreateserial -out ${client}-cert.pem</font>
<font color="#808080">  # Combine cert and key into a single file for stunnel client</font>
<font color="#808080">  cat ${client}-cert.pem ${client}-key.pem &gt; ${client}-stunnel.pem</font>
<font color="#808080">done'</font>
</pre>
<br />
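<span>Before distributing the client certificates, it&#39;s worth checking that each one actually chains back to the CA. A minimal sketch using a throwaway CA in a temp directory (on <span class='inlinecode'>f0</span> you would instead run the final <span class='inlinecode'>openssl verify</span> line inside <span class='inlinecode'>/usr/local/etc/stunnel/ca</span> against the real files):</span><br />
<br />

```shell
#!/bin/sh
# Verify that a client certificate is signed by the CA.
# Uses a disposable CA in a temp dir so the sketch is self-contained;
# the subjects mirror the real setup, the 2048-bit keys are just for speed.
set -e
cd "$(mktemp -d)"
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -days 1 -key ca-key.pem -out ca-cert.pem \
  -subj '/O=F3S Storage/CN=F3S Stunnel CA'
openssl genrsa -out r0-key.pem 2048 2>/dev/null
openssl req -new -key r0-key.pem -out r0.csr \
  -subj '/O=F3S Storage/CN=r0.lan.buetow.org'
openssl x509 -req -days 1 -in r0.csr -CA ca-cert.pem \
  -CAkey ca-key.pem -CAcreateserial -out r0-cert.pem 2>/dev/null
openssl verify -CAfile ca-cert.pem r0-cert.pem   # prints: r0-cert.pem: OK
```
<br />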
<h3 style='display: inline' id='install-and-configure-stunnel-on-f0'>Install and Configure Stunnel on <span class='inlinecode'>f0</span></h3><br />
<br />
<pre><i><font color="silver"># Install stunnel</font></i>
paul@f0:~ % doas pkg install -y stunnel

<i><font color="silver"># Configure stunnel server with client certificate authentication</font></i>
paul@f0:~ % doas tee /usr/local/etc/stunnel/stunnel.conf &lt;&lt;<font color="#808080">'EOF'</font>
cert = /usr/local/etc/stunnel/server-cert.pem
key = /usr/local/etc/stunnel/server-key.pem

setuid = stunnel
setgid = stunnel

[nfs-tls]
accept = <font color="#000000">192.168</font>.<font color="#000000">1.138</font>:<font color="#000000">2323</font>
connect = <font color="#000000">127.0</font>.<font color="#000000">0.1</font>:<font color="#000000">2049</font>
CAfile = /usr/local/etc/stunnel/ca/ca-cert.pem
verify = <font color="#000000">2</font>
requireCert = yes
EOF

<i><font color="silver"># Enable and start stunnel</font></i>
paul@f0:~ % doas sysrc stunnel_enable=YES
stunnel_enable:  -&gt; YES
paul@f0:~ % doas service stunnel start
Starting stunnel.

<i><font color="silver"># Restart stunnel to apply the CARP VIP binding</font></i>
paul@f0:~ % doas service stunnel restart
Stopping stunnel.
Starting stunnel.
</pre>
<br />
<span>The configuration includes:</span><br />
<br />
<ul>
<li><span class='inlinecode'>verify = 2</span>: Verify client certificate and fail if not provided</li>
<li><span class='inlinecode'>requireCert = yes</span>: Client must present a valid certificate</li>
<li><span class='inlinecode'>CAfile</span>: Path to the CA certificate that signed the client certificates</li>
</ul><br />
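<span>The client side of this tunnel isn&#39;t shown here yet. As a hypothetical sketch of the counterpart configuration, a client such as <span class='inlinecode'>r0</span> would run stunnel in client mode, presenting the combined certificate created above and forwarding a local port to the CARP VIP (the local port 2049 and the file locations on the client are assumptions, not taken from the setup above):</span><br />
<br />

```ini
; /usr/local/etc/stunnel/stunnel.conf on a client such as r0 (sketch)
[nfs-tls]
client = yes
; local endpoint the NFS mount will point at
accept = 127.0.0.1:2049
; CARP VIP, served by whichever node is currently MASTER
connect = 192.168.1.138:2323
; combined client cert + key signed by the CA above
cert = /usr/local/etc/stunnel/r0-stunnel.pem
; CA used to verify the server certificate
CAfile = /usr/local/etc/stunnel/ca-cert.pem
verify = 2
```
<br />
<span>The client would then mount through the local tunnel endpoint, e.g. with something like <span class='inlinecode'>mount -t nfs -o nfsv4 127.0.0.1:/k3svolumes /mnt</span>.</span><br />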
<h3 style='display: inline' id='setting-up-nfs-on-f1-standby'>Setting up NFS on <span class='inlinecode'>f1</span> (Standby)</h3><br />
<br />
<span>Repeat the same configuration on <span class='inlinecode'>f1</span>:</span><br />
<br />
<pre>paul@f1:~ % doas sysrc nfs_server_enable=YES
nfs_server_enable: NO -&gt; YES
paul@f1:~ % doas sysrc nfsv4_server_enable=YES
nfsv4_server_enable: NO -&gt; YES
paul@f1:~ % doas sysrc nfsuserd_enable=YES
nfsuserd_enable: NO -&gt; YES
paul@f1:~ % doas sysrc mountd_enable=YES
mountd_enable: NO -&gt; YES
paul@f1:~ % doas sysrc rpcbind_enable=YES
rpcbind_enable: NO -&gt; YES

paul@f1:~ % doas tee /etc/exports &lt;&lt;<font color="#808080">'EOF'</font>
V4: /data/nfs -sec=sys
/data/nfs -alldirs -maproot=root -network <font color="#000000">127.0</font>.<font color="#000000">0.1</font> -mask <font color="#000000">255.255</font>.<font color="#000000">255.255</font>
EOF

paul@f1:~ % doas service rpcbind start
Starting rpcbind.
paul@f1:~ % doas service mountd start
Starting mountd.
paul@f1:~ % doas service nfsd start
Starting nfsd.
paul@f1:~ % doas service nfsuserd start
Starting nfsuserd.
</pre>
<br />
<span>And to configure stunnel on <span class='inlinecode'>f1</span>, we run:</span><br />
<br />
<pre><i><font color="silver"># Install stunnel</font></i>
paul@f1:~ % doas pkg install -y stunnel

<i><font color="silver"># Copy certificates from f0</font></i>
paul@f0:~ % doas tar -cf /tmp/stunnel-certs.tar \
  -C /usr/local/etc/stunnel server-cert.pem server-key.pem ca
paul@f0:~ % scp /tmp/stunnel-certs.tar f1:/tmp/

paul@f1:~ % cd /usr/local/etc/stunnel &amp;&amp; doas tar -xf /tmp/stunnel-certs.tar

<i><font color="silver"># Configure stunnel server on f1 with client certificate authentication</font></i>
paul@f1:~ % doas tee /usr/local/etc/stunnel/stunnel.conf &lt;&lt;<font color="#808080">'EOF'</font>
cert = /usr/local/etc/stunnel/server-cert.pem
key = /usr/local/etc/stunnel/server-key.pem

setuid = stunnel
setgid = stunnel

[nfs-tls]
accept = <font color="#000000">192.168</font>.<font color="#000000">1.138</font>:<font color="#000000">2323</font>
connect = <font color="#000000">127.0</font>.<font color="#000000">0.1</font>:<font color="#000000">2049</font>
CAfile = /usr/local/etc/stunnel/ca/ca-cert.pem
verify = <font color="#000000">2</font>
requireCert = yes
EOF

<i><font color="silver"># Enable and start stunnel</font></i>
paul@f1:~ % doas sysrc stunnel_enable=YES
stunnel_enable:  -&gt; YES
paul@f1:~ % doas service stunnel start
Starting stunnel.

<i><font color="silver"># Restart stunnel to apply the CARP VIP binding</font></i>
paul@f1:~ % doas service stunnel restart
Stopping stunnel.
Starting stunnel.
</pre>
<br />
<h3 style='display: inline' id='carp-control-script-for-clean-failover'>CARP Control Script for Clean Failover</h3><br />
<br />
<span>With stunnel configured to bind to the CARP VIP (192.168.1.138), only the server that is currently the CARP MASTER will accept stunnel connections. This provides automatic failover for encrypted NFS:</span><br />
<br />
<ul>
<li>When <span class='inlinecode'>f0</span> is CARP MASTER: stunnel on <span class='inlinecode'>f0</span> accepts connections on <span class='inlinecode'>192.168.1.138:2323</span></li>
<li>When <span class='inlinecode'>f1</span> becomes CARP MASTER: stunnel on <span class='inlinecode'>f1</span> starts accepting connections on <span class='inlinecode'>192.168.1.138:2323</span></li>
<li>The backup server&#39;s stunnel process will fail to bind to the VIP and won&#39;t accept connections</li>
</ul><br />
<span>This ensures that clients always connect to the active NFS server through the CARP VIP. To ensure clean failover behaviour and prevent stale file handles, we&#39;ll update our <span class='inlinecode'>carpcontrol.sh</span> script so that it:</span><br />
<br />
<ul>
<li>Stops NFS services on BACKUP nodes (preventing split-brain scenarios)</li>
<li>Starts NFS services only on the MASTER node</li>
<li>Manages stunnel binding to the CARP VIP</li>
</ul><br />
<span>This approach ensures clients can only connect to the active server, eliminating stale handles from the inactive server:</span><br />
<br />
<span class='quote'>Update: Fixed the script at Sat 3 Jan 23:55:11 EET 2026 - changed <span class='inlinecode'>$1</span> to <span class='inlinecode'>$2</span> because devd passes <span class='inlinecode'>$subsystem $type</span>, so the state is in the second argument.</span><br />
<br />
<pre><i><font color="silver"># Create CARP control script on both f0 and f1</font></i>
paul@f0:~ % doas tee /usr/local/bin/carpcontrol.sh &lt;&lt;<font color="#808080">'EOF'</font>
<i><font color="silver">#!/bin/sh</font></i>
<i><font color="silver"># CARP state change control script</font></i>

HOSTNAME=`hostname`

<b><u><font color="#000000">if</font></u></b> [ ! -f /data/nfs/nfs.DO_NOT_REMOVE ]; <b><u><font color="#000000">then</font></u></b>
    logger <font color="#808080">'/data/nfs not mounted, mounting it now!'</font>
    <b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$HOSTNAME"</font> = <font color="#808080">'f0.lan.buetow.org'</font> ]; <b><u><font color="#000000">then</font></u></b>
        zfs load-key -L file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key zdata/enc/nfsdata
        zfs <b><u><font color="#000000">set</font></u></b> mountpoint=/data/nfs zdata/enc/nfsdata
    <b><u><font color="#000000">else</font></u></b>
        zfs load-key -L file:///keys/f<font color="#000000">0</font>.lan.buetow.org:zdata.key zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
        zfs <b><u><font color="#000000">set</font></u></b> mountpoint=/data/nfs zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
        zfs mount zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
        zfs <b><u><font color="#000000">set</font></u></b> <b><u><font color="#000000">readonly</font></u></b>=on zdata/sink/f<font color="#000000">0</font>/zdata/enc/nfsdata
    <b><u><font color="#000000">fi</font></u></b>
    service nfsd stop <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
    service mountd stop <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
<b><u><font color="#000000">fi</font></u></b>


<b><u><font color="#000000">case</font></u></b> <font color="#808080">"$2"</font> <b><u><font color="#000000">in</font></u></b>
    MASTER)
        logger <font color="#808080">"CARP state changed to MASTER, starting services"</font>
        service rpcbind start &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        service mountd start &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        service nfsd start &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        service nfsuserd start &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        service stunnel restart &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        logger <font color="#808080">"CARP MASTER: NFS and stunnel services started"</font>
        ;;
    BACKUP)
        logger <font color="#808080">"CARP state changed to BACKUP, stopping services"</font>
        service stunnel stop &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        service nfsd stop &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        service mountd stop &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        service nfsuserd stop &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
        logger <font color="#808080">"CARP BACKUP: NFS and stunnel services stopped"</font>
        ;;
    *)
        logger <font color="#808080">"CARP state changed to $2 (unhandled)"</font>
        ;;
<b><u><font color="#000000">esac</font></u></b>
EOF

paul@f0:~ % doas chmod +x /usr/local/bin/carpcontrol.sh
</pre>
<br />
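<span>A note on the sentinel file: <span class='inlinecode'>nfs.DO_NOT_REMOVE</span> lives inside the dataset, so its absence means <span class='inlinecode'>/data/nfs</span> is just an empty mountpoint directory and the dataset still needs its key loaded and mounting (presumably the file was created once right after the initial mount). The guard logic in isolation, using a temp directory instead of <span class='inlinecode'>/data/nfs</span>:</span><br />
<br />

```shell
#!/bin/sh
# The guard from carpcontrol.sh in isolation: a marker file inside the
# dataset distinguishes "dataset mounted" from "empty mountpoint".
# A temp directory stands in for /data/nfs.
mnt=$(mktemp -d)
check() {
    if [ ! -f "$mnt/nfs.DO_NOT_REMOVE" ]; then
        echo "not mounted: would load key and mount dataset"
    else
        echo "mounted: nothing to do"
    fi
}
check                            # prints: not mounted: would load key and mount dataset
touch "$mnt/nfs.DO_NOT_REMOVE"   # simulate the mounted dataset
check                            # prints: mounted: nothing to do
```
<br />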
<h3 style='display: inline' id='carp-management-script'>CARP Management Script</h3><br />
<br />
<span>To simplify CARP state management and failover testing, create this helper script on both <span class='inlinecode'>f0</span> and <span class='inlinecode'>f1</span>:</span><br />
<br />
<pre><i><font color="silver"># Create the CARP management script</font></i>
paul@f0:~ % doas tee /usr/local/bin/carp &lt;&lt;<font color="#808080">'EOF'</font>
<i><font color="silver">#!/bin/sh</font></i>
<i><font color="silver"># CARP state management script</font></i>
<i><font color="silver"># Usage: carp [master|backup|auto-failback enable|auto-failback disable]</font></i>
<i><font color="silver"># Without arguments: shows current state</font></i>

<i><font color="silver"># Find the interface with CARP configured</font></i>
CARP_IF=$(ifconfig -l | xargs -n<font color="#000000">1</font> | <b><u><font color="#000000">while</font></u></b> <b><u><font color="#000000">read</font></u></b> <b><u><font color="#000000">if</font></u></b>; <b><u><font color="#000000">do</font></u></b>
    ifconfig <font color="#808080">"$if"</font> <font color="#000000">2</font>&gt;/dev/null | grep -q <font color="#808080">"carp:"</font> &amp;&amp; echo <font color="#808080">"$if"</font> &amp;&amp; <b><u><font color="#000000">break</font></u></b>
<b><u><font color="#000000">done</font></u></b>)

<b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$CARP_IF"</font> ]; <b><u><font color="#000000">then</font></u></b>
    echo <font color="#808080">"Error: No CARP interface found"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># Get CARP VHID</font></i>
VHID=$(ifconfig <font color="#808080">"$CARP_IF"</font> | grep <font color="#808080">"carp:"</font> | sed -n <font color="#808080">'s/.*vhid </font>\(<font color="#808080">[0-9]*</font>\)<font color="#808080">.*/</font>\1<font color="#808080">/p'</font>)

<b><u><font color="#000000">if</font></u></b> [ -z <font color="#808080">"$VHID"</font> ]; <b><u><font color="#000000">then</font></u></b>
    echo <font color="#808080">"Error: Could not determine CARP VHID"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># Function to get the current state</font></i>
get_state() {
    ifconfig <font color="#808080">"$CARP_IF"</font> | grep <font color="#808080">"carp:"</font> | awk <font color="#808080">'{print $2}'</font>
}

<i><font color="silver"># Check for auto-failback block file</font></i>
BLOCK_FILE=<font color="#808080">"/data/nfs/nfs.NO_AUTO_FAILBACK"</font>
check_auto_failback() {
    <b><u><font color="#000000">if</font></u></b> [ -f <font color="#808080">"$BLOCK_FILE"</font> ]; <b><u><font color="#000000">then</font></u></b>
        echo <font color="#808080">"WARNING: Auto-failback is DISABLED (file exists: $BLOCK_FILE)"</font>
    <b><u><font color="#000000">fi</font></u></b>
}

<i><font color="silver"># Main logic</font></i>
<b><u><font color="#000000">case</font></u></b> <font color="#808080">"$1"</font> <b><u><font color="#000000">in</font></u></b>
    <font color="#808080">""</font>)
        <i><font color="silver"># No argument - show current state</font></i>
        STATE=$(get_state)
        echo <font color="#808080">"CARP state on $CARP_IF (vhid $VHID): $STATE"</font>
        check_auto_failback
        ;;
    master)
        <i><font color="silver"># Force to MASTER state</font></i>
        echo <font color="#808080">"Setting CARP to MASTER state..."</font>
        ifconfig <font color="#808080">"$CARP_IF"</font> vhid <font color="#808080">"$VHID"</font> state master
        sleep <font color="#000000">1</font>
        STATE=$(get_state)
        echo <font color="#808080">"CARP state on $CARP_IF (vhid $VHID): $STATE"</font>
        check_auto_failback
        ;;
    backup)
        <i><font color="silver"># Force to BACKUP state</font></i>
        echo <font color="#808080">"Setting CARP to BACKUP state..."</font>
        ifconfig <font color="#808080">"$CARP_IF"</font> vhid <font color="#808080">"$VHID"</font> state backup
        sleep <font color="#000000">1</font>
        STATE=$(get_state)
        echo <font color="#808080">"CARP state on $CARP_IF (vhid $VHID): $STATE"</font>
        check_auto_failback
        ;;
    auto-failback)
        <b><u><font color="#000000">case</font></u></b> <font color="#808080">"$2"</font> <b><u><font color="#000000">in</font></u></b>
            <b><u><font color="#000000">enable</font></u></b>)
                <b><u><font color="#000000">if</font></u></b> [ -f <font color="#808080">"$BLOCK_FILE"</font> ]; <b><u><font color="#000000">then</font></u></b>
                    rm <font color="#808080">"$BLOCK_FILE"</font>
                    echo <font color="#808080">"Auto-failback ENABLED (removed $BLOCK_FILE)"</font>
                <b><u><font color="#000000">else</font></u></b>
                    echo <font color="#808080">"Auto-failback was already enabled"</font>
                <b><u><font color="#000000">fi</font></u></b>
                ;;
            disable)
                <b><u><font color="#000000">if</font></u></b> [ ! -f <font color="#808080">"$BLOCK_FILE"</font> ]; <b><u><font color="#000000">then</font></u></b>
                    touch <font color="#808080">"$BLOCK_FILE"</font>
                    echo <font color="#808080">"Auto-failback DISABLED (created $BLOCK_FILE)"</font>
                <b><u><font color="#000000">else</font></u></b>
                    echo <font color="#808080">"Auto-failback was already disabled"</font>
                <b><u><font color="#000000">fi</font></u></b>
                ;;
            *)
                echo <font color="#808080">"Usage: $0 auto-failback [enable|disable]"</font>
                echo <font color="#808080">"  enable:  Remove block file to allow automatic failback"</font>
                echo <font color="#808080">"  disable: Create block file to prevent automatic failback"</font>
                <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
                ;;
        <b><u><font color="#000000">esac</font></u></b>
        ;;
    *)
        echo <font color="#808080">"Usage: $0 [master|backup|auto-failback enable|auto-failback disable]"</font>
        echo <font color="#808080">"  Without arguments: show current CARP state"</font>
        echo <font color="#808080">"  master: force this node to become CARP MASTER"</font>
        echo <font color="#808080">"  backup: force this node to become CARP BACKUP"</font>
        echo <font color="#808080">"  auto-failback enable:  allow automatic failback to f0"</font>
        echo <font color="#808080">"  auto-failback disable: prevent automatic failback to f0"</font>
        <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
        ;;
<b><u><font color="#000000">esac</font></u></b>
EOF

paul@f0:~ % doas chmod +x /usr/local/bin/carp

<i><font color="silver"># Copy to f1 as well</font></i>
paul@f0:~ % scp /usr/local/bin/carp f1:/tmp/
paul@f1:~ % doas cp /tmp/carp /usr/local/bin/carp &amp;&amp; doas chmod +x /usr/local/bin/carp
</pre>
<br />
<span>Now you can easily manage CARP states and auto-failback:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Check current CARP state</font></i>
paul@f0:~ % doas carp
CARP state on re0 (vhid <font color="#000000">1</font>): MASTER

<i><font color="silver"># If auto-failback is disabled, you'll see a warning</font></i>
paul@f0:~ % doas carp
CARP state on re0 (vhid <font color="#000000">1</font>): MASTER
WARNING: Auto-failback is DISABLED (file exists: /data/nfs/nfs.NO_AUTO_FAILBACK)

<i><font color="silver"># Force f0 to become BACKUP (triggers failover to f1)</font></i>
paul@f0:~ % doas carp backup
Setting CARP to BACKUP state...
CARP state on re0 (vhid <font color="#000000">1</font>): BACKUP

<i><font color="silver"># Disable auto-failback (useful for maintenance)</font></i>
paul@f0:~ % doas carp auto-failback disable
Auto-failback DISABLED (created /data/nfs/nfs.NO_AUTO_FAILBACK)

<i><font color="silver"># Enable auto-failback</font></i>
paul@f0:~ % doas carp auto-failback <b><u><font color="#000000">enable</font></u></b>
Auto-failback ENABLED (removed /data/nfs/nfs.NO_AUTO_FAILBACK)
</pre>
<br />
<h3 style='display: inline' id='automatic-failback-after-reboot'>Automatic Failback After Reboot</h3><br />
<br />
<span>When <span class='inlinecode'>f0</span> reboots (planned or unplanned), <span class='inlinecode'>f1</span> takes over as CARP MASTER. To ensure <span class='inlinecode'>f0</span> automatically reclaims its primary role once it&#39;s fully operational, we&#39;ll implement an automatic failback mechanism with the following script:</span><br />
<br />
<span class='quote'>Update: Fixed the script at Sun 4 Jan 00:04:28 EET 2026 - removed the NFS service check because when f0 is BACKUP, NFS services are intentionally stopped by carpcontrol.sh, which would prevent auto-failback from ever triggering.</span><br />
<br />
<pre>paul@f0:~ % doas tee /usr/local/bin/carp-auto-failback.sh &lt;&lt;<font color="#808080">'EOF'</font>
<i><font color="silver">#!/bin/sh</font></i>
<i><font color="silver"># CARP automatic failback script for f0</font></i>
<i><font color="silver"># Ensures f0 reclaims MASTER role after reboot when storage is ready</font></i>

LOGFILE=<font color="#808080">"/var/log/carp-auto-failback.log"</font>
MARKER_FILE=<font color="#808080">"/data/nfs/nfs.DO_NOT_REMOVE"</font>
BLOCK_FILE=<font color="#808080">"/data/nfs/nfs.NO_AUTO_FAILBACK"</font>

log_message() {
    echo <font color="#808080">"$(date '+%Y-%m-%d %H:%M:%S') - $1"</font> &gt;&gt; <font color="#808080">"$LOGFILE"</font>
}

<i><font color="silver"># Check if we're already MASTER</font></i>
CURRENT_STATE=$(/usr/local/bin/carp | awk <font color="#808080">'{print $NF}'</font>)
<b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$CURRENT_STATE"</font> = <font color="#808080">"MASTER"</font> ]; <b><u><font color="#000000">then</font></u></b>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">0</font>
<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># Check if /data/nfs is mounted</font></i>
<b><u><font color="#000000">if</font></u></b> ! mount | grep -q <font color="#808080">"on /data/nfs "</font>; <b><u><font color="#000000">then</font></u></b>
    log_message <font color="#808080">"SKIP: /data/nfs not mounted"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">0</font>
<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># Check if the marker file exists</font></i>
<i><font color="silver"># (identifies that the ZFS data set is properly mounted)</font></i>
<b><u><font color="#000000">if</font></u></b> [ ! -f <font color="#808080">"$MARKER_FILE"</font> ]; <b><u><font color="#000000">then</font></u></b>
    log_message <font color="#808080">"SKIP: Marker file $MARKER_FILE not found"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">0</font>
<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># Check if failback is blocked (for maintenance)</font></i>
<b><u><font color="#000000">if</font></u></b> [ -f <font color="#808080">"$BLOCK_FILE"</font> ]; <b><u><font color="#000000">then</font></u></b>
    log_message <font color="#808080">"SKIP: Failback blocked by $BLOCK_FILE"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">0</font>
<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># All conditions met - promote to MASTER</font></i>
log_message <font color="#808080">"CONDITIONS MET: Promoting to MASTER (was $CURRENT_STATE)"</font>
/usr/local/bin/carp master

<i><font color="silver"># Log result</font></i>
sleep <font color="#000000">2</font>
NEW_STATE=$(/usr/local/bin/carp | awk <font color="#808080">'{print $NF}'</font>)
log_message <font color="#808080">"Failback complete: State is now $NEW_STATE"</font>

<i><font color="silver"># If successful, log to the system log too</font></i>
<b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$NEW_STATE"</font> = <font color="#808080">"MASTER"</font> ]; <b><u><font color="#000000">then</font></u></b>
    logger <font color="#808080">"CARP: f0 automatically reclaimed MASTER role"</font>
<b><u><font color="#000000">fi</font></u></b>
EOF

paul@f0:~ % doas chmod +x /usr/local/bin/carp-auto-failback.sh
</pre>
<br />
<span>The marker file indicates that the ZFS dataset is mounted correctly. We create it with:</span><br />
<br />
<pre>paul@f0:~ % doas touch /data/nfs/nfs.DO_NOT_REMOVE
</pre>
<br />
<span>We add a cron job to check every minute:</span><br />
<br />
<pre>paul@f0:~ % echo <font color="#808080">"* * * * * /usr/local/bin/carp-auto-failback.sh"</font> | doas crontab -
</pre>
<br />
<span>The enhanced CARP script provides integrated control over auto-failback. To temporarily turn off automatic failback (e.g., for <span class='inlinecode'>f0</span> maintenance), we run:</span><br />
<br />
<pre>paul@f0:~ % doas carp auto-failback disable
Auto-failback DISABLED (created /data/nfs/nfs.NO_AUTO_FAILBACK)
</pre>
<br />
<span>And to re-enable it:</span><br />
<br />
<pre>paul@f0:~ % doas carp auto-failback <b><u><font color="#000000">enable</font></u></b>
Auto-failback ENABLED (removed /data/nfs/nfs.NO_AUTO_FAILBACK)
</pre>
<br />
<span>To check whether auto-failback is enabled, we run:</span><br />
<br />
<pre>paul@f0:~ % doas carp
CARP state on re0 (vhid <font color="#000000">1</font>): MASTER
<i><font color="silver"># If disabled, you'll see: WARNING: Auto-failback is DISABLED</font></i>
</pre>
<br />
<span>The failback attempts are logged to <span class='inlinecode'>/var/log/carp-auto-failback.log</span>!</span><br />
<br />
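<span>One caveat of the logging: while <span class='inlinecode'>f0</span> stays BACKUP (e.g. during longer maintenance), the cron job appends a SKIP line every minute, so the log grows unbounded. A sketch of a newsyslog rotation entry for it (assumed count and size values, adjust to taste):</span><br />
<br />

```
# /etc/newsyslog.conf.d/carp-auto-failback.conf (assumed values)
# logfile                          mode count size when flags
/var/log/carp-auto-failback.log    644  7     100  *    JC
```

<span>The <span class='inlinecode'>J</span> flag compresses rotated files with bzip2, and <span class='inlinecode'>C</span> creates the log file if it does not exist yet.</span><br />
<br />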
<span>So, in summary:</span><br />
<br />
<ul>
<li>After <span class='inlinecode'>f0</span> reboots: <span class='inlinecode'>f1</span> is MASTER, and <span class='inlinecode'>f0</span> boots as BACKUP</li>
<li>Cron runs every minute and checks whether all conditions are met: Is <span class='inlinecode'>f0</span> currently BACKUP? (don&#39;t run if already MASTER) Is /data/nfs mounted? (ZFS datasets are ready) Does the marker file exist? (confirms this is the primary storage) Is failback unblocked? (the admin can prevent failback)</li>
<li>Failback occurs: typically 2-3 minutes after boot completes</li>
<li>Logging: all attempts are logged for troubleshooting</li>
</ul><br />
<span>This ensures <span class='inlinecode'>f0</span> automatically resumes its role as primary storage server after any reboot, while providing administrative control when needed.</span><br />
<br />
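<span>The decision chain above can be sketched as a series of guard clauses. This is an illustrative sketch, not the deployed script: the checks are stubbed here, whereas the real ones parse <span class='inlinecode'>ifconfig</span> and <span class='inlinecode'>mount</span> output and test for the marker and block files:</span><br />
<br />

```shell
#!/bin/sh
# Sketch of the auto-failback guard chain with stubbed checks.
# The real script parses ifconfig/mount output and tests the
# marker and block files under /data/nfs.

current_state() { echo "BACKUP"; }  # stub: real check parses ifconfig output
nfs_mounted()   { true; }           # stub: mount | grep -q "on /data/nfs "
marker_exists() { true; }           # stub: [ -f /data/nfs/nfs.DO_NOT_REMOVE ]
blocked()       { false; }          # stub: [ -f /data/nfs/nfs.NO_AUTO_FAILBACK ]

should_failback() {
    [ "$(current_state)" = "BACKUP" ] || return 1  # already MASTER: nothing to do
    nfs_mounted   || return 1                      # dataset not mounted yet
    marker_exists || return 1                      # wrong or empty dataset mounted
    blocked && return 1                            # admin disabled failback
    return 0
}

if should_failback; then
    echo "all conditions met: would promote to MASTER"
else
    echo "conditions not met: would skip"
fi
```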
<h2 style='display: inline' id='client-configuration-for-nfs-via-stunnel'>Client Configuration for NFS via Stunnel</h2><br />
<br />
<span>To mount NFS shares with stunnel encryption, clients must install and configure stunnel using their client certificates.</span><br />
<br />
<h3 style='display: inline' id='configuring-rocky-linux-clients-r0-r1-r2'>Configuring Rocky Linux Clients (<span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, <span class='inlinecode'>r2</span>)</h3><br />
<br />
<span>On the Rocky Linux VMs, we run:</span><br />
<br />
<pre><i><font color="silver"># Install stunnel on client (example for `r0`)</font></i>
[root@r0 ~]<i><font color="silver"># dnf install -y stunnel nfs-utils</font></i>

<i><font color="silver"># Copy client certificate and CA certificate from f0</font></i>
[root@r0 ~]<i><font color="silver"># scp f0:/usr/local/etc/stunnel/ca/r0-stunnel.pem /etc/stunnel/</font></i>
[root@r0 ~]<i><font color="silver"># scp f0:/usr/local/etc/stunnel/ca/ca-cert.pem /etc/stunnel/</font></i>

<i><font color="silver"># Configure stunnel client with certificate authentication</font></i>
[root@r0 ~]<i><font color="silver"># tee /etc/stunnel/stunnel.conf &lt;&lt;'EOF'</font></i>
cert = /etc/stunnel/r<font color="#000000">0</font>-stunnel.pem
CAfile = /etc/stunnel/ca-cert.pem
client = yes
verify = <font color="#000000">2</font>

[nfs-ha]
accept = <font color="#000000">127.0</font>.<font color="#000000">0.1</font>:<font color="#000000">2323</font>
connect = <font color="#000000">192.168</font>.<font color="#000000">1.138</font>:<font color="#000000">2323</font>
EOF

<i><font color="silver"># Enable and start stunnel</font></i>
[root@r0 ~]<i><font color="silver"># systemctl enable --now stunnel</font></i>

<i><font color="silver"># Repeat for r1 and r2 with their respective certificates</font></i>
</pre>
<br />
<span>Note: Each client must use its own certificate file (<span class='inlinecode'>r0-stunnel.pem</span>, <span class='inlinecode'>r1-stunnel.pem</span>, <span class='inlinecode'>r2-stunnel.pem</span>, or <span class='inlinecode'>earth-stunnel.pem</span>; the latter is for my laptop, which can also mount the NFS shares).</span><br />
<br />
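<span>Since only the certificate path differs between the clients, the per-client <span class='inlinecode'>stunnel.conf</span> can be generated from a hostname. A minimal sketch (hypothetical helper mirroring the config shown above; the accept/connect addresses are assumed to match this setup):</span><br />
<br />

```shell
#!/bin/sh
# Hypothetical helper: emit a client stunnel.conf for a given hostname,
# mirroring the hand-written config above. Redirect the output to
# /etc/stunnel/stunnel.conf on the respective client.
gen_stunnel_conf() {
    host="$1"
    cat <<EOF
cert = /etc/stunnel/${host}-stunnel.pem
CAfile = /etc/stunnel/ca-cert.pem
client = yes
verify = 2

[nfs-ha]
accept = 127.0.0.1:2323
connect = 192.168.1.138:2323
EOF
}

gen_stunnel_conf r1   # same again for r2 and earth
```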
<h3 style='display: inline' id='nfsv4-user-mapping-config-on-rocky'>NFSv4 user mapping config on Rocky</h3><br />
<br />
<span class='quote'>Update: This section was added 08.08.2025!</span><br />
<br />
<span>For this, we need to set the <span class='inlinecode'>Domain</span> in <span class='inlinecode'>/etc/idmapd.conf</span> on all three Rocky hosts to <span class='inlinecode'>lan.buetow.org</span> (remember, earlier in this blog post we set the <span class='inlinecode'>nfsuserd</span> domain on the NFS server side to <span class='inlinecode'>lan.buetow.org</span> as well):</span><br />
<br />
<pre>
[General]

Domain = lan.buetow.org
.
.
.
</pre>
<br />
<span>We also need to increase the inotify limit; otherwise, nfs-idmapd may fail to start with "Too many open files":</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># echo 'fs.inotify.max_user_instances = 512' &gt; /etc/sysctl.d/99-inotify.conf</font></i>
[root@r0 ~]<i><font color="silver"># sysctl -w fs.inotify.max_user_instances=512</font></i>
</pre>
<br />
<span>And afterwards, we need to run the following on all 3 Rocky hosts:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># systemctl start nfs-idmapd</font></i>
[root@r0 ~]<i><font color="silver"># systemctl enable --now nfs-client.target</font></i>
</pre>
<br />
<span>and then, to be safe, reboot those hosts.</span><br />
<br />
<h3 style='display: inline' id='testing-nfs-mount-with-stunnel'>Testing NFS Mount with Stunnel</h3><br />
<br />
<span>To mount NFS through the stunnel encrypted tunnel, we run:</span><br />
<br />
<pre><i><font color="silver"># Create a mount point</font></i>
[root@r0 ~]<i><font color="silver"># mkdir -p /data/nfs/k3svolumes</font></i>

<i><font color="silver"># Mount through stunnel (using localhost and NFSv4)</font></i>
[root@r0 ~]<i><font color="silver"># mount -t nfs4 -o port=2323 127.0.0.1:/k3svolumes /data/nfs/k3svolumes</font></i>

<i><font color="silver"># Verify mount</font></i>
[root@r0 ~]<i><font color="silver"># mount | grep k3svolumes</font></i>
<font color="#000000">127.0</font>.<font color="#000000">0.1</font>:/k3svolumes on /data/nfs/k3svolumes 
  <b><u><font color="#000000">type</font></u></b> nfs4 (rw,relatime,vers=<font color="#000000">4.2</font>,rsize=<font color="#000000">131072</font>,wsize=<font color="#000000">131072</font>,
  namlen=<font color="#000000">255</font>,hard,proto=tcp,port=<font color="#000000">2323</font>,timeo=<font color="#000000">600</font>,retrans=<font color="#000000">2</font>,sec=sys,
  clientaddr=<font color="#000000">127.0</font>.<font color="#000000">0.1</font>,local_lock=none,addr=<font color="#000000">127.0</font>.<font color="#000000">0.1</font>)

<i><font color="silver"># For persistent mount, add to /etc/fstab:</font></i>
<font color="#000000">127.0</font>.<font color="#000000">0.1</font>:/k3svolumes /data/nfs/k3svolumes nfs4 port=<font color="#000000">2323</font>,_netdev,soft,timeo=<font color="#000000">10</font>,retrans=<font color="#000000">2</font>,intr <font color="#000000">0</font> <font color="#000000">0</font>
</pre>
<br />
<span>Note: The mount uses localhost (<span class='inlinecode'>127.0.0.1</span>) because stunnel is listening locally and forwarding the encrypted traffic to the remote server.</span><br />
<br />
<h3 style='display: inline' id='testing-carp-failover-with-mounted-clients-and-stale-file-handles'>Testing CARP Failover with Mounted Clients and Stale File Handles</h3><br />
<br />
<span>To test the failover process:</span><br />
<br />
<pre><i><font color="silver"># On f0 (current MASTER) - trigger failover</font></i>
paul@f0:~ % doas ifconfig re0 vhid <font color="#000000">1</font> state backup

<i><font color="silver"># On f1 - verify it becomes MASTER</font></i>
paul@f1:~ % ifconfig re0 | grep carp
    inet <font color="#000000">192.168</font>.<font color="#000000">1.138</font> netmask <font color="#000000">0xffffffff</font> broadcast <font color="#000000">192.168</font>.<font color="#000000">1.138</font> vhid <font color="#000000">1</font>

<i><font color="silver"># Check stunnel is now listening on f1</font></i>
paul@f1:~ % doas sockstat -l | grep <font color="#000000">2323</font>
stunnel  stunnel    <font color="#000000">4567</font>  <font color="#000000">3</font>  tcp4   <font color="#000000">192.168</font>.<font color="#000000">1.138</font>:<font color="#000000">2323</font>    *:*

<i><font color="silver"># On client - verify NFS mount still works</font></i>
[root@r0 ~]<i><font color="silver"># ls /data/nfs/k3svolumes/</font></i>
[root@r0 ~]<i><font color="silver"># echo "Test after failover" &gt; /data/nfs/k3svolumes/failover-test.txt</font></i>
</pre>
<br />
<span>After a CARP failover, NFS clients may experience "Stale file handle" errors because they cached file handles from the previous server. To resolve this manually, we can run:</span><br />
<br />
<pre><i><font color="silver"># Force unmount and remount</font></i>
[root@r0 ~]<i><font color="silver"># umount -f /data/nfs/k3svolumes</font></i>
[root@r0 ~]<i><font color="silver"># mount /data/nfs/k3svolumes</font></i>
</pre>
<br />
<span>For automatic recovery, we create a script:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># cat &gt; /usr/local/bin/check-nfs-mount.sh &lt;&lt; 'EOF'</font></i>
<i><font color="silver">#!/bin/bash</font></i>
<i><font color="silver"># Fast NFS mount health monitor - runs every 10 seconds via systemd timer</font></i>

MOUNT_POINT=<font color="#808080">"/data/nfs/k3svolumes"</font>
LOCK_FILE=<font color="#808080">"/var/run/nfs-mount-check.lock"</font>

<i><font color="silver"># Use a lock file to prevent concurrent runs</font></i>
<b><u><font color="#000000">if</font></u></b> [ -f <font color="#808080">"$LOCK_FILE"</font> ]; <b><u><font color="#000000">then</font></u></b>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">0</font>
<b><u><font color="#000000">fi</font></u></b>
touch <font color="#808080">"$LOCK_FILE"</font>
<b><u><font color="#000000">trap</font></u></b> <font color="#808080">"rm -f $LOCK_FILE"</font> EXIT

MOUNT_FIXED=<font color="#000000">0</font>

fix_mount () {
    echo <font color="#808080">"Attempting to remount NFS mount $MOUNT_POINT"</font>
    <b><u><font color="#000000">if</font></u></b> mount -o remount -f <font color="#808080">"$MOUNT_POINT"</font> <font color="#000000">2</font>&gt;/dev/null; <b><u><font color="#000000">then</font></u></b>
        echo <font color="#808080">"Remount command issued for $MOUNT_POINT"</font>
    <b><u><font color="#000000">else</font></u></b>
        echo <font color="#808080">"Failed to remount NFS mount $MOUNT_POINT"</font>
    <b><u><font color="#000000">fi</font></u></b>

    echo <font color="#808080">"Checking if $MOUNT_POINT is a mountpoint"</font>
    <b><u><font color="#000000">if</font></u></b> mountpoint <font color="#808080">"$MOUNT_POINT"</font> &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>; <b><u><font color="#000000">then</font></u></b>
        echo <font color="#808080">"$MOUNT_POINT is a valid mountpoint"</font>
    <b><u><font color="#000000">else</font></u></b>
        echo <font color="#808080">"$MOUNT_POINT is not a valid mountpoint, attempting mount"</font>
        <b><u><font color="#000000">if</font></u></b> mount <font color="#808080">"$MOUNT_POINT"</font>; <b><u><font color="#000000">then</font></u></b>
            echo <font color="#808080">"Successfully mounted $MOUNT_POINT"</font>
            MOUNT_FIXED=<font color="#000000">1</font>
            <b><u><font color="#000000">return</font></u></b>
        <b><u><font color="#000000">else</font></u></b>
            echo <font color="#808080">"Failed to mount $MOUNT_POINT"</font>
        <b><u><font color="#000000">fi</font></u></b>
    <b><u><font color="#000000">fi</font></u></b>

    echo <font color="#808080">"Attempting to unmount $MOUNT_POINT"</font>
    <b><u><font color="#000000">if</font></u></b> umount -f <font color="#808080">"$MOUNT_POINT"</font> <font color="#000000">2</font>&gt;/dev/null; <b><u><font color="#000000">then</font></u></b>
        echo <font color="#808080">"Successfully unmounted $MOUNT_POINT"</font>
    <b><u><font color="#000000">else</font></u></b>
        echo <font color="#808080">"Failed to unmount $MOUNT_POINT (it might not be mounted)"</font>
    <b><u><font color="#000000">fi</font></u></b>

    echo <font color="#808080">"Attempting to mount $MOUNT_POINT"</font>
    <b><u><font color="#000000">if</font></u></b> mount <font color="#808080">"$MOUNT_POINT"</font>; <b><u><font color="#000000">then</font></u></b>
        echo <font color="#808080">"NFS mount $MOUNT_POINT mounted successfully"</font>
        MOUNT_FIXED=<font color="#000000">1</font>
        <b><u><font color="#000000">return</font></u></b>
    <b><u><font color="#000000">else</font></u></b>
        echo <font color="#808080">"Failed to mount NFS mount $MOUNT_POINT"</font>
    <b><u><font color="#000000">fi</font></u></b>

    echo <font color="#808080">"Failed to fix NFS mount $MOUNT_POINT"</font>
    <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
}

<b><u><font color="#000000">if</font></u></b> ! mountpoint <font color="#808080">"$MOUNT_POINT"</font> &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>; <b><u><font color="#000000">then</font></u></b>
    echo <font color="#808080">"NFS mount $MOUNT_POINT not found"</font>
    fix_mount
<b><u><font color="#000000">fi</font></u></b>

<b><u><font color="#000000">if</font></u></b> ! timeout 2s stat <font color="#808080">"$MOUNT_POINT"</font> &gt;/dev/null <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>; <b><u><font color="#000000">then</font></u></b>
    echo <font color="#808080">"NFS mount $MOUNT_POINT appears to be unresponsive"</font>
    fix_mount
<b><u><font color="#000000">fi</font></u></b>

<i><font color="silver"># After a successful remount, delete pods stuck on this node</font></i>
<b><u><font color="#000000">if</font></u></b> [ <font color="#808080">"$MOUNT_FIXED"</font> -eq <font color="#000000">1</font> ]; <b><u><font color="#000000">then</font></u></b>
    echo <font color="#808080">"Mount was fixed, checking for stuck pods on this node..."</font>
    NODE=$(hostname)
    <b><u><font color="#000000">export</font></u></b> KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    kubectl get pods --all-namespaces \
      --field-selector=<font color="#808080">"spec.nodeName=$NODE"</font> \
      -o json <font color="#000000">2</font>&gt;/dev/null | jq -r <font color="#808080">'</font>
<font color="#808080">        .items[] |</font>
<font color="#808080">        select(</font>
<font color="#808080">          .status.phase == "Unknown" or</font>
<font color="#808080">          .status.phase == "Pending" or</font>
<font color="#808080">          (.status.conditions // [] |</font>
<font color="#808080">            any(.type == "Ready" and .status == "False")) or</font>
<font color="#808080">          (.status.containerStatuses // [] |</font>
<font color="#808080">            any(.state.waiting.reason == "ContainerCreating"))</font>
<font color="#808080">        ) | "</font>\(<font color="#808080">.metadata.namespace) </font>\(<font color="#808080">.metadata.name)"'</font> | \
      <b><u><font color="#000000">while</font></u></b> <b><u><font color="#000000">read</font></u></b> ns pod; <b><u><font color="#000000">do</font></u></b>
        echo <font color="#808080">"Deleting stuck pod $ns/$pod"</font>
        kubectl delete pod -n <font color="#808080">"$ns"</font> <font color="#808080">"$pod"</font> \
          --grace-period=<font color="#000000">0</font> --force <font color="#000000">2</font>&gt;&amp;<font color="#000000">1</font>
      <b><u><font color="#000000">done</font></u></b>
<b><u><font color="#000000">fi</font></u></b>
EOF

[root@r0 ~]<i><font color="silver"># chmod +x /usr/local/bin/check-nfs-mount.sh</font></i>
</pre>
<br />
<span>And we create the systemd service as follows:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># cat &gt; /etc/systemd/system/nfs-mount-monitor.service &lt;&lt; 'EOF'</font></i>
[Unit]
Description=NFS Mount Health Monitor
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/check-nfs-mount.sh
StandardOutput=journal
StandardError=journal
EOF
</pre>
<br />
<span>And we also create the systemd timer (runs every 10 seconds):</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># cat &gt; /etc/systemd/system/nfs-mount-monitor.timer &lt;&lt; 'EOF'</font></i>
[Unit]
Description=Run NFS Mount Health Monitor every <font color="#000000">10</font> seconds
Requires=nfs-mount-monitor.service

[Timer]
OnBootSec=30s
OnUnitActiveSec=10s
AccuracySec=1s

[Install]
WantedBy=timers.target
EOF
</pre>
<br />
<span>To enable and start the timer, we run:</span><br />
<br />
<pre>[root@r0 ~]<i><font color="silver"># systemctl daemon-reload</font></i>
[root@r0 ~]<i><font color="silver"># systemctl enable nfs-mount-monitor.timer</font></i>
[root@r0 ~]<i><font color="silver"># systemctl start nfs-mount-monitor.timer</font></i>

<i><font color="silver"># Check status</font></i>
[root@r0 ~]<i><font color="silver"># systemctl status nfs-mount-monitor.timer</font></i>
● nfs-mount-monitor.timer - Run NFS Mount Health Monitor every <font color="#000000">10</font> seconds
     Loaded: loaded (/etc/systemd/system/nfs-mount-monitor.timer; enabled)
     Active: active (waiting) since Sat <font color="#000000">2025</font>-<font color="#000000">07</font>-<font color="#000000">06</font> <font color="#000000">10</font>:<font color="#000000">00</font>:<font color="#000000">00</font> EEST
    Trigger: Sat <font color="#000000">2025</font>-<font color="#000000">07</font>-<font color="#000000">06</font> <font color="#000000">10</font>:<font color="#000000">00</font>:<font color="#000000">10</font> EEST; 8s left

<i><font color="silver"># Monitor logs</font></i>
[root@r0 ~]<i><font color="silver"># journalctl -u nfs-mount-monitor -f</font></i>
</pre>
<br />
<span>Note: Stale file handles are inherent to NFS failover because file handles are server-specific. The best approach depends on your application&#39;s tolerance for brief disruptions. Of course, all the changes made to <span class='inlinecode'>r0</span> above must also be applied to <span class='inlinecode'>r1</span> and <span class='inlinecode'>r2</span>.</span><br />
<br />
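<span>To avoid applying the monitor script and systemd units to <span class='inlinecode'>r1</span> and <span class='inlinecode'>r2</span> by hand, a small dry-run loop can print the required commands (a sketch with the paths used above; drop the <span class='inlinecode'>echo</span> or pipe the output to <span class='inlinecode'>sh</span> to actually run them):</span><br />
<br />

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would sync the NFS mount
# monitor from r0 to the other Rocky hosts (paths as set up above).
print_sync_commands() {
    for host in "$@"; do
        for f in /usr/local/bin/check-nfs-mount.sh \
                 /etc/systemd/system/nfs-mount-monitor.service \
                 /etc/systemd/system/nfs-mount-monitor.timer; do
            echo "scp $f $host:$f"
        done
        echo "ssh $host 'systemctl daemon-reload && systemctl enable --now nfs-mount-monitor.timer'"
    done
}

print_sync_commands r1 r2
```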
<span class='quote'>Updated Wed 19 Mar 2026: Added automatic pod restart after NFS remount</span><br />
<br />
<span>The script now also tracks whether a mount was fixed via the <span class='inlinecode'>MOUNT_FIXED</span> variable. After a successful remount, it queries kubectl for pods on the local node that are stuck in <span class='inlinecode'>Unknown</span>, <span class='inlinecode'>Pending</span>, or <span class='inlinecode'>ContainerCreating</span> state and force-deletes them. Kubernetes then automatically reschedules these pods, which will now succeed because the NFS mount is healthy again. Without this, pods that hit a stale mount would remain broken until manually deleted, even after the underlying NFS issue was resolved.</span><br />
<br />
<h3 style='display: inline' id='complete-failover-test'>Complete Failover Test</h3><br />
<br />
<span>Here&#39;s a comprehensive test of the failover behaviour with all optimisations in place:</span><br />
<br />
<pre><i><font color="silver"># 1. Check the initial state</font></i>
paul@f0:~ % ifconfig re0 | grep carp
    carp: MASTER vhid <font color="#000000">1</font> advbase <font color="#000000">1</font> advskew <font color="#000000">0</font>
paul@f1:~ % ifconfig re0 | grep carp
    carp: BACKUP vhid <font color="#000000">1</font> advbase <font color="#000000">1</font> advskew <font color="#000000">100</font>

<i><font color="silver"># 2. Create a test file from a client</font></i>
[root@r0 ~]<i><font color="silver"># echo "test before failover" &gt; /data/nfs/k3svolumes/test-before.txt</font></i>

<i><font color="silver"># 3. Trigger failover (f0 → f1)</font></i>
paul@f0:~ % doas ifconfig re0 vhid <font color="#000000">1</font> state backup

<i><font color="silver"># 4. Monitor client behaviour</font></i>
[root@r0 ~]<i><font color="silver"># ls /data/nfs/k3svolumes/</font></i>
ls: cannot access <font color="#808080">'/data/nfs/k3svolumes/'</font>: Stale file handle

<i><font color="silver"># 5. Check automatic recovery (within 10 seconds)</font></i>
[root@r0 ~]<i><font color="silver"># journalctl -u nfs-mount-monitor -f</font></i>
Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color="#000000">15</font>:<font color="#000000">32</font> r0 nfs-monitor[<font color="#000000">1234</font>]: NFS mount unhealthy detected at \
  Sun Jul <font color="#000000">6</font> <font color="#000000">10</font>:<font color="#000000">15</font>:<font color="#000000">32</font> EEST <font color="#000000">2025</font>
Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color="#000000">15</font>:<font color="#000000">32</font> r0 nfs-monitor[<font color="#000000">1234</font>]: Attempting to fix stale NFS mount at \
  Sun Jul <font color="#000000">6</font> <font color="#000000">10</font>:<font color="#000000">15</font>:<font color="#000000">32</font> EEST <font color="#000000">2025</font>
Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color="#000000">15</font>:<font color="#000000">33</font> r0 nfs-monitor[<font color="#000000">1234</font>]: NFS mount fixed at \
  Sun Jul <font color="#000000">6</font> <font color="#000000">10</font>:<font color="#000000">15</font>:<font color="#000000">33</font> EEST <font color="#000000">2025</font>
</pre>
<br />
<span>Failover Timeline:</span><br />
<br />
<ul>
<li>0 seconds: CARP failover triggered</li>
<li>0-2 seconds: Clients get "Stale file handle" errors (not hanging)</li>
<li>3-10 seconds: Soft mounts ensure quick failure of operations</li>
<li>Within 10 seconds: Automatic recovery via systemd timer</li>
</ul><br />
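<span>The quick-failure behaviour above is driven by the client-side NFS mount options. A minimal sketch of an /etc/fstab entry on the Linux clients (the VIP address and timeout values are placeholders, tune them for your network):</span><br />
<br />
<pre># Soft mount: an operation fails after retrans retries of timeo tenths
# of a second each, instead of hanging forever on a dead server.
10.0.0.10:/data/nfs/k3svolumes  /data/nfs/k3svolumes  nfs  soft,timeo=50,retrans=2,noatime  0  0
</pre>
<br />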
<span>Benefits of the Optimised Setup:</span><br />
<br />
<ul>
<li>No hanging processes - Soft mounts fail quickly</li>
<li>Clean failover - Old server stops serving immediately</li>
<li>Fast automatic recovery - No manual intervention needed</li>
<li>Predictable timing - Recovery within 10 seconds with systemd timer</li>
<li>Better visibility - systemd journal provides detailed logs</li>
</ul><br />
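<span>The 10-second recovery window can be implemented with a systemd service/timer pair along these lines (a sketch only - the unit name follows the journal output above, and the path of the monitor script is a placeholder):</span><br />
<br />
<pre># /etc/systemd/system/nfs-mount-monitor.service
[Unit]
Description=Check and repair a stale NFS mount

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nfs-mount-monitor.sh

# /etc/systemd/system/nfs-mount-monitor.timer
[Unit]
Description=Run the NFS mount check every 10 seconds

[Timer]
OnBootSec=10s
OnUnitActiveSec=10s

[Install]
WantedBy=timers.target
</pre>
<br />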
<span>Important Considerations:</span><br />
<br />
<ul>
<li>Recent writes (within 1 minute) may not be visible after failover due to replication lag</li>
<li>Applications should handle brief NFS errors gracefully</li>
<li>For zero-downtime requirements, consider synchronous replication or distributed storage (see "Future storage explorations" section later in this blog post)</li>
</ul><br />
<h2 style='display: inline' id='update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</h2><br />
<br />
<span class='quote'>Update 2026-01-27: I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node!</span><br />
<br />
<h3 style='display: inline' id='upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</h3><br />
<br />
<span>Since f1 is the replication sink, the upgrade was straightforward:</span><br />
<br />
<ul>
<li>1. Physically replaced the 1TB drive with the 4TB drive</li>
<li>2. Re-setup the drive as described earlier in this blog post</li>
<li>3. Re-replicated all data from f0 to f1 via zrepl</li>
<li>4. Reloaded the encryption keys as described in this blog post</li>
<li>5. Set the mount point again for the encrypted dataset, explicitly as read-only (since f1 is the replication sink)</li>
</ul><br />
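<span>Steps 4 and 5 can be sketched as follows (the dataset name zdata/encrypted and the mount point are placeholders standing in for the names used earlier in this post):</span><br />
<br />
<pre># Reload the encryption key and mount the replicated dataset read-only
paul@f1:~ % doas zfs load-key zdata/encrypted
paul@f1:~ % doas zfs set readonly=on zdata/encrypted
paul@f1:~ % doas zfs set mountpoint=/data zdata/encrypted
paul@f1:~ % doas zfs mount zdata/encrypted
</pre>
<br />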
<h3 style='display: inline' id='upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</h3><br />
<br />
<span>For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:</span><br />
<br />
<ul>
<li>1. Plugged the new 4TB drive into an external USB SSD drive reader</li>
<li>2. Attached the 4TB drive to the zdata pool for resilvering</li>
<li>3. Once resilvering completed, detached the 1TB drive from the zdata pool</li>
<li>4. Shut down f0 and physically replaced the internal drive</li>
<li>5. Booted with the new drive in place</li>
<li>6. Expanded the pool to use the full 4TB capacity:</li>
</ul><br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas zpool online -e zdata ada<font color="#000000">1</font>
</pre>
<br />
<ul>
<li>7. Reloaded the encryption keys as described in this blog post</li>
<li>8. Set the mount point again for the encrypted dataset</li>
</ul><br />
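<span>The resilvering sequence from steps 2 and 3 boils down to something like this (device names are examples - here da1 is the 4TB drive in the USB reader and ada1 the old 1TB drive):</span><br />
<br />
<pre># Attach the new drive as a mirror of the old one and let ZFS resilver
paul@f0:~ % doas zpool attach zdata ada1 da1
paul@f0:~ % doas zpool status zdata   # wait until resilvering has completed
# Afterwards, remove the old 1TB drive from the pool
paul@f0:~ % doas zpool detach zdata ada1
</pre>
<br />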
<span>This was a one-time effort on both nodes - after a reboot, all settings persisted and the pools came up normally. Here are the updated outputs:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zdata  <font color="#000000">3</font>.63T   677G  <font color="#000000">2</font>.97T        -         -     <font color="#000000">3</font>%    <font color="#000000">18</font>%  <font color="#000000">1</font>.00x    ONLINE  -
zroot   472G  <font color="#000000">68</font>.4G   404G        -         -    <font color="#000000">13</font>%    <font color="#000000">14</font>%  <font color="#000000">1</font>.00x    ONLINE  -

paul@f0:~ % doas camcontrol devlist
&lt;512GB SSD D910R170&gt;               at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
&lt;SD Ultra 3D 4TB 530500WD&gt;         at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
&lt;Generic Flash Disk <font color="#000000">8.07</font>&gt;          at scbus2 target <font color="#000000">0</font> lun <font color="#000000">0</font> (da0,pass2)
</pre>
<br />
<span>Note that f1 uses a different SSD model (WD Blue SA510 4TB) to reduce the risk of both drives failing simultaneously:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f1:~ % doas camcontrol devlist
&lt;512GB SSD D910R170&gt;               at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
&lt;WD Blue SA510 <font color="#000000">2.5</font> 4TB 530500WD&gt;   at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
&lt;Generic Flash Disk <font color="#000000">8.07</font>&gt;          at scbus2 target <font color="#000000">0</font> lun <font color="#000000">0</font> (da0,pass2)
</pre>
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>We&#39;ve built a robust, encrypted storage system for our FreeBSD-based Kubernetes cluster that provides:</span><br />
<br />
<ul>
<li>High Availability: CARP ensures the storage VIP moves automatically during failures</li>
<li>Data Protection: ZFS encryption protects data at rest, stunnel protects data in transit</li>
<li>Continuous Replication: 1-minute RPO for the data, automated via <span class='inlinecode'>zrepl</span></li>
<li>Secure Access: Client certificate authentication prevents unauthorised access</li>
</ul><br />
<span>Some key lessons learned are:</span><br />
<br />
<ul>
<li>Stunnel vs Native NFS/TLS: While native encryption would be ideal, stunnel provides better cross-platform compatibility</li>
<li>Manual vs Automatic Failover: For storage systems, controlled failover often prevents more problems than it causes</li>
<li>Client Compatibility: Different NFS implementations behave differently - test thoroughly</li>
</ul><br />
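<span>For illustration, the stunnel approach boils down to a client-side configuration along these lines (the port numbers, certificate paths and VIP address are placeholders, not the actual values used in this setup):</span><br />
<br />
<pre>; Forward plain NFS from localhost to the storage VIP over TLS,
; authenticating with a client certificate
[nfs]
client = yes
accept = 127.0.0.1:2049
connect = 10.0.0.10:3049
cert = /etc/stunnel/client-cert.pem
key = /etc/stunnel/client-key.pem
CAfile = /etc/stunnel/ca.pem
verifyChain = yes
</pre>
<br />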
<h2 style='display: inline' id='future-storage-explorations'>Future Storage Explorations</h2><br />
<br />
<span>While <span class='inlinecode'>zrepl</span> provides excellent snapshot-based replication for disaster recovery, there are other storage technologies worth exploring for the f3s project:</span><br />
<br />
<h3 style='display: inline' id='minio-for-s3-compatible-object-storage'>MinIO for S3-Compatible Object Storage</h3><br />
<br />
<span>MinIO is a high-performance, S3-compatible object storage system that could complement our ZFS-based storage. Some potential use cases:</span><br />
<br />
<ul>
<li>S3 API compatibility: Many modern applications expect S3-style object storage APIs. MinIO could provide this interface while using our ZFS storage as the backend.</li>
<li>Multi-site replication: MinIO supports active-active replication across multiple sites, which could work well with our f0/f1/f2 node setup.</li>
<li>Kubernetes native: MinIO has excellent Kubernetes integration with operators and CSI drivers, making it ideal for the f3s k3s environment.</li>
</ul><br />
<h3 style='display: inline' id='moosefs-for-distributed-high-availability'>MooseFS for Distributed High Availability</h3><br />
<br />
<span>MooseFS is a fault-tolerant, distributed file system that could provide proper high-availability storage:</span><br />
<br />
<ul>
<li>True HA: Unlike our current setup, which requires manual failover, MooseFS provides automatic failover with no single point of failure.</li>
<li>POSIX compliance: Applications can use MooseFS like any regular filesystem, no code changes needed.</li>
<li>Flexible redundancy: Configure different replication levels per directory or file, optimising storage efficiency.</li>
<li>FreeBSD support: MooseFS has native FreeBSD support, making it a natural fit for the f3s project.</li>
</ul><br />
<span>Both technologies could run on top of our encrypted ZFS volumes, combining ZFS&#39;s data integrity and encryption features with distributed storage capabilities. This would be particularly interesting for workloads that need either S3-compatible APIs (MinIO) or transparent distributed POSIX storage (MooseFS). What about Ceph and GlusterFS? Unfortunately, there doesn&#39;t seem to be great native FreeBSD support for them. However, the two alternatives above appear suitable for my use case.</span><br />
<br />
<span>Read the next post of this series:</span><br />
<br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Posts from January to June 2025</title>
        <link href="https://foo.zone/gemfeed/2025-07-01-posts-from-january-to-june-2025.html" />
        <id>https://foo.zone/gemfeed/2025-07-01-posts-from-january-to-june-2025.html</id>
        <updated>2025-07-01T22:39:29+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>These are my social media posts from the last six months. I keep them here to reflect on them and also to not lose them. Social media networks come and go and are not under my control, but my domain is here to stay. </summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='posts-from-january-to-june-2025'>Posts from January to June 2025</h1><br />
<br />
<span class='quote'>Published at 2025-07-01T22:39:29+03:00</span><br />
<br />
<span>These are my social media posts from the last six months. I keep them here to reflect on them and also to not lose them. Social media networks come and go and are not under my control, but my domain is here to stay. </span><br />
<br />
<span>These are from Mastodon and LinkedIn. Have a look at my about page for my social media profiles. This list is generated with Gos, my social media platform sharing tool.</span><br />
<br />
<a class='textlink' href='../about/index.html'>My about page</a><br />
<a class='textlink' href='https://codeberg.org/snonux/gos'>https://codeberg.org/snonux/gos</a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#posts-from-january-to-june-2025'>Posts from January to June 2025</a></li>
<li>⇢ <a href='#january-2025'>January 2025</a></li>
<li>⇢ ⇢ <a href='#i-am-currently-binge-listening-to-the-google-'>I am currently binge-listening to the Google ...</a></li>
<li>⇢ ⇢ <a href='#recently-there-was-a-5000-loc-bash-'>Recently, there was a &gt;5000 LOC <span class='inlinecode'>#bash</span> ...</a></li>
<li>⇢ ⇢ <a href='#ghostty-is-a-terminal-emulator-that-was-'>Ghostty is a terminal emulator that was ...</a></li>
<li>⇢ ⇢ <a href='#go-is-not-an-easy-programming-language-don-t-'>Go is not an easy programming language. Don&#39;t ...</a></li>
<li>⇢ ⇢ <a href='#how-will-ai-change-software-engineering-or-has-'>How will AI change software engineering (or has ...</a></li>
<li>⇢ ⇢ <a href='#eliminating-toil---toil-is-not-always-a-bad-'>Eliminating toil - Toil is not always a bad ...</a></li>
<li>⇢ ⇢ <a href='#fun-read-how-about-using-the-character-'>Fun read. How about using the character ...</a></li>
<li>⇢ ⇢ <a href='#thats-unexpected-you-cant-remove-a-nan-key-'>That&#39;s unexpected, you can&#39;t remove a NaN key ...</a></li>
<li>⇢ ⇢ <a href='#nice-refresher-for-shell-bash-zsh-'>Nice refresher for <span class='inlinecode'>#shell</span> <span class='inlinecode'>#bash</span> <span class='inlinecode'>#zsh</span> ...</a></li>
<li>⇢ ⇢ <a href='#i-think-discussing-action-items-in-incident-'>I think discussing action items in incident ...</a></li>
<li>⇢ ⇢ <a href='#at-first-functional-options-add-a-bit-of-'>At first, functional options add a bit of ...</a></li>
<li>⇢ ⇢ <a href='#in-the-working-with-an-sre-interview-i-have-'>In the "Working with an SRE Interview" I have ...</a></li>
<li>⇢ ⇢ <a href='#small-introduction-to-the-android-'>Small introduction to the <span class='inlinecode'>#Android</span> ...</a></li>
<li>⇢ ⇢ <a href='#helix-202501-has-been-released-the-completion-'>Helix 2025.01 has been released. The completion ...</a></li>
<li>⇢ ⇢ <a href='#i-found-these-are-excellent-examples-of-how-'>I found these are excellent examples of how ...</a></li>
<li>⇢ ⇢ <a href='#llms-for-ops-summaries-of-logs-probabilities-'>LLMs for Ops? Summaries of logs, probabilities ...</a></li>
<li>⇢ ⇢ <a href='#enjoying-an-apc-power-ups-bx750mi-in-my-'>Enjoying an APC Power-UPS BX750MI in my ...</a></li>
<li>⇢ ⇢ <a href='#even-in-the-projects-where-i-m-the-only-'>"Even in the projects where I&#39;m the only ...</a></li>
<li>⇢ ⇢ <a href='#connecting-an-ups-to-my-freebsd-cluster-'>Connecting an <span class='inlinecode'>#UPS</span> to my <span class='inlinecode'>#FreeBSD</span> cluster ...</a></li>
<li>⇢ ⇢ <a href='#so-the-co-founder-and-cto-of-honeycombio-and-'>So, the Co-founder and CTO of honeycomb.io and ...</a></li>
<li>⇢ <a href='#february-2025'>February 2025</a></li>
<li>⇢ ⇢ <a href='#i-don-t-know-about-you-but-at-work-i-usually-'>I don&#39;t know about you, but at work, I usually ...</a></li>
<li>⇢ ⇢ <a href='#great-proposal-got-accepted-by-the-goteam-for-'>Great proposal (got accepted by the Goteam) for ...</a></li>
<li>⇢ ⇢ <a href='#my-gemtexter-has-only-1320-loc-the-biggest-'>My Gemtexter has only 1320 LOC.... The Biggest ...</a></li>
<li>⇢ ⇢ <a href='#against-tmp---he-is-making-a-point-unix-'>Against /tmp - He is making a point <span class='inlinecode'>#unix</span> ...</a></li>
<li>⇢ ⇢ <a href='#random-weird-things-part-2-blog-'>Random Weird Things Part 2: <span class='inlinecode'>#blog</span> ...</a></li>
<li>⇢ ⇢ <a href='#as-a-former-pebble-user-and-fan-thats-'>As a former <span class='inlinecode'>#Pebble</span> user and fan, thats ...</a></li>
<li>⇢ ⇢ <a href='#i-think-i-am-slowly-getting-the-point-of-cue-'>I think I am slowly getting the point of Cue. ...</a></li>
<li>⇢ ⇢ <a href='#jonathan-s-reflection-of-10-years-of-'>Jonathan&#39;s reflection of 10 years of ...</a></li>
<li>⇢ ⇢ <a href='#really-enjoyed-reading-this-easily-digestible-'>Really enjoyed reading this. Easily digestible ...</a></li>
<li>⇢ ⇢ <a href='#some-great-advice-from-40-years-of-experience-'>Some great advice from 40 years of experience ...</a></li>
<li>⇢ ⇢ <a href='#i-enjoyed-this-talk-some-recipes-i-knew-'>I enjoyed this talk, some recipes I knew ...</a></li>
<li>⇢ ⇢ <a href='#a-way-of-how-to-add-the-version-info-to-the-go-'>A way of how to add the version info to the Go ...</a></li>
<li>⇢ ⇢ <a href='#in-other-words-using-tparallel-for-'>In other words, using t.Parallel() for ...</a></li>
<li>⇢ ⇢ <a href='#neat-little-blog-post-showcasing-various-'>Neat little blog post, showcasing various ...</a></li>
<li>⇢ ⇢ <a href='#the-smallest-thing-in-go-golang-'>The smallest thing in Go <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#fun-with-defer-in-golang-i-did-t-know-that-'>Fun with defer in <span class='inlinecode'>#golang</span>, I didn&#39;t know that ...</a></li>
<li>⇢ ⇢ <a href='#what-i-like-about-go-is-that-it-is-still-'>What I like about Go is that it is still ...</a></li>
<li>⇢ <a href='#march-2025'>March 2025</a></li>
<li>⇢ ⇢ <a href='#television-has-somewhat-transformed-how-i-work-'>Television has somewhat transformed how I work ...</a></li>
<li>⇢ ⇢ <a href='#once-in-a-while-i-like-to-read-a-book-about-a-'>Once in a while, I like to read a book about a ...</a></li>
<li>⇢ ⇢ <a href='#as-you-may-have-noticed-i-like-to-share-on-'>As you may have noticed, I like to share on ...</a></li>
<li>⇢ ⇢ <a href='#personally-i-think-ai-llms-are-pretty-'>Personally, I think AI (LLMs) are pretty ...</a></li>
<li>⇢ ⇢ <a href='#type-aliases-in-golang-soon-also-work-with-'>Type aliases in <span class='inlinecode'>#golang</span>, soon also work with ...</a></li>
<li>⇢ ⇢ <a href='#perl-my-first-love-of-programming-'><span class='inlinecode'>#Perl</span>, my "first love" of programming ...</a></li>
<li>⇢ ⇢ <a href='#i-guess-there-are-valid-reasons-for-phttpdget-'>I guess there are valid reasons for phttpdget, ...</a></li>
<li>⇢ ⇢ <a href='#this-is-one-of-the-reasons-why-i-like-'>This is one of the reasons why I like ...</a></li>
<li>⇢ ⇢ <a href='#advanced-concurrency-patterns-with-golang-'>Advanced Concurrency Patterns with <span class='inlinecode'>#Golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#sqlite-was-designed-as-an-tcl-extension-'><span class='inlinecode'>#SQLite</span> was designed as an <span class='inlinecode'>#TCL</span> extension. ...</a></li>
<li>⇢ ⇢ <a href='#git-provides-automatic-rendering-of-markdown-'>Git provides automatic rendering of Markdown ...</a></li>
<li>⇢ ⇢ <a href='#these-are-some-neat-little-go-tips-linters-'>These are some neat little Go tips. Linters ...</a></li>
<li>⇢ ⇢ <a href='#this-is-a-great-introductory-blog-post-about-'>This is a great introductory blog post about ...</a></li>
<li>⇢ ⇢ <a href='#maps-in-go-under-the-hood-golang-'>Maps in Go under the hood <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#i-found-that-working-on-multiple-side-projects-'>I found that working on multiple side projects ...</a></li>
<li>⇢ ⇢ <a href='#i-have-been-in-incidents-understandably-'>I have been in incidents. Understandably, ...</a></li>
<li>⇢ ⇢ <a href='#i-dont-understand-what-it-is-certificates-are-'>I dont understand what it is. Certificates are ...</a></li>
<li>⇢ ⇢ <a href='#don-t-just-blindly-trust-llms-i-recently-'>Don&#39;t just blindly trust LLMs. I recently ...</a></li>
<li>⇢ <a href='#april-2025'>April 2025</a></li>
<li>⇢ ⇢ <a href='#i-knew-about-any-being-equivalent-to-'>I knew about any being equivalent to ...</a></li>
<li>⇢ ⇢ <a href='#neat-summary-of-new-perl-features-per-'>Neat summary of new <span class='inlinecode'>#Perl</span> features per ...</a></li>
<li>⇢ ⇢ <a href='#errorsas-checks-for-the-error-type-whereas-'>errors.As() checks for the error type, whereas ...</a></li>
<li>⇢ ⇢ <a href='#good-stuff-10-years-of-functional-options-and-'>Good stuff: 10 years of functional options and ...</a></li>
<li>⇢ ⇢ <a href='#i-had-some-fun-with-freebsd-bhyve-and-'>I had some fun with <span class='inlinecode'>#FreeBSD</span>, <span class='inlinecode'>#Bhyve</span> and ...</a></li>
<li>⇢ ⇢ <a href='#the-moment-your-blog-receives-prs-for-typo-'>The moment your blog receives PRs for typo ...</a></li>
<li>⇢ ⇢ <a href='#one-thing-not-mentioned-is-that-openrsync-s-'>One thing not mentioned is that <span class='inlinecode'>#OpenRsync</span>&#39;s ...</a></li>
<li>⇢ ⇢ <a href='#this-is-an-interesting-elixir-pipes-operator-'>This is an interesting <span class='inlinecode'>#Elixir</span> pipes operator ...</a></li>
<li>⇢ ⇢ <a href='#the-story-of-how-my-favorite-golang-book-was-'>The story of how my favorite <span class='inlinecode'>#Golang</span> book was ...</a></li>
<li>⇢ ⇢ <a href='#these-are-my-personal-book-notes-from-daniel-'>These are my personal book notes from Daniel ...</a></li>
<li>⇢ ⇢ <a href='#i-certainly-learned-a-lot-reading-this-llm-'>I certainly learned a lot reading this <span class='inlinecode'>#llm</span> ...</a></li>
<li>⇢ ⇢ <a href='#writing-indempotent-bash-scripts-'>Writing idempotent <span class='inlinecode'>#Bash</span> scripts ...</a></li>
<li>⇢ ⇢ <a href='#regarding-ai-for-code-generation-you-should-'>Regarding <span class='inlinecode'>#AI</span> for code generation. You should ...</a></li>
<li>⇢ ⇢ <a href='#i-like-the-rocky-metaphor-and-this-post-also-'>I like the Rocky metaphor. And this post also ...</a></li>
<li>⇢ <a href='#may-2025'>May 2025</a></li>
<li>⇢ ⇢ <a href='#there-s-now-also-a-fish-shell-edition-of-my-'>There&#39;s now also a <span class='inlinecode'>#Fish</span> shell edition of my ...</a></li>
<li>⇢ ⇢ <a href='#i-loved-this-talk-it-s-about-how-you-can-'>I loved this talk. It&#39;s about how you can ...</a></li>
<li>⇢ ⇢ <a href='#some-unexpected-golang-stuff-ppl-say-that-'>Some unexpected <span class='inlinecode'>#golang</span> stuff, ppl say, that ...</a></li>
<li>⇢ ⇢ <a href='#with-the-advent-of-ai-and-llms-i-have-observed-'>With the advent of AI and LLMs, I have observed ...</a></li>
<li>⇢ ⇢ <a href='#for-science-fun-and-profit-i-set-up-a-'>For science, fun and profit, I set up a ...</a></li>
<li>⇢ ⇢ <a href='#ever-wondered-about-the-hung-task-linux-'>Ever wondered about the hung task Linux ...</a></li>
<li>⇢ ⇢ <a href='#a-bit-of-fun-the-fortran-hating-gateway--'>A bit of <span class='inlinecode'>#fun</span>: The FORTRAN hating gateway ― ...</a></li>
<li>⇢ ⇢ <a href='#so-golang-was-invented-while-engineers-at-'>So, Golang was invented while engineers at ...</a></li>
<li>⇢ ⇢ <a href='#i-couldn-t-do-without-here-docs-if-they-did-'>I couldn&#39;t do without here-docs. If they did ...</a></li>
<li>⇢ ⇢ <a href='#i-started-using-computers-as-a-kid-on-ms-dos-'>I started using computers as a kid on MS-DOS ...</a></li>
<li>⇢ ⇢ <a href='#thats-interesting-running-android-in-'>Thats interesting, running <span class='inlinecode'>#Android</span> in ...</a></li>
<li>⇢ ⇢ <a href='#before-wiping-the-pre-installed-windows-11-'>Before wiping the pre-installed <span class='inlinecode'>#Windows</span> 11 ...</a></li>
<li>⇢ ⇢ <a href='#some-might-hate-me-saying-this-but-didnt-'>Some might hate me saying this, but didnt ...</a></li>
<li>⇢ ⇢ <a href='#wouldn-t-still-do-that-even-with-100-test-'>Wouldn&#39;t still do that, even with 100% test ...</a></li>
<li>⇢ ⇢ <a href='#some-neat-slice-tricks-for-go-golang-'>Some neat slice tricks for Go: <span class='inlinecode'>#golang</span> ...</a></li>
<li>⇢ ⇢ <a href='#i-understand-that-kubernetes-is-not-for-'>I understand that Kubernetes is not for ...</a></li>
<li>⇢ <a href='#june-2025'>June 2025</a></li>
<li>⇢ ⇢ <a href='#some-great-advices-will-try-out-some-of-them-'>Some great advices, will try out some of them! ...</a></li>
<li>⇢ ⇢ <a href='#in-golang-values-are-actually-copied-when-'>In <span class='inlinecode'>#Golang</span>, values are actually copied when ...</a></li>
<li>⇢ ⇢ <a href='#this-is-a-great-little-tutorial-for-searching-'>This is a great little tutorial for searching ...</a></li>
<li>⇢ ⇢ <a href='#the-mov-instruction-of-a-cpu-is-turing-'>The mov instruction of a CPU is turing ...</a></li>
<li>⇢ ⇢ <a href='#i-removed-the-social-media-profile-from-my-'>I removed the social media profile from my ...</a></li>
<li>⇢ ⇢ <a href='#so-want-a-real-recent-unix-use-aix-macos-'>So want a "real" recent UNIX? Use AIX! <span class='inlinecode'>#macos</span> ...</a></li>
<li>⇢ ⇢ <a href='#this-episode-i-think-is-kind-of-an-eye-opener-'>This episode, I think, is kind of an eye-opener ...</a></li>
<li>⇢ ⇢ <a href='#my-openbsd-blog-setup-got-mentioned-in-the-'>My <span class='inlinecode'>#OpenBSD</span> blog setup got mentioned in the ...</a></li>
<li>⇢ ⇢ <a href='#golang-is-the-best-when-it-comes-to-agentic-'><span class='inlinecode'>#Golang</span> is the best when it comes to agentic ...</a></li>
<li>⇢ ⇢ <a href='#where-zsh-is-better-than-bash-'>Where <span class='inlinecode'>#zsh</span> is better than <span class='inlinecode'>#bash</span> ...</a></li>
<li>⇢ ⇢ <a href='#i-really-enjoyed-this-talk-about-obscure-go-'>I really enjoyed this talk about obscure Go ...</a></li>
<li>⇢ ⇢ <a href='#commenting-your-regular-expression-is-generally-'>Commenting your regular expression is generally ...</a></li>
<li>⇢ ⇢ <a href='#you-have-to-make-a-decision-for-yourself-but-'>You have to make a decision for yourself, but ...</a></li>
<li>⇢ ⇢ <a href='#100-go-mistakes-and-how-to-avoid-them-is-one-'>"100 Go Mistakes and How to Avoid Them" is one ...</a></li>
<li>⇢ ⇢ <a href='#the-ruby-data-class-seems-quite-helpful-'>The <span class='inlinecode'>#Ruby</span> Data class seems quite helpful ...</a></li>
</ul><br />
<h2 style='display: inline' id='january-2025'>January 2025</h2><br />
<br />
<h3 style='display: inline' id='i-am-currently-binge-listening-to-the-google-'>I am currently binge-listening to the Google ...</h3><br />
<br />
<span>I am currently binge-listening to the Google <span class='inlinecode'>#SRE</span> ProdCast. It&#39;s really great to learn about the stories of individual SREs and their journeys. It is not just about SREs at Google; there are also external guests.</span><br />
<br />
<a class='textlink' href='https://sre.google/prodcast/'>sre.google/prodcast/</a><br />
<br />
<h3 style='display: inline' id='recently-there-was-a-5000-loc-bash-'>Recently, there was a &gt;5000 LOC <span class='inlinecode'>#bash</span> ...</h3><br />
<br />
<span>Recently, there was a &gt;5000 LOC <span class='inlinecode'>#bash</span> codebase at work that reported the progress of a migration; nobody understood it, and it was wonky (sometimes it would not return the desired results). On top of that, the coding style was very bad (I could rant forever here). The engineer who wrote it left the company. I rewrote it in <span class='inlinecode'>#Perl</span> in about 300 LOC. Colleagues asked why not Python. Perl is the perfect choice here - it&#39;s even in its name: Practical Extraction and Report Language!</span><br />
<br />
<h3 style='display: inline' id='ghostty-is-a-terminal-emulator-that-was-'>Ghostty is a terminal emulator that was ...</h3><br />
<br />
<span>Ghostty is a terminal emulator that was recently released publicly as open-source. I love that it works natively on both Linux and macOS; it looks great (font rendering) and is fast and customizable via a config file (which I manage with a config management system). Ghostty is a passion project written in Zig; the author loved the community so much while working on it that he donated $300k to the Zig Foundation. <span class='inlinecode'>#terminal</span> <span class='inlinecode'>#emulator</span></span><br />
<br />
<a class='textlink' href='https://ghostty.org'>ghostty.org</a><br />
<br />
<h3 style='display: inline' id='go-is-not-an-easy-programming-language-don-t-'>Go is not an easy programming language. Don&#39;t ...</h3><br />
<br />
<span>Go is not an easy programming language. Don&#39;t confuse easy with simple syntax. I&#39;d agree with this. With the recent addition of generics to the language, I also feel that even the syntax stops being simple. Also, simplicity is complex (especially in how the language works under the hood - there are many mechanics you need to know if you really want to master the language). <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://www.arp242.net/go-easy.html'>www.arp242.net/go-easy.html</a><br />
<br />
<h3 style='display: inline' id='how-will-ai-change-software-engineering-or-has-'>How will AI change software engineering (or has ...</h3><br />
<br />
<span>How will AI change software engineering (or has it already)? The bottom line is that less experienced engineers may have problems (accepting incomplete or incorrect programs, only reaching 70 percent solutions), while experienced engineers can leverage AI to boost their performance as they know how to fix the remaining 30 percent of the generated code. <span class='inlinecode'>#ai</span> <span class='inlinecode'>#engineering</span> <span class='inlinecode'>#software</span></span><br />
<br />
<a class='textlink' href='https://newsletter.pragmaticengineer.com/p/how-ai-will-change-software-engineering'>newsletter.pragmaticengineer.com/p/how-ai-will-change-software-engineering</a><br />
<br />
<h3 style='display: inline' id='eliminating-toil---toil-is-not-always-a-bad-'>Eliminating toil - Toil is not always a bad ...</h3><br />
<br />
<span>Eliminating toil - Toil is not always a bad thing - some even enjoy toil - it is calming in small amounts - but it becomes toxic in large amounts - <span class='inlinecode'>#SRE</span></span><br />
<br />
<a class='textlink' href='https://sre.google/sre-book/eliminating-toil/'>sre.google/sre-book/eliminating-toil/</a><br />
<br />
<h3 style='display: inline' id='fun-read-how-about-using-the-character-'>Fun read. How about using the character ...</h3><br />
<br />
<span>Fun read. How about using the character sequence :-) as a statement separator in a programming language?</span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/researching-why-we-use-semicolons-as-statement-terminators/'>ntietz.com/blog/researching-why-we-use-semicolons-as-statement-terminators/</a><br />
<br />
<h3 style='display: inline' id='thats-unexpected-you-cant-remove-a-nan-key-'>That&#39;s unexpected, you can&#39;t remove a NaN key ...</h3><br />
<br />
<span>That&#39;s unexpected: you can&#39;t remove a NaN key from a map without clearing it! <span class='inlinecode'>#golang</span> via @wallabagapp</span><br />
<br />
<a class='textlink' href='https://unexpected-go.com/you-cant-remove-a-nan-key-from-a-map-without-clearing-it.html'>unexpected-go.com/you-cant-remove-a-nan-key-from-a-map-without-clearing-it.html</a><br />
<br />
<h3 style='display: inline' id='nice-refresher-for-shell-bash-zsh-'>Nice refresher for <span class='inlinecode'>#shell</span> <span class='inlinecode'>#bash</span> <span class='inlinecode'>#zsh</span> ...</h3><br />
<br />
<span>Nice refresher for <span class='inlinecode'>#shell</span> <span class='inlinecode'>#bash</span> <span class='inlinecode'>#zsh</span> redirection rules</span><br />
<br />
<a class='textlink' href='https://rednafi.com/misc/shell_redirection/'>rednafi.com/misc/shell_redirection/</a><br />
<br />
<h3 style='display: inline' id='i-think-discussing-action-items-in-incident-'>I think discussing action items in incident ...</h3><br />
<br />
<span>I think discussing action items in incident reviews is important. At least the obvious should be captured and noted down. It does not mean that the action items need to be fully refined in the review meeting; that would be out of scope, in my opinion.</span><br />
<br />
<a class='textlink' href='https://surfingcomplexity.blog/2024/09/28/why-i-dont-like-discussing-action-items-during-incident-reviews/'>surfingcomplexity.blog/2024/09/28/why-..-..-action-items-during-incident-reviews/</a><br />
<br />
<h3 style='display: inline' id='at-first-functional-options-add-a-bit-of-'>At first, functional options add a bit of ...</h3><br />
<br />
<span>At first, functional options add a bit of boilerplate, but they turn out to be quite neat, especially when you have very long parameter lists that need to be made neat and tidy. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://www.calhoun.io/using-functional-options-instead-of-method-chaining-in-go/'>www.calhoun.io/using-functional-options-instead-of-method-chaining-in-go/</a><br />
<br />
<h3 style='display: inline' id='in-the-working-with-an-sre-interview-i-have-'>In the "Working with an SRE Interview" I have ...</h3><br />
<br />
<span>In the "Working with an SRE Interview" I have been asked about what it&#39;s like working with an SRE! We covered much more in depth, but we decided not to make the final version too long! <span class='inlinecode'>#sre</span> <span class='inlinecode'>#interview</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.gmi'>foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.html'>foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.html</a><br />
<br />
<h3 style='display: inline' id='small-introduction-to-the-android-'>Small introduction to the <span class='inlinecode'>#Android</span> ...</h3><br />
<br />
<span>Small introduction to the <span class='inlinecode'>#Android</span> distribution called <span class='inlinecode'>#GrapheneOS</span>. I am using a Pixel 7 Pro myself, which comes with "only" 5 years of support (not yet 7 years like the Pixel 8 and 9 series). I also wrote about GrapheneOS here once:</span><br />
<br />
<a class='textlink' href='https://dataswamp.org/~solene/2025-01-12-intro-to-grapheneos.html'>dataswamp.org/~solene/2025-01-12-intro-to-grapheneos.html</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2023-01-23-why-grapheneos-rox.gmi'>foo.zone/gemfeed/2023-01-23-why-grapheneos-rox.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2023-01-23-why-grapheneos-rox.html'>foo.zone/gemfeed/2023-01-23-why-grapheneos-rox.html</a><br />
<br />
<h3 style='display: inline' id='helix-202501-has-been-released-the-completion-'>Helix 2025.01 has been released. The completion ...</h3><br />
<br />
<span>Helix 2025.01 has been released. The completion of path names and the snippet functionality will be particularly useful for me. Overall, it&#39;s a great release. The release notes cover only some highlights, but there are many more changes in this version, so also have a look at the Changelog! <span class='inlinecode'>#HelixEditor</span></span><br />
<br />
<a class='textlink' href='https://helix-editor.com/news/release-25-01-highlights/'>helix-editor.com/news/release-25-01-highlights/</a><br />
<br />
<h3 style='display: inline' id='i-found-these-are-excellent-examples-of-how-'>I found these to be excellent examples of how ...</h3><br />
<br />
<span>I found these to be excellent examples of how <span class='inlinecode'>#OpenBSD</span>&#39;s <span class='inlinecode'>#relayd</span> can be used.</span><br />
<br />
<a class='textlink' href='https://www.tumfatig.net/2023/using-openbsd-relayd8-as-an-application-layer-gateway/'>www.tumfatig.net/2023/using-openbsd-relayd8-as-an-application-layer-gateway/</a><br />
<br />
<h3 style='display: inline' id='llms-for-ops-summaries-of-logs-probabilities-'>LLMs for Ops? Summaries of logs, probabilities ...</h3><br />
<br />
<span>LLMs for Ops? Summaries of logs, probabilities about correctness, auto-generating Ansible, some use cases are there. Wouldn&#39;t trust it fully, though.</span><br />
<br />
<a class='textlink' href='https://youtu.be/WodaffxVq-E?si=noY0egrfl5izCSQI'>youtu.be/WodaffxVq-E?si=noY0egrfl5izCSQI</a><br />
<br />
<h3 style='display: inline' id='enjoying-an-apc-power-ups-bx750mi-in-my-'>Enjoying an APC Power-UPS BX750MI in my ...</h3><br />
<br />
<span>Enjoying an APC Power-UPS BX750MI in my <span class='inlinecode'>#homelab</span> with <span class='inlinecode'>#FreeBSD</span> and apcupsd. I can easily use the UPS status to auto-shutdown a cluster of FreeBSD machines on a power cut. One FreeBSD machine acts as the apcupsd master, connected via USB to the APC, while the remaining machines read the status remotely via the apcupsd network port from the master. However, it won&#39;t work when the master is down. <span class='inlinecode'>#APC</span> <span class='inlinecode'>#UPS</span></span><br />
<br />
<h3 style='display: inline' id='even-in-the-projects-where-i-m-the-only-'>"Even in the projects where I&#39;m the only ...</h3><br />
<br />
<span>"Even in the projects where I&#39;m the only person, there are at least three people involved: past me, present me, and future me." - Quote from <span class='inlinecode'>#software</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://liw.fi/40/#index1h1'>liw.fi/40/#index1h1</a><br />
<br />
<h3 style='display: inline' id='connecting-an-ups-to-my-freebsd-cluster-'>Connecting an <span class='inlinecode'>#UPS</span> to my <span class='inlinecode'>#FreeBSD</span> cluster ...</h3><br />
<br />
<span>Connecting an <span class='inlinecode'>#UPS</span> to my <span class='inlinecode'>#FreeBSD</span> cluster in my <span class='inlinecode'>#homelab</span>, protecting it from power cuts!</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.gmi'>foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html</a><br />
<br />
<h3 style='display: inline' id='so-the-co-founder-and-cto-of-honeycombio-and-'>So, the Co-founder and CTO of honeycomb.io and ...</h3><br />
<br />
<span>So, the Co-founder and CTO of honeycomb.io and author of the book Observability Engineering always hated observability. And the Distinguished Software Engineer and host of The Pragmatic Engineer can&#39;t pronounce the word Observability. :-) No, jokes aside, I liked this podcast episode of The Pragmatic Engineer: Observability: the present and future, with Charity Majors <span class='inlinecode'>#sre</span> <span class='inlinecode'>#observability</span></span><br />
<br />
<a class='textlink' href='https://newsletter.pragmaticengineer.com/p/observability-the-present-and-future'>newsletter.pragmaticengineer.com/p/observability-the-present-and-future</a><br />
<br />
<h2 style='display: inline' id='february-2025'>February 2025</h2><br />
<br />
<h3 style='display: inline' id='i-don-t-know-about-you-but-at-work-i-usually-'>I don&#39;t know about you, but at work, I usually ...</h3><br />
<br />
<span>I don&#39;t know about you, but at work, I usually deal with complex setups involving thousands of servers and work in a complex hybrid microservices-based environment (cloud and on-prem), so homelabbing (as simple as described in my blog post) is really relaxing and recreational. So, I was homelabbing a bit again, securing my <span class='inlinecode'>#FreeBSD</span> cluster from power cuts. <span class='inlinecode'>#UPS</span> <span class='inlinecode'>#recreative</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.gmi'>foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html</a><br />
<br />
<h3 style='display: inline' id='great-proposal-got-accepted-by-the-goteam-for-'>Great proposal (got accepted by the Go team) for ...</h3><br />
<br />
<span>Great proposal (got accepted by the Go team) for safer file system open functions <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://github.com/golang/go/issues/67002'>github.com/golang/go/issues/67002</a><br />
<br />
<h3 style='display: inline' id='my-gemtexter-has-only-1320-loc-the-biggest-'>My Gemtexter has only 1320 LOC.... The Biggest ...</h3><br />
<br />
<span>My Gemtexter has only 1320 LOC.... The Biggest Shell Programs in the World are huuuge... <span class='inlinecode'>#shell</span> <span class='inlinecode'>#sh</span></span><br />
<br />
<a class='textlink' href='https://github.com/oils-for-unix/oils/wiki/The-Biggest-Shell-Programs-in-the-World'>github.com/oils-for-unix/oils/wiki/The-Biggest-Shell-Programs-in-the-World</a><br />
<br />
<h3 style='display: inline' id='against-tmp---he-is-making-a-point-unix-'>Against /tmp - He is making a point <span class='inlinecode'>#unix</span> ...</h3><br />
<br />
<span>Against /tmp - He is making a point <span class='inlinecode'>#unix</span> <span class='inlinecode'>#linux</span> <span class='inlinecode'>#bsd</span> <span class='inlinecode'>#filesystem</span> via @wallabagapp</span><br />
<br />
<a class='textlink' href='https://dotat.at/@/2024-10-22-tmp.html'>dotat.at/@/2024-10-22-tmp.html</a><br />
<br />
<h3 style='display: inline' id='random-weird-things-part-2-blog-'>Random Weird Things Part 2: <span class='inlinecode'>#blog</span> ...</h3><br />
<br />
<span>Random Weird Things Part 2: <span class='inlinecode'>#blog</span> <span class='inlinecode'>#computing</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-02-08-random-weird-things-ii.gmi'>foo.zone/gemfeed/2025-02-08-random-weird-things-ii.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-02-08-random-weird-things-ii.html'>foo.zone/gemfeed/2025-02-08-random-weird-things-ii.html</a><br />
<br />
<h3 style='display: inline' id='as-a-former-pebble-user-and-fan-thats-'>As a former <span class='inlinecode'>#Pebble</span> user and fan, that's ...</h3><br />
<br />
<span>As a former <span class='inlinecode'>#Pebble</span> user and fan, that&#39;s awesome news. PebbleOS is now open source and there will soon be a new watch. I don&#39;t know about you, but I will be one of the first getting one :-) <span class='inlinecode'>#foss</span></span><br />
<br />
<a class='textlink' href='https://ericmigi.com/blog/why-were-bringing-pebble-back'>ericmigi.com/blog/why-were-bringing-pebble-back</a><br />
<br />
<h3 style='display: inline' id='i-think-i-am-slowly-getting-the-point-of-cue-'>I think I am slowly getting the point of Cue. ...</h3><br />
<br />
<span>I think I am slowly getting the point of Cue. For example, it can replace both a JSON file and a JSON Schema. Furthermore, you can convert it from and into different formats (Cue, JSON, YAML, Go data types, ...), and you can nicely embed this into a Go project as well. <span class='inlinecode'>#cue</span> <span class='inlinecode'>#cuelang</span> <span class='inlinecode'>#golang</span> <span class='inlinecode'>#configuration</span></span><br />
<br />
<a class='textlink' href='https://cuelang.org'>cuelang.org</a><br />
<br />
<h3 style='display: inline' id='jonathan-s-reflection-of-10-years-of-'>Jonathan&#39;s reflection on 10 years of ...</h3><br />
<br />
<span>Jonathan&#39;s reflection on 10 years of programming!</span><br />
<br />
<a class='textlink' href='https://jonathan-frere.com/posts/10-years-of-programming/'>jonathan-frere.com/posts/10-years-of-programming/</a><br />
<br />
<h3 style='display: inline' id='really-enjoyed-reading-this-easily-digestible-'>Really enjoyed reading this. Easily digestible ...</h3><br />
<br />
<span>Really enjoyed reading this. Easily digestible summary of what&#39;s new in Go 1.24. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://antonz.org/go-1-24/'>antonz.org/go-1-24/</a><br />
<br />
<h3 style='display: inline' id='some-great-advice-from-40-years-of-experience-'>Some great advice from 40 years of experience ...</h3><br />
<br />
<span>Some great advice from 40 years of experience as a software developer. <span class='inlinecode'>#software</span> <span class='inlinecode'>#development</span></span><br />
<br />
<a class='textlink' href='https://liw.fi/40/#index1h1'>liw.fi/40/#index1h1</a><br />
<br />
<h3 style='display: inline' id='i-enjoyed-this-talk-some-recipes-i-knew-'>I enjoyed this talk, some recipes I knew ...</h3><br />
<br />
<span>I enjoyed this talk; some recipes I knew already, others were new to me. The "line of sight" is my favourite, which I always tend to follow. I also liked the example where the speaker simplified a "complex" nested function into two non-nested if statements. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://www.youtube.com/watch?v=zdKHq9Xo4OY&amp;list=WL&amp;index=5'>www.youtube.com/watch?v=zdKHq9Xo4OY&amp;list=WL&amp;index=5</a><br />
<br />
<h3 style='display: inline' id='a-way-of-how-to-add-the-version-info-to-the-go-'>A way of how to add the version info to the Go ...</h3><br />
<br />
<span>A way to add version info to the Go binary. ... I personally just hardcode the version number in version.go and update it there manually for each release. But with Go 1.24, I will try embedding it! <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://jerrynsh.com/3-easy-ways-to-add-version-flag-in-go/'>jerrynsh.com/3-easy-ways-to-add-version-flag-in-go/</a><br />
<br />
<h3 style='display: inline' id='in-other-words-using-tparallel-for-'>In other words, using t.Parallel() for ...</h3><br />
<br />
<span>In other words, using t.Parallel() for lightweight unit tests will likely make them slower.... <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://threedots.tech/post/go-test-parallelism/'>threedots.tech/post/go-test-parallelism/</a><br />
<br />
<h3 style='display: inline' id='neat-little-blog-post-showcasing-various-'>Neat little blog post, showcasing various ...</h3><br />
<br />
<span>Neat little blog post, showcasing various methods used for generic programming before the introduction of generics. Only reflection wasn&#39;t listed. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://bitfieldconsulting.com/posts/generics'>bitfieldconsulting.com/posts/generics</a><br />
<br />
<h3 style='display: inline' id='the-smallest-thing-in-go-golang-'>The smallest thing in Go <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>The smallest thing in Go <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://bitfieldconsulting.com/posts/iota'>bitfieldconsulting.com/posts/iota</a><br />
<br />
<h3 style='display: inline' id='fun-with-defer-in-golang-i-did-t-know-that-'>Fun with defer in <span class='inlinecode'>#golang</span>, I didn't know that ...</h3><br />
<br />
<span>Fun with defer in <span class='inlinecode'>#golang</span>; I didn&#39;t know that a defer object can be either heap- or stack-allocated. And there are some rules for inlining, too.</span><br />
<br />
<a class='textlink' href='https://victoriametrics.com/blog/defer-in-go/'>victoriametrics.com/blog/defer-in-go/</a><br />
<br />
<h3 style='display: inline' id='what-i-like-about-go-is-that-it-is-still-'>What I like about Go is that it is still ...</h3><br />
<br />
<span>What I like about Go is that it is still possible to understand what&#39;s going on under the hood, whereas in JVM-based languages (for example) or dynamic languages, there are too many optimizations and abstractions. However, you don&#39;t need to know too much about how it works under the hood in Go (like memory management in C). It&#39;s just the fact that you can—you have a choice. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://blog.devtrovert.com/p/goroutine-scheduler-revealed-youll'>blog.devtrovert.com/p/goroutine-scheduler-revealed-youll</a><br />
<br />
<h2 style='display: inline' id='march-2025'>March 2025</h2><br />
<br />
<h3 style='display: inline' id='television-has-somewhat-transformed-how-i-work-'>Television has somewhat transformed how I work ...</h3><br />
<br />
<span>Television has somewhat transformed how I work in the shell on a day-to-day basis. It is especially useful for me in navigating all the local Git repositories on my laptop. I have bound Ctrl+G in my shell for that now. <span class='inlinecode'>#television</span> <span class='inlinecode'>#tv</span> <span class='inlinecode'>#tool</span> <span class='inlinecode'>#shell</span></span><br />
<br />
<a class='textlink' href='https://github.com/alexpasmantier/television'>github.com/alexpasmantier/television</a><br />
<br />
<h3 style='display: inline' id='once-in-a-while-i-like-to-read-a-book-about-a-'>Once in a while, I like to read a book about a ...</h3><br />
<br />
<span>Once in a while, I like to read a book about a programming language I have been using for a while to find new tricks or to refresh and sharpen my knowledge about it. I just finished reading "Programming Ruby 3.3," and I must say this is my favorite Ruby book now. What makes this one so special is that it is quite recent and covers all the new features. <span class='inlinecode'>#ruby</span> <span class='inlinecode'>#programming</span> <span class='inlinecode'>#coding</span></span><br />
<br />
<a class='textlink' href='https://pragprog.com/titles/ruby5/programming-ruby-3-3-5th-edition/'>pragprog.com/titles/ruby5/programming-ruby-3-3-5th-edition/</a><br />
<br />
<h3 style='display: inline' id='as-you-may-have-noticed-i-like-to-share-on-'>As you may have noticed, I like to share on ...</h3><br />
<br />
<span>As you may have noticed, I like to share on Mastodon and LinkedIn all the technical things I find interesting, and this blog post is technically all about that. Having said that, I love these tiny side projects. They are so relaxing to work on! <span class='inlinecode'>#gos</span> <span class='inlinecode'>#golang</span> <span class='inlinecode'>#tool</span> <span class='inlinecode'>#programming</span> <span class='inlinecode'>#fun</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.gmi'>foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.html'>foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.html</a><br />
<br />
<h3 style='display: inline' id='personally-i-think-ai-llms-are-pretty-'>Personally, I think AI (LLMs) are pretty ...</h3><br />
<br />
<span>Personally, I think AI (LLMs) is pretty useful. But there&#39;s really some hype around it. However, AI is here to stay; it&#39;s not all hype.</span><br />
<br />
<a class='textlink' href='https://unixdigest.com/articles/i-passionately-hate-hype-especially-the-ai-hype.html'>unixdigest.com/articles/i-passionately-hate-hype-especially-the-ai-hype.html</a><br />
<br />
<h3 style='display: inline' id='type-aliases-in-golang-soon-also-work-with-'>Type aliases in <span class='inlinecode'>#golang</span>, soon also work with ...</h3><br />
<br />
<span>Type aliases in <span class='inlinecode'>#golang</span> will soon also work with generics. It&#39;s an interesting feature, useful for refactorings and simplifications.</span><br />
<br />
<a class='textlink' href='https://go.dev/blog/alias-names'>go.dev/blog/alias-names</a><br />
<br />
<h3 style='display: inline' id='perl-my-first-love-of-programming-'><span class='inlinecode'>#Perl</span>, my "first love" of programming ...</h3><br />
<br />
<span><span class='inlinecode'>#Perl</span>, my "first love" of programming languages. Still there, and I still use it here and there (but not as my primary language at the moment). And others do so as well, apparently. Which makes me happy! :-)</span><br />
<br />
<a class='textlink' href='https://dev.to/fa5tworm/why-perl-remains-indispensable-in-the-age-of-modern-programming-languages-2io0'>dev.to/fa5tworm/why-perl-remains-indis..-..e-of-modern-programming-languages-2io0</a><br />
<br />
<h3 style='display: inline' id='i-guess-there-are-valid-reasons-for-phttpdget-'>I guess there are valid reasons for phttpdget, ...</h3><br />
<br />
<span>I guess there are valid reasons for phttpdget which I don&#39;t know about. Maybe complexity and/or licensing of other tools. <span class='inlinecode'>#FreeBSD</span></span><br />
<br />
<a class='textlink' href='https://l33t.codes/2024/12/05/Updating-FreeBSD-and-Re-Inventing-the-Wheel/'>l33t.codes/2024/12/05/Updating-FreeBSD-and-Re-Inventing-the-Wheel/</a><br />
<br />
<h3 style='display: inline' id='this-is-one-of-the-reasons-why-i-like-'>This is one of the reasons why I like ...</h3><br />
<br />
<span>This is one of the reasons why I like terminal-based applications so much—they are usually more lightweight than GUI-based ones (and also more flexible).</span><br />
<br />
<a class='textlink' href='https://www.arp242.net/stupid-light.html'>www.arp242.net/stupid-light.html</a><br />
<br />
<h3 style='display: inline' id='advanced-concurrency-patterns-with-golang-'>Advanced Concurrency Patterns with <span class='inlinecode'>#Golang</span> ...</h3><br />
<br />
<span>Advanced Concurrency Patterns with <span class='inlinecode'>#Golang</span></span><br />
<br />
<a class='textlink' href='https://blogtitle.github.io/go-advanced-concurrency-patterns-part-1/'>blogtitle.github.io/go-advanced-concurrency-patterns-part-1/</a><br />
<br />
<h3 style='display: inline' id='sqlite-was-designed-as-an-tcl-extension-'><span class='inlinecode'>#SQLite</span> was designed as a <span class='inlinecode'>#TCL</span> extension. ...</h3><br />
<br />
<span><span class='inlinecode'>#SQLite</span> was designed as a <span class='inlinecode'>#TCL</span> extension. There are ~a trillion SQLite databases in active use. SQLite heavily relies on <span class='inlinecode'>#TCL</span>: C code generation via mksqlite3c.tcl (the C code isn&#39;t edited directly by the SQLite developers), testing, and doc generation. The devs use a custom editor written in Tcl/Tk called "e" to edit the source! There&#39;s a custom versioning system, Fossil, and a custom chat room written in Tcl/Tk!</span><br />
<br />
<a class='textlink' href='https://www.tcl-lang.org/community/tcl2017/assets/talk93/Paper.html'>www.tcl-lang.org/community/tcl2017/assets/talk93/Paper.html</a><br />
<br />
<h3 style='display: inline' id='git-provides-automatic-rendering-of-markdown-'>Git provides automatic rendering of Markdown ...</h3><br />
<br />
<span>"Git provides automatic rendering of Markdown files, including README.md, in a repository’s root directory" ... so much junk now in LLM-powered search engines... <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span></span><br />
<br />
<h3 style='display: inline' id='these-are-some-neat-little-go-tips-linters-'>These are some neat little Go tips. Linters ...</h3><br />
<br />
<span>These are some neat little Go tips. Linters already tell you when you silently omit a function return value, though. The slice filter without allocation trick is nice and simple. And I agree that switch statements are preferable to if-else statements. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://blog.devtrovert.com/p/go-ep5-avoid-contextbackground-make'>blog.devtrovert.com/p/go-ep5-avoid-contextbackground-make</a><br />
<br />
<h3 style='display: inline' id='this-is-a-great-introductory-blog-post-about-'>This is a great introductory blog post about ...</h3><br />
<br />
<span>This is a great introductory blog post about the Helix modal editor. It&#39;s also been my first choice for over a year now. I am really looking forward to the Steel plugin system, though. I don&#39;t think I need a lot of plugins, but one or two would certainly be on my wish list. <span class='inlinecode'>#HelixEditor</span> <span class='inlinecode'>#Helix</span></span><br />
<br />
<a class='textlink' href='https://felix-knorr.net/posts/2025-03-16-helix-review.html'>felix-knorr.net/posts/2025-03-16-helix-review.html</a><br />
<br />
<h3 style='display: inline' id='maps-in-go-under-the-hood-golang-'>Maps in Go under the hood <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>Maps in Go under the hood <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://victoriametrics.com/blog/go-map/'>victoriametrics.com/blog/go-map/</a><br />
<br />
<h3 style='display: inline' id='i-found-that-working-on-multiple-side-projects-'>I found that working on multiple side projects ...</h3><br />
<br />
<span>I found that working on multiple side projects concurrently is better than concentrating on just one. This seems inefficient, but if you start to lose motivation, you can temporarily switch to another one with full élan. Remember to stop starting and start finishing. This doesn&#39;t mean you should be working on 10+ side projects concurrently! Select your projects and commit to finishing them before starting the next thing. For example, my current limit of concurrent side projects is around five.</span><br />
<br />
<h3 style='display: inline' id='i-have-been-in-incidents-understandably-'>I have been in incidents. Understandably, ...</h3><br />
<br />
<span>I have been in incidents. Understandably, everyone wants the issue resolved as quickly as possible, and others want to know how long the TTR will be. IMHO, providing no estimates at all is no solution either. So maybe give a rough estimate but clearly communicate that the estimate is rough and that X, Y, and Z can interfere, meaning there is a chance it will take longer to resolve the incident. Just my thought. What&#39;s yours?</span><br />
<br />
<a class='textlink' href='https://firehydrant.com/blog/hot-take-dont-provide-incident-resolution-estimates/'>firehydrant.com/blog/hot-take-dont-provide-incident-resolution-estimates/</a><br />
<br />
<h3 style='display: inline' id='i-dont-understand-what-it-is-certificates-are-'>I don't understand what it is. Certificates are ...</h3><br />
<br />
<span>I don&#39;t understand what it is: certificates are so easy to monitor, but still, expirations cause so many incidents. <span class='inlinecode'>#sre</span></span><br />
<br />
<a class='textlink' href='https://securityboulevard.com/2024/10/dont-let-an-expired-certificate-cause-critical-downtime-prevent-outages-with-a-smart-clm/'>securityboulevard.com/2024/10/dont-let..-..time-prevent-outages-with-a-smart-clm/</a><br />
<br />
<h3 style='display: inline' id='don-t-just-blindly-trust-llms-i-recently-'>Don&#39;t just blindly trust LLMs. I recently ...</h3><br />
<br />
<span>Don&#39;t just blindly trust LLMs. I recently trusted an LLM, spent 1 hour debugging, and ultimately had to verify my assumption about <span class='inlinecode'>fcntl</span> behavior regarding inherited file descriptors in child processes manually with a C program, as the manual page wasn&#39;t clear to me. I could have done that immediately and I would have been done within 10 minutes. <span class='inlinecode'>#productivity</span> <span class='inlinecode'>#loss</span> <span class='inlinecode'>#llm</span> <span class='inlinecode'>#programming</span> <span class='inlinecode'>#C</span></span><br />
<br />
<h2 style='display: inline' id='april-2025'>April 2025</h2><br />
<br />
<h3 style='display: inline' id='i-knew-about-any-being-equivalent-to-'>I knew about any being equivalent to ...</h3><br />
<br />
<span>I knew about any being equivalent to interface{} in <span class='inlinecode'>#Golang</span>, but wasn&#39;t aware that it was introduced to Go because of generics.</span><br />
<br />
<h3 style='display: inline' id='neat-summary-of-new-perl-features-per-'>Neat summary of new <span class='inlinecode'>#Perl</span> features per ...</h3><br />
<br />
<span>Neat summary of new <span class='inlinecode'>#Perl</span> features per release</span><br />
<br />
<a class='textlink' href='https://sheet.shiar.nl/perl'>sheet.shiar.nl/perl</a><br />
<br />
<h3 style='display: inline' id='errorsas-checks-for-the-error-type-whereas-'>errors.As() checks for the error type, whereas ...</h3><br />
<br />
<span>errors.As() checks for the error type, whereas errors.Is() checks for the exact error value. Interesting read about Errors in <span class='inlinecode'>#golang</span> - and there is also a cat meme in the middle of the blog post! And then, it continues with pointers to pointers to error values or how about a pointer to an empty interface?</span><br />
<br />
<a class='textlink' href='https://adrianlarion.com/golang-error-handling-demystified-errors-is-errors-as-errors-unwrap-custom-errors-and-more/'>adrianlarion.com/golang-error-handling..-..-errors-unwrap-custom-errors-and-more/</a><br />
<br />
<h3 style='display: inline' id='good-stuff-10-years-of-functional-options-and-'>Good stuff: 10 years of functional options and ...</h3><br />
<br />
<span>Good stuff: 10 years of functional options and key lessons learned along the way <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://www.bytesizego.com/blog/10-years-functional-options-golang'>www.bytesizego.com/blog/10-years-functional-options-golang</a><br />
<br />
<h3 style='display: inline' id='i-had-some-fun-with-freebsd-bhyve-and-'>I had some fun with <span class='inlinecode'>#FreeBSD</span>, <span class='inlinecode'>#Bhyve</span> and ...</h3><br />
<br />
<span>I had some fun with <span class='inlinecode'>#FreeBSD</span>, <span class='inlinecode'>#Bhyve</span> and <span class='inlinecode'>#Rocky</span> <span class='inlinecode'>#Linux</span>. Not just for fun, also for science and profit! <span class='inlinecode'>#homelab</span> <span class='inlinecode'>#selfhosting</span> <span class='inlinecode'>#self</span>-hosting</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi'>foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html</a><br />
<br />
<h3 style='display: inline' id='the-moment-your-blog-receives-prs-for-typo-'>The moment your blog receives PRs for typo ...</h3><br />
<br />
<span>The moment your blog receives PRs for typo corrections, you notice that people actually read and care about your stuff :-) <span class='inlinecode'>#blog</span> <span class='inlinecode'>#personal</span> <span class='inlinecode'>#tech</span></span><br />
<br />
<h3 style='display: inline' id='one-thing-not-mentioned-is-that-openrsync-s-'>One thing not mentioned is that <span class='inlinecode'>#OpenRsync</span>&#39;s ...</h3><br />
<br />
<span>One thing not mentioned is that <span class='inlinecode'>#OpenRsync</span>&#39;s origin is the <span class='inlinecode'>#OpenBSD</span> project (at least as far as I am aware! Correct me if I am wrong :-) )! <span class='inlinecode'>#openbsd</span> <span class='inlinecode'>#rsync</span> <span class='inlinecode'>#macos</span> <span class='inlinecode'>#openrsync</span></span><br />
<br />
<a class='textlink' href='https://derflounder.wordpress.com/2025/04/06/rsync-replaced-with-openrsync-on-macos-sequoia/'>derflounder.wordpress.com/2025/04/06/r..-..laced-with-openrsync-on-macos-sequoia/</a><br />
<br />
<h3 style='display: inline' id='this-is-an-interesting-elixir-pipes-operator-'>This is an interesting <span class='inlinecode'>#Elixir</span> pipes operator ...</h3><br />
<br />
<span>This is an interesting <span class='inlinecode'>#Elixir</span> pipes operator experiment in <span class='inlinecode'>#Ruby</span>. <span class='inlinecode'>#Python</span> has also been experimenting with such an operator. Raku (not mentioned in the linked article) already has the <span class='inlinecode'>==&gt;</span> sequence operator, of course (which can also be used backwards as <span class='inlinecode'>&lt;==</span> - who would have doubted? :-) ). <span class='inlinecode'>#syntax</span> <span class='inlinecode'>#codegolf</span> <span class='inlinecode'>#fun</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#RakuLang</span></span><br />
<br />
<a class='textlink' href='https://zverok.space/blog/2024-11-16-elixir-pipes.html'>zverok.space/blog/2024-11-16-elixir-pipes.html</a><br />
<br />
<h3 style='display: inline' id='the-story-of-how-my-favorite-golang-book-was-'>The story of how my favorite <span class='inlinecode'>#Golang</span> book was ...</h3><br />
<br />
<span>The story of how my favorite <span class='inlinecode'>#Golang</span> book was written:</span><br />
<br />
<a class='textlink' href='https://www.thecoder.cafe/p/100-go-mistakes'>www.thecoder.cafe/p/100-go-mistakes</a><br />
<br />
<h3 style='display: inline' id='these-are-my-personal-book-notes-from-daniel-'>These are my personal book notes from Daniel ...</h3><br />
<br />
<span>These are my personal book notes from Daniel Pink&#39;s "When: The Scientific Secrets of Perfect Timing." The notes are for me (to improve happiness and productivity). You still need to read the whole book to get your own insights, but maybe the notes will be useful for you as well. <span class='inlinecode'>#blog</span> <span class='inlinecode'>#book</span> <span class='inlinecode'>#booknotes</span> <span class='inlinecode'>#productivity</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-04-19-when-book-notes.gmi'>foo.zone/gemfeed/2025-04-19-when-book-notes.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-04-19-when-book-notes.html'>foo.zone/gemfeed/2025-04-19-when-book-notes.html</a><br />
<br />
<h3 style='display: inline' id='i-certainly-learned-a-lot-reading-this-llm-'>I certainly learned a lot reading this <span class='inlinecode'>#llm</span> ...</h3><br />
<br />
<span>I certainly learned a lot reading this article: <span class='inlinecode'>#llm</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://simonwillison.net/2025/Mar/11/using-llms-for-code/'>simonwillison.net/2025/Mar/11/using-llms-for-code/</a><br />
<br />
<h3 style='display: inline' id='writing-indempotent-bash-scripts-'>Writing idempotent <span class='inlinecode'>#Bash</span> scripts ...</h3><br />
<br />
<span>Writing idempotent <span class='inlinecode'>#Bash</span> scripts</span><br />
<br />
<a class='textlink' href='https://arslan.io/2019/07/03/how-to-write-idempotent-bash-scripts/'>arslan.io/2019/07/03/how-to-write-idempotent-bash-scripts/</a><br />
<br />
<h3 style='display: inline' id='regarding-ai-for-code-generation-you-should-'>Regarding <span class='inlinecode'>#AI</span> for code generation: you should ...</h3><br />
<br />
<span>Regarding <span class='inlinecode'>#AI</span> for code generation: you should be at least a bit curious and experiment a bit. You don&#39;t have to use it if you don&#39;t find it fit for purpose.</span><br />
<br />
<a class='textlink' href='https://registerspill.thorstenball.com/p/they-all-use-it?publication_id=1543843&amp;post_id=151910861&amp;isFreemail=true&amp;r=2n9ive&amp;triedRedirect=true'>registerspill.thorstenball.com/p/they-..-..email=true&amp;r=2n9ive&amp;triedRedirect=true</a><br />
<br />
<h3 style='display: inline' id='i-like-the-rocky-metaphor-and-this-post-also-'>I like the Rocky metaphor. And this post also ...</h3><br />
<br />
<span>I like the Rocky metaphor. And this post also reflects my thoughts on coding. <span class='inlinecode'>#llm</span> <span class='inlinecode'>#ai</span> <span class='inlinecode'>#software</span></span><br />
<br />
<a class='textlink' href='https://cekrem.github.io/posts/coding-as-craft-going-back-to-the-old-gym/'>cekrem.github.io/posts/coding-as-craft-going-back-to-the-old-gym/</a><br />
<br />
<h2 style='display: inline' id='may-2025'>May 2025</h2><br />
<br />
<h3 style='display: inline' id='there-s-now-also-a-fish-shell-edition-of-my-'>There&#39;s now also a <span class='inlinecode'>#Fish</span> shell edition of my ...</h3><br />
<br />
<span>There&#39;s now also a <span class='inlinecode'>#Fish</span> shell edition of my <span class='inlinecode'>#tmux</span> helper scripts: <span class='inlinecode'>#fishshell</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-05-02-terminal-multiplexing-with-tmux-fish-edition.gmi'>foo.zone/gemfeed/2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html'>foo.zone/gemfeed/2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html</a><br />
<br />
<h3 style='display: inline' id='i-loved-this-talk-it-s-about-how-you-can-'>I loved this talk. It&#39;s about how you can ...</h3><br />
<br />
<span>I loved this talk. It&#39;s about how you can create your own <span class='inlinecode'>#Linux</span> <span class='inlinecode'>#container</span> in less than 100 lines of shell code, without Docker, Podman and co. - Why is this talk useful? If you understand how <span class='inlinecode'>#containers</span> work "under the hood," it becomes easier to make design decisions, write your own tools, or debug production systems. I also recommend his training courses; I attended one of them myself.</span><br />
<br />
<a class='textlink' href='https://www.youtube.com/watch?v=4RUiVAlJE2w'>www.youtube.com/watch?v=4RUiVAlJE2w</a><br />
<br />
<h3 style='display: inline' id='some-unexpected-golang-stuff-ppl-say-that-'>Some unexpected <span class='inlinecode'>#golang</span> stuff. People say that ...</h3><br />
<br />
<span>Some unexpected <span class='inlinecode'>#golang</span> stuff. People say that Go is a simple language; IMHO, the devil is in the details.</span><br />
<br />
<a class='textlink' href='https://unexpected-go.com/'>unexpected-go.com/</a><br />
<br />
<h3 style='display: inline' id='with-the-advent-of-ai-and-llms-i-have-observed-'>With the advent of AI and LLMs, I have observed ...</h3><br />
<br />
<span>With the advent of AI and LLMs, I have observed that being able to type quickly has become even more important for engineers. Previously, fast typing wasn&#39;t as crucial when coding, as most of the time was spent thinking or navigating through the code. However, with LLMs, you find yourself typing much more frequently. That&#39;s an unexpected personal win for me, as I recently learned fast touch typing: <span class='inlinecode'>#llm</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-08-05-typing-127.1-words-per-minute.gmi'>foo.zone/gemfeed/2024-08-05-typing-127.1-words-per-minute.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-08-05-typing-127.1-words-per-minute.html'>foo.zone/gemfeed/2024-08-05-typing-127.1-words-per-minute.html</a><br />
<br />
<h3 style='display: inline' id='for-science-fun-and-profit-i-set-up-a-'>For science, fun and profit, I set up a ...</h3><br />
<br />
<span>For science, fun and profit, I set up a <span class='inlinecode'>#WireGuard</span> mesh network for my <span class='inlinecode'>#FreeBSD</span>, <span class='inlinecode'>#OpenBSD</span>, <span class='inlinecode'>#RockyLinux</span> and <span class='inlinecode'>#Kubernetes</span> <span class='inlinecode'>#homelab</span>. There&#39;s also a mesh generator, which I wrote in <span class='inlinecode'>#Ruby</span>. <span class='inlinecode'>#k3s</span> <span class='inlinecode'>#linux</span> <span class='inlinecode'>#k8s</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.gmi'>foo.zone/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>foo.zone/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html</a><br />
<br />
<h3 style='display: inline' id='ever-wondered-about-the-hung-task-linux-'>Ever wondered about the hung task Linux ...</h3><br />
<br />
<span>Ever wondered about the hung task Linux messages on a busy server? Every case is unique, and there is no standard approach to debug them, but here it gets a bit demystified: <span class='inlinecode'>#linux</span> <span class='inlinecode'>#kernel</span></span><br />
<br />
<a class='textlink' href='https://blog.cloudflare.com/searching-for-the-cause-of-hung-tasks-in-the-linux-kernel/'>blog.cloudflare.com/searching-for-the-cause-of-hung-tasks-in-the-linux-kernel/</a><br />
<br />
<h3 style='display: inline' id='a-bit-of-fun-the-fortran-hating-gateway--'>A bit of <span class='inlinecode'>#fun</span>: The FORTRAN hating gateway ― ...</h3><br />
<br />
<span>A bit of <span class='inlinecode'>#fun</span>: The FORTRAN hating gateway ― Andreas Zwinkau</span><br />
<br />
<a class='textlink' href='https://beza1e1.tuxen.de/lore/fortran_hating_gateway.html'>beza1e1.tuxen.de/lore/fortran_hating_gateway.html</a><br />
<br />
<h3 style='display: inline' id='so-golang-was-invented-while-engineers-at-'>So, Golang was invented while engineers at ...</h3><br />
<br />
<span>So, Golang was invented while engineers at Google waited for C++ to compile. Here I am, waiting a long time for Java to compile...</span><br />
<br />
<h3 style='display: inline' id='i-couldn-t-do-without-here-docs-if-they-did-'>I couldn&#39;t do without here-docs. If they did ...</h3><br />
<br />
<span>I couldn&#39;t do without here-docs. If they did not exist, I would need to find another field and pursue a career there. <span class='inlinecode'>#bash</span> <span class='inlinecode'>#sh</span> <span class='inlinecode'>#shell</span></span><br />
<br />
<a class='textlink' href='https://rednafi.com/misc/heredoc_headache/'>rednafi.com/misc/heredoc_headache/</a><br />
<br />
<h3 style='display: inline' id='i-started-using-computers-as-a-kid-on-ms-dos-'>I started using computers as a kid on MS-DOS ...</h3><br />
<br />
<span>I started using computers as a kid on MS-DOS and mainly used Norton Commander to navigate the file system in order to start games. Later, I became more interested in computing in general and switched to Linux, but there was no NC. However, there was GNU Midnight Commander, which I still use regularly to this day. It&#39;s absolutely worth checking out, even in the modern day. <span class='inlinecode'>#tools</span> <span class='inlinecode'>#opensource</span></span><br />
<br />
<a class='textlink' href='https://en.wikipedia.org/wiki/Midnight_Commander'>en.wikipedia.org/wiki/Midnight_Commander</a><br />
<br />
<h3 style='display: inline' id='thats-interesting-running-android-in-'>That&#39;s interesting: running <span class='inlinecode'>#Android</span> in ...</h3><br />
<br />
<span>That&#39;s interesting: running <span class='inlinecode'>#Android</span> in <span class='inlinecode'>#Kubernetes</span></span><br />
<br />
<a class='textlink' href='https://ku.bz/Gs4-wpK5h'>ku.bz/Gs4-wpK5h</a><br />
<br />
<h3 style='display: inline' id='before-wiping-the-pre-installed-windows-11-'>Before wiping the pre-installed <span class='inlinecode'>#Windows</span> 11 ...</h3><br />
<br />
<span>Before wiping the pre-installed <span class='inlinecode'>#Windows</span> 11 Pro on my new Beelink mini PC, I tested <span class='inlinecode'>#WSL2</span> with <span class='inlinecode'>#Fedora</span> <span class='inlinecode'>#Linux</span>. I compiled my pet project, I/O Riot NG (ior), which requires many system libraries, including <span class='inlinecode'>#BPF</span>. I&#39;m impressed: everything works just like on native Fedora, and my tool runs and traces I/O syscalls with BPF out of the box. I might now prefer Windows over macOS if I had to choose between those two for work.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/ior'>codeberg.org/snonux/ior</a><br />
<br />
<h3 style='display: inline' id='some-might-hate-me-saying-this-but-didnt-'>Some might hate me saying this, but didn&#39;t ...</h3><br />
<br />
<span>Some might hate me saying this, but didn&#39;t <span class='inlinecode'>#systemd</span> solve the problem of a shared /tmp directory by introducing PrivateTmp? But yes, why did it have to go that way...</span><br />
<br />
<a class='textlink' href='https://www.osnews.com/story/140968/tmp-should-not-exist/'>www.osnews.com/story/140968/tmp-should-not-exist/</a><br />
<br />
<h3 style='display: inline' id='wouldn-t-still-do-that-even-with-100-test-'>I still wouldn&#39;t do that, even with 100% test ...</h3><br />
<br />
<span>I still wouldn&#39;t do that, even with 100% test coverage, LT and integration tests, unless there&#39;s an exception the business relies on <span class='inlinecode'>#sre</span></span><br />
<br />
<a class='textlink' href='https://medium.com/openclassrooms-product-design-and-engineering/do-not-deploy-on-friday-92b1b46ebfe6'>medium.com/openclassrooms-product-desi..-..g/do-not-deploy-on-friday-92b1b46ebfe6</a><br />
<br />
<h3 style='display: inline' id='some-neat-slice-tricks-for-go-golang-'>Some neat slice tricks for Go: <span class='inlinecode'>#golang</span> ...</h3><br />
<br />
<span>Some neat slice tricks for Go: <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://blog.devtrovert.com/p/12-slice-tricks-to-enhance-your-go'>blog.devtrovert.com/p/12-slice-tricks-to-enhance-your-go</a><br />
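<span>One classic trick of this kind, sketched as my own minimal example (not necessarily one from the linked article): filtering a slice in place by reusing its backing array, which avoids allocating a second slice.</span><br />

```go
package main

import "fmt"

func main() {
	// In-place filter: out shares the backing array with s,
	// so no new allocation happens while keeping the even values.
	s := []int{1, 2, 3, 4, 5, 6}
	out := s[:0]
	for _, v := range s {
		if v%2 == 0 {
			out = append(out, v)
		}
	}
	fmt.Println(out) // [2 4 6]
}
```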
<br />
<h3 style='display: inline' id='i-understand-that-kubernetes-is-not-for-'>I understand that Kubernetes is not for ...</h3><br />
<br />
<span>I understand that Kubernetes is not for everyone, but it still seems to be the new default for everything newly built. Despite the fact that Kubernetes is complex to maintain and use, there is still a lot of SRE/DevOps talent out there who have it on their CVs, which contributes significantly to the supportability of the infrastructure and the applications running on it. This way, you don&#39;t have to teach every new engineer your "own way" of doing infrastructure. It&#39;s like a standard language of infrastructure that many people speak. However, Kubernetes should not be the default solution for everything, in my opinion. <span class='inlinecode'>#kubernetes</span> <span class='inlinecode'>#k8s</span></span><br />
<br />
<a class='textlink' href='https://www.gitpod.io/blog/we-are-leaving-kubernetes'>www.gitpod.io/blog/we-are-leaving-kubernetes</a><br />
<br />
<h2 style='display: inline' id='june-2025'>June 2025</h2><br />
<br />
<h3 style='display: inline' id='some-great-advices-will-try-out-some-of-them-'>Some great advice; I will try out some of it! ...</h3><br />
<br />
<span>Some great advice; I will try out some of it! <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://endler.dev/2025/best-programmers/'>endler.dev/2025/best-programmers/</a><br />
<br />
<h3 style='display: inline' id='in-golang-values-are-actually-copied-when-'>In <span class='inlinecode'>#Golang</span>, values are actually copied when ...</h3><br />
<br />
<span>In <span class='inlinecode'>#Golang</span>, values are actually copied when assigned (boxed) into an interface. That can have a performance impact.</span><br />
<br />
<a class='textlink' href='https://goperf.dev/01-common-patterns/interface-boxing/'>goperf.dev/01-common-patterns/interface-boxing/</a><br />
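<span>A small sketch of my own (not taken from the linked article) showing what that copying means in practice: assigning a struct to an interface stores a copy, so later mutations of the original are not visible through the interface value.</span><br />

```go
package main

import "fmt"

type counter struct{ n int }

func main() {
	c := counter{n: 1}
	var i interface{} = c // c is copied (boxed) into the interface value
	c.n = 42              // mutating the original does not touch the boxed copy
	fmt.Println(i.(counter).n) // still 1
}
```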
<br />
<h3 style='display: inline' id='this-is-a-great-little-tutorial-for-searching-'>This is a great little tutorial for searching ...</h3><br />
<br />
<span>This is a great little tutorial for searching in the <span class='inlinecode'>#HelixEditor</span> <span class='inlinecode'>#editor</span> <span class='inlinecode'>#coding</span></span><br />
<br />
<a class='textlink' href='https://helix-editor-tutorials.com/tutorials/using-helix-global-search/'>helix-editor-tutorials.com/tutorials/using-helix-global-search/</a><br />
<br />
<h3 style='display: inline' id='the-mov-instruction-of-a-cpu-is-turing-'>The mov instruction of a CPU is Turing ...</h3><br />
<br />
<span>The mov instruction of a CPU is Turing complete. And there&#39;s an implementation of <span class='inlinecode'>#Doom</span> using only mov; it renders one frame every 7 hours! <span class='inlinecode'>#fun</span></span><br />
<br />
<a class='textlink' href='https://beza1e1.tuxen.de/articles/accidentally_turing_complete.html'>beza1e1.tuxen.de/articles/accidentally_turing_complete.html</a><br />
<br />
<h3 style='display: inline' id='i-removed-the-social-media-profile-from-my-'>I removed the social media profile from my ...</h3><br />
<br />
<span>I removed the social media profile from my GrapheneOS phone. Originally, I created a separate profile just for social media to avoid using it too often. But I noticed that I switched to it too frequently. Not having social media within reach is probably the best option. <span class='inlinecode'>#socialmedia</span> <span class='inlinecode'>#sm</span> <span class='inlinecode'>#distractions</span></span><br />
<br />
<h3 style='display: inline' id='so-want-a-real-recent-unix-use-aix-macos-'>So, you want a "real" recent UNIX? Use AIX! <span class='inlinecode'>#macos</span> ...</h3><br />
<br />
<span>So, you want a "real" recent UNIX? Use AIX! <span class='inlinecode'>#macos</span> <span class='inlinecode'>#unix</span> <span class='inlinecode'>#aix</span></span><br />
<br />
<a class='textlink' href='https://www.osnews.com/story/141633/apples-macos-unix-certification-is-a-lie/'>www.osnews.com/story/141633/apples-macos-unix-certification-is-a-lie/</a><br />
<br />
<h3 style='display: inline' id='this-episode-i-think-is-kind-of-an-eye-opener-'>This episode, I think, is kind of an eye-opener ...</h3><br />
<br />
<span>This episode, I think, is kind of an eye-opener for me personally. I knew that AI is here to stay, but you&#39;d better start playing with it on your pet projects now; otherwise, your performance reviews will be awkward in a year or two, when you are expected to use AI in your daily work. <span class='inlinecode'>#ai</span> <span class='inlinecode'>#llm</span> <span class='inlinecode'>#coding</span> <span class='inlinecode'>#programming</span></span><br />
<br />
<a class='textlink' href='https://changelog.com/friends/96'>changelog.com/friends/96</a><br />
<br />
<h3 style='display: inline' id='my-openbsd-blog-setup-got-mentioned-in-the-'>My <span class='inlinecode'>#OpenBSD</span> blog setup got mentioned in the ...</h3><br />
<br />
<span>My <span class='inlinecode'>#OpenBSD</span> blog setup got mentioned in the BSDNow.tv Podcast (In the Feedback section) :-) <span class='inlinecode'>#BSD</span> <span class='inlinecode'>#podcast</span> <span class='inlinecode'>#runbsd</span></span><br />
<br />
<a class='textlink' href='https://www.bsdnow.tv/614'>www.bsdnow.tv/614</a><br />
<br />
<h3 style='display: inline' id='golang-is-the-best-when-it-comes-to-agentic-'><span class='inlinecode'>#Golang</span> is the best when it comes to agentic ...</h3><br />
<br />
<span><span class='inlinecode'>#Golang</span> is the best when it comes to agentic coding: <span class='inlinecode'>#llm</span></span><br />
<br />
<a class='textlink' href='https://lucumr.pocoo.org/2025/6/12/agentic-coding/'>lucumr.pocoo.org/2025/6/12/agentic-coding/</a><br />
<br />
<h3 style='display: inline' id='where-zsh-is-better-than-bash-'>Where <span class='inlinecode'>#zsh</span> is better than <span class='inlinecode'>#bash</span> ...</h3><br />
<br />
<span>Where <span class='inlinecode'>#zsh</span> is better than <span class='inlinecode'>#bash</span></span><br />
<br />
<a class='textlink' href='https://www.arp242.net/why-zsh.html'>www.arp242.net/why-zsh.html</a><br />
<br />
<h3 style='display: inline' id='i-really-enjoyed-this-talk-about-obscure-go-'>I really enjoyed this talk about obscure Go ...</h3><br />
<br />
<span>I really enjoyed this talk about obscure Go optimizations. None of it is really standard and can change from one version of Go to another, though. <span class='inlinecode'>#golang</span> <span class='inlinecode'>#talk</span></span><br />
<br />
<a class='textlink' href='https://www.youtube.com/watch?v=rRtihWOcaLI'>www.youtube.com/watch?v=rRtihWOcaLI</a><br />
<br />
<h3 style='display: inline' id='commenting-your-regular-expression-is-generally-'>Commenting your regular expression is generally ...</h3><br />
<br />
<span>Commenting your regular expressions is generally good advice! It works as described in the article not just in <span class='inlinecode'>#Ruby</span>, but also in <span class='inlinecode'>#Perl</span> (@Perl), <span class='inlinecode'>#RakuLang</span>, ...</span><br />
<br />
<a class='textlink' href='https://thoughtbot.com/blog/comment-your-regular-expressions'>thoughtbot.com/blog/comment-your-regular-expressions</a><br />
<br />
<h3 style='display: inline' id='you-have-to-make-a-decision-for-yourself-but-'>You have to make a decision for yourself, but ...</h3><br />
<br />
<span>You have to make a decision for yourself, but generally, work smarter (and faster—but keep the quality)! About 40 hours <span class='inlinecode'>#productivity</span> <span class='inlinecode'>#work</span> <span class='inlinecode'>#workload</span></span><br />
<br />
<a class='textlink' href='https://thesquareplanet.com/blog/about-40-hours/'>thesquareplanet.com/blog/about-40-hours/</a><br />
<br />
<h3 style='display: inline' id='100-go-mistakes-and-how-to-avoid-them-is-one-'>"100 Go Mistakes and How to Avoid Them" is one ...</h3><br />
<br />
<span>"100 Go Mistakes and How to Avoid Them" is one of my favorite <span class='inlinecode'>#Golang</span> books. Julia Evans also stumbled across some issues she&#39;d learned from this book. The book itself is an absolute must for every Gopher (or someone who wants to become one!)</span><br />
<br />
<a class='textlink' href='https://jvns.ca/blog/2024/08/06/go-structs-copied-on-assignment/'>jvns.ca/blog/2024/08/06/go-structs-copied-on-assignment/</a><br />
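<span>The same copy-on-assignment semantics also bite in range loops; here is a small illustration of mine (hypothetical struct and field names, not code from the book or the post):</span><br />

```go
package main

import "fmt"

type item struct {
	name string
	done bool
}

func main() {
	items := []item{{name: "write"}, {name: "edit"}}

	// Gotcha: the loop variable v is a copy of each element,
	// so mutating it does not change the slice.
	for _, v := range items {
		v.done = true
	}
	fmt.Println(items[0].done) // false

	// Mutate through the index (or a pointer) instead.
	for i := range items {
		items[i].done = true
	}
	fmt.Println(items[0].done) // true
}
```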
<br />
<h3 style='display: inline' id='the-ruby-data-class-seems-quite-helpful-'>The <span class='inlinecode'>#Ruby</span> Data class seems quite helpful ...</h3><br />
<br />
<span>The <span class='inlinecode'>#Ruby</span> Data class seems quite helpful</span><br />
<br />
<a class='textlink' href='https://allaboutcoding.ghinda.com/example-of-value-objects-using-rubys-data-class'>allaboutcoding.ghinda.com/example-of-value-objects-using-rubys-data-class</a><br />
<br />
<span>Other related posts:</span><br />
<br />
<a class='textlink' href='./2025-01-01-posts-from-october-to-december-2024.html'>2025-01-01 Posts from October to December 2024</a><br />
<a class='textlink' href='./2025-07-01-posts-from-january-to-june-2025.html'>2025-07-01 Posts from January to June 2025 (You are currently reading this)</a><br />
<a class='textlink' href='./2026-01-01-posts-from-july-to-december-2025.html'>2026-01-01 Posts from July to December 2025</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Task Samurai: An agentic coding learning experiment</title>
        <link href="https://foo.zone/gemfeed/2025-06-22-task-samurai.html" />
        <id>https://foo.zone/gemfeed/2025-06-22-task-samurai.html</id>
        <updated>2025-06-22T20:00:51+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Task Samurai is a fast terminal interface for Taskwarrior written in Go using the Bubble Tea framework. It displays your tasks in a table and allows you to manage them without leaving your keyboard.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='task-samurai-an-agentic-coding-learning-experiment'>Task Samurai: An agentic coding learning experiment</h1><br />
<br />
<span class='quote'>Published at 2025-06-22T20:00:51+03:00</span><br />
<br />
<a href='./task-samurai/logo.png'><img alt='Task Samurai Logo' title='Task Samurai Logo' src='./task-samurai/logo.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#task-samurai-an-agentic-coding-learning-experiment'>Task Samurai: An agentic coding learning experiment</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ ⇢ <a href='#why-does-this-exist'>Why does this exist?</a></li>
<li>⇢ ⇢ <a href='#how-it-works'>How it works</a></li>
<li>⇢ <a href='#where-and-how-to-get-it'>Where and how to get it</a></li>
<li>⇢ <a href='#lessons-learned-from-building-task-samurai-with-agentic-coding'>Lessons learned from building Task Samurai with agentic coding</a></li>
<li>⇢ ⇢ <a href='#developer-workflow'>Developer workflow</a></li>
<li>⇢ ⇢ <a href='#how-it-went'>How it went</a></li>
<li>⇢ ⇢ <a href='#what-went-wrong'>What went wrong</a></li>
<li>⇢ ⇢ <a href='#patterns-that-helped'>Patterns that helped</a></li>
<li>⇢ ⇢ <a href='#what-i-learned-using-agentic-coding'>What I learned using agentic coding</a></li>
<li>⇢ ⇢ <a href='#how-much-time-did-i-save'>How much time did I save?</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>Task Samurai is a fast terminal interface for Taskwarrior written in Go using the Bubble Tea framework. It displays your tasks in a table and allows you to manage them without leaving your keyboard.</span><br />
<br />
<a class='textlink' href='https://taskwarrior.org'>https://taskwarrior.org</a><br />
<a class='textlink' href='https://github.com/charmbracelet/bubbletea'>https://github.com/charmbracelet/bubbletea</a><br />
<br />
<h3 style='display: inline' id='why-does-this-exist'>Why does this exist?</h3><br />
<br />
<span>I wanted to tinker with agentic coding. This project was implemented entirely using OpenAI Codex. (After this blog post was published, I also used the Claude Code CLI.)</span><br />
<br />
<ul>
<li>I wanted a faster UI for Taskwarrior than other options, like Vit, which is Python-based.</li>
<li>I wanted something built with Bubble Tea, but I never had time to dive deep into it.</li>
<li>I wanted to build a toy project (like Task Samurai) first, before tackling the big ones, to get started with agentic coding.</li>
</ul><br />
<a class='textlink' href='https://openai.com/codex/'>https://openai.com/codex/</a><br />
<br />
<span>I&#39;ve been curious about agentic coding for a while and wanted to see what it&#39;s actually like to build something with it. So I gave it a go (no pun intended).</span><br />
<br />
<h3 style='display: inline' id='how-it-works'>How it works</h3><br />
<br />
<span>Task Samurai invokes the <span class='inlinecode'>task</span> command (that&#39;s the original Taskwarrior CLI command) to read and modify tasks. The tasks are displayed in a Bubble Tea table, where each row represents a task. Hotkeys trigger Taskwarrior commands such as starting, completing or annotating tasks. The UI refreshes automatically after each action, so the table is always up to date.</span><br />
<br />
<a href='./task-samurai/screenshot.png'><img alt='Task Samurai Screenshot' title='Task Samurai Screenshot' src='./task-samurai/screenshot.png' /></a><br />
<br />
<h2 style='display: inline' id='where-and-how-to-get-it'>Where and how to get it</h2><br />
<br />
<span>Go to:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/tasksamurai'>https://codeberg.org/snonux/tasksamurai</a><br />
<br />
<span>And follow the <span class='inlinecode'>README.md</span>!</span><br />
<br />
<h2 style='display: inline' id='lessons-learned-from-building-task-samurai-with-agentic-coding'>Lessons learned from building Task Samurai with agentic coding</h2><br />
<br />
<h3 style='display: inline' id='developer-workflow'>Developer workflow</h3><br />
<br />
<span>I was trying out OpenAI Codex because I regularly run out of Claude Code CLI (another agentic coding tool I am currently trying out) credits (it still happens!), but Codex was still available to me. So, I took the opportunity to push agentic coding a bit further with another platform.</span><br />
<br />
<span>I didn&#39;t really love the web UI you have to use for Codex, as I usually live in the terminal. But this is all I have for Codex for now, and I thought I&#39;d give it a try regardless. The web UI is simple and pretty straightforward. There&#39;s also a Codex CLI one could use directly in the terminal, but I didn&#39;t get it working. I will try again soon.</span><br />
<br />
<span class='quote'>Update: Codex CLI now works for me, after OpenAI released a new version!</span><br />
<br />
<span>For every task given to Codex, it spins up its own container. From there, you can drill down and watch what it is doing. At the end, the result (in the form of a code diff) will be presented. From there, you can make suggestions about what else to change in the codebase. What I found inconvenient is that for every additional change, there&#39;s an overhead because Codex has to spin up a container and bootstrap the entire development environment again, which adds extra delay. That could be eliminated by setting up predefined custom containers, but that feature still seems somewhat limited.</span><br />
<br />
<span>Once satisfied, you can ask Codex to create a GitHub PR (too bad only GitHub is supported and no other Git hosting services); from there, you can merge it and then pull it to your local laptop or workstation to test the changes again. I found myself looping a lot between the Codex UI, GitHub PRs, and local checkouts.</span><br />
<br />
<h3 style='display: inline' id='how-it-went'>How it went</h3><br />
<br />
<span>Task Samurai&#39;s codebase came together quickly: the entire Git history spans from June 19 to 22, 2025, culminating in 179 commits:</span><br />
<br />
<ul>
<li>June 19: Scaffolded the Go boilerplate, set up tests, integrated the Bubble Tea UI framework, and got the first table views showing up.</li>
<li>June 20: (The big one—120 commits!) Added hotkeys, colourized tasks, annotation support, undo/redo, and, for fun, fireworks on quit (which never worked and was removed later). This is where most of the bugs, merges, and fast-paced changes happened.</li>
<li>June 21: Refined searching, theming, and column sizing and documented all those hotkeys. Numerous tweaks to make the UI cleaner and more user-friendly.</li>
<li>June 22: Final touches—added screenshots, polished the logo, fixed module paths… and then it was a wrap.</li>
</ul><br />
<span>Most big breakthroughs (and bug introductions) came during that middle day of intense iteration. The latter stages were all about smoothing out the rough edges.</span><br />
<br />
<span>It&#39;s worth noting that I worked on it in the evenings when I had some free time, as I also had to fit in my regular work and family commitments during the day. So, I didn&#39;t spend full working days on this project.</span><br />
<br />
<h3 style='display: inline' id='what-went-wrong'>What went wrong</h3><br />
<br />
<span>Going agentic isn&#39;t all smooth. Here are the hiccups I ran into, plus a few lessons:</span><br />
<br />
<ul>
<li>Merge floods: Every minor feature or fix lived on its own branch, so merging was a constant process. It kept progress flowing but also drowned the commit history in noise and the occasional conflict. I found this to be an issue with OpenAI&#39;s Codex in particular, not so much with other agentic coding tools like the Claude Code CLI (not covered in this blog post).</li>
<li>Fixes on fixes: Features like "fireworks on exit" had chains of "fix exit," "fix cell selection," etc. Sometimes, new additions introduced bugs that needed rapid patching.</li>
</ul><br />
<h3 style='display: inline' id='patterns-that-helped'>Patterns that helped</h3><br />
<br />
<span>Despite the chaos, a few strategies kept things moving:</span><br />
<br />
<ul>
<li>Scaffolding First: I started with the basic table UI and command wrappers, then layered on features—never the other way around.</li>
<li>Tiny PRs: Small, atomic merges meant feedback came fast (and so did fixes).</li>
<li>Tests Matter: A solid base of unit tests for task manipulations kept things from breaking entirely when experimenting.</li>
<li>Live Documentation: I updated the documentation, such as the README, regularly to reflect all the hotkey and feature changes.</li>
</ul><br />
<span>Maybe a better approach would have been to design the whole application from scratch before letting Codex do any of the coding. I will try that with my next toy project.</span><br />
<br />
<h3 style='display: inline' id='what-i-learned-using-agentic-coding'>What I learned using agentic coding</h3><br />
<br />
<span>Stepping into agentic coding with Codex as my "pair programmer" was a big shift. I learned a lot—not just about automating code generation, but also about how you have to tightly steer, guide, and audit every line as things move at high speed. I must admit, I sometimes lost track of what all the generated code was actually doing. But as the features seemed to work after a few iterations, I was satisfied—which is a bit concerning. Imagine if I approved a PR for a production-grade deployment without fully understanding what it was doing (and not a toy project like in this post).</span><br />
<br />
<h3 style='display: inline' id='how-much-time-did-i-save'>How much time did I save?</h3><br />
<br />
<span>Did it buy me speed? </span><br />
<br />
<ul>
<li>Say each commit takes Codex about 5 minutes to generate, much of it unattended; reviewing and guiding 179 commits added up to roughly <em>6 hours of active development</em>.</li>
<li>If I had coded it all myself, including all the bug fixes, features, design, and documentation, I might have spent <em>10–20 hours</em>.</li>
<li>That&#39;s a couple of days of potential savings, and I am by no means an expert in agentic coding, since this was my first completed agentic coding project.</li>
</ul><br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>Building Task Samurai with agentic coding was a wild ride: rapid feature growth, countless fast fixes, and more merge commits than I&#39;d expected. Keep the iterations short (or maybe in my next experiment, much larger, with a better and more complete design before generating a single line of code), keep tests and documentation concise, and review and refine for final polish at the end. Even with the bumps along the way, shipping a terminal UI in days instead of weeks is a neat little showcase of vibe coding.</span><br />
<br />
<span>Am I an agentic coding expert now? I don&#39;t think so. There are still many things to learn, and the landscape is constantly evolving.</span><br />
<br />
<span>While working on Task Samurai, there were times I missed manual coding and the satisfaction that comes from writing every line yourself, debugging issues manually, and crafting solutions from scratch. However, this is the direction in which the industry seems to be shifting, unfortunately. If applied correctly, AI will boost performance, and if you don&#39;t use AI, your next performance review may be awkward.</span><br />
<br />
<span>Personally, I am not sure whether I like where the industry is going with agentic coding. I love "traditional" coding, and with agentic coding you operate at a higher level and don&#39;t interact directly with code as often, which I would miss. I think that in the future, designing, reviewing, and being able to read and understand code will be more important than writing code by hand.</span><br />
<br />
<span>Do you have any thoughts on that? I hope I am at least partially wrong.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2025-08-05-local-coding-llm-with-ollama.html'>2025-08-05 Local LLM for Coding with Ollama on macOS</a><br />
<a class='textlink' href='./2025-06-22-task-samurai.html'>2025-06-22 Task Samurai: An agentic coding learning experiment (You are currently reading this)</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>'A Monk's Guide to Happiness' book notes</title>
        <link href="https://foo.zone/gemfeed/2025-06-07-a-monks-guide-to-happiness-book-notes.html" />
        <id>https://foo.zone/gemfeed/2025-06-07-a-monks-guide-to-happiness-book-notes.html</id>
        <updated>2025-06-07T10:30:11+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>These are my personal book notes from Gelong Thubten's 'A Monk's Guide to Happiness: Meditation in the 21st century.' They are for my own reference, but I hope they might be useful to you as well.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='a-monk-s-guide-to-happiness-book-notes'>"A Monk&#39;s Guide to Happiness" book notes</h1><br />
<br />
<span class='quote'>Published at 2025-06-07T10:30:11+03:00</span><br />
<br />
<span>These are my personal book notes from Gelong Thubten&#39;s "A Monk&#39;s Guide to Happiness: Meditation in the 21st century." They are for my own reference, but I hope they might be useful to you as well.</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#a-monk-s-guide-to-happiness-book-notes'>"A Monk&#39;s Guide to Happiness" book notes</a></li>
<li>⇢ <a href='#understanding-happiness'>Understanding Happiness</a></li>
<li>⇢ <a href='#the-role-of-meditation'>The Role of Meditation</a></li>
<li>⇢ <a href='#managing-thoughts-and-emotions'>Managing Thoughts and Emotions</a></li>
<li>⇢ <a href='#practice-and-discipline'>Practice and Discipline</a></li>
<li>⇢ <a href='#perspectives-on-relationships-and-interactions'>Perspectives on Relationships and Interactions</a></li>
<li>⇢ <a href='#reflective-questions'>Reflective Questions</a></li>
<li>⇢ <a href='#miscellaneous-guidelines'>Miscellaneous Guidelines</a></li>
</ul><br />
<h2 style='display: inline' id='understanding-happiness'>Understanding Happiness</h2><br />
<br />
<ul>
<li>Happiness is a skill we can train. </li>
<li>Happiness is not about accomplishing goals, as those always lie in the future. </li>
<li>Feel free now, without the urge to dwell on past or future. </li>
<li>We can learn to produce our own happiness independently of physical needs. When we walk in a park, how do we feel? We can train to reproduce that feeling independently. </li>
</ul><br />
<h2 style='display: inline' id='the-role-of-meditation'>The Role of Meditation</h2><br />
<br />
<ul>
<li>Meditation is not about clearing your mind. A busy mind does not interfere with your meditation.</li>
<li>Our problem is that we fail to notice our awareness. Meditation connects us with that awareness. Awareness is freedom.</li>
<li>We can let the mind be and not concern ourselves with the thoughts. This has benefits for your life and will protect you from all kinds of stress.</li>
<li>It is better to meditate with open eyes so you don&#39;t associate meditation with darkness. You will also be able to carry a meditative state of mind outside the meditation session.</li>
<li>Set a baseline meditation time to build up discipline.</li>
<li>We don&#39;t need to do anything about stress, just take a step back.</li>
</ul><br />
<h2 style='display: inline' id='managing-thoughts-and-emotions'>Managing Thoughts and Emotions</h2><br />
<br />
<ul>
<li>Our flow of emotions is really just habits. That can be changed through training, e.g., meditation training.</li>
<li>A part of the mind recognises that we are sad or angry. That part is not sad or angry by itself, obviously. So we can escape to that part of the mind, be the observer, and not draw in the constant flow of emotions and thoughts. </li>
<li>Leave the front and back doors of your house open, and let the thoughts come in and leave. Just don&#39;t serve them tea, as a great Zen master once said.</li>
<li>Thoughts are friends and not enemies. </li>
<li>Thoughts help the meditation, as they make us notice that we wandered off, and that noticing strengthens the practice.</li>
</ul><br />
<h2 style='display: inline' id='practice-and-discipline'>Practice and Discipline</h2><br />
<br />
<ul>
<li>Habits are important for practising mindfulness. Bring mindfulness into your daily routine.</li>
<li>Integrating short moments of mindfulness during the day is the fast track to happiness. Start off with small tasks, e.g. while washing your hands.</li>
<li>Have many small doses of mindfulness and don&#39;t prolong them, as otherwise your mind will revolt.</li>
<li>Have a small moment of mindfulness when you wake up and go to sleep.</li>
<li>Practice staying fully present in an uncomfortable situation and without judgement.</li>
<li>Don&#39;t become two persons who never meet: the meditator and the non-meditator. So integrate mindfulness during the day too.</li>
</ul><br />
<h2 style='display: inline' id='perspectives-on-relationships-and-interactions'>Perspectives on Relationships and Interactions</h2><br />
<br />
<ul>
<li>Who is the opponent? The other person, the things they said, or our reactions to them? Forgiveness is a high form of compassion.</li>
<li>Understand the suffering of the person who "hurt" us. Where is the aggressor really coming from?</li>
<li>People who are stressed or unhappy do and say things they wouldn&#39;t say or do otherwise. Acting under anger is like acting under the influence of alcohol.</li>
<li>People don&#39;t have a masterplan to destroy others, even if it seems so. They are themselves under strong negative influences. Something terrible happened to them. Revenge makes no sense.</li>
<li>Be grateful for people "trying" to hurt you as they help you to practice your path.</li>
</ul><br />
<h2 style='display: inline' id='reflective-questions'>Reflective Questions</h2><br />
<br />
<ul>
<li>Why do I do all the things I do? What do I try to achieve?</li>
<li>What am I doing about that? </li>
<li>Is it working?</li>
<li>What are the real causes of happiness and suffering?</li>
<li>What about meditation? How does that address the situation?</li>
</ul><br />
<h2 style='display: inline' id='miscellaneous-guidelines'>Miscellaneous Guidelines</h2><br />
<br />
<ul>
<li>Posture is important as the mind and body are connected.</li>
<li>Don&#39;t use music, so you don&#39;t rely on music to change your state of mind. The same applies to regular guided meditation: it is good for learning a technique, but you should not rely on another voice.</li>
<li>You are not trying to relax. Relaxing and trying are two different things.</li>
<li>When you love everything, even the bad things happening to you, then you are invincible.</li>
<li>Happiness is all in your mind. As if you flip a switch there.</li>
<li>Digging for answers will never end. It will always produce more material to dig through.</li>
</ul><br />
<span>If happiness is a matter of the mind, then free time is clearly best spent training your mind, e.g. with meditation or reflecting on its benefits, rather than always being busy with other things. All that we do in our free time is a search for happiness. Are the things we do actually working? There is always something around the corner...</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other book notes of mine are:</span><br />
<br />
<a class='textlink' href='./2025-11-02-the-courage-to-be-disliked-book-notes.html'>2025-11-02 &#39;The Courage To Be Disliked&#39; book notes</a><br />
<a class='textlink' href='./2025-06-07-a-monks-guide-to-happiness-book-notes.html'>2025-06-07 &#39;A Monk&#39;s Guide to Happiness&#39; book notes (You are currently reading this)</a><br />
<a class='textlink' href='./2025-04-19-when-book-notes.html'>2025-04-19 &#39;When: The Scientific Secrets of Perfect Timing&#39; book notes</a><br />
<a class='textlink' href='./2024-10-24-staff-engineer-book-notes.html'>2024-10-24 &#39;Staff Engineer&#39; book notes</a><br />
<a class='textlink' href='./2024-07-07-the-stoic-challenge-book-notes.html'>2024-07-07 &#39;The Stoic Challenge&#39; book notes</a><br />
<a class='textlink' href='./2024-05-01-slow-productivity-book-notes.html'>2024-05-01 &#39;Slow Productivity&#39; book notes</a><br />
<a class='textlink' href='./2023-11-11-mind-management-book-notes.html'>2023-11-11 &#39;Mind Management&#39; book notes</a><br />
<a class='textlink' href='./2023-07-17-career-guide-and-soft-skills-book-notes.html'>2023-07-17 &#39;Software Developers Career Guide and Soft Skills&#39; book notes</a><br />
<a class='textlink' href='./2023-05-06-the-obstacle-is-the-way-book-notes.html'>2023-05-06 &#39;The Obstacle is the Way&#39; book notes</a><br />
<a class='textlink' href='./2023-04-01-never-split-the-difference-book-notes.html'>2023-04-01 &#39;Never split the difference&#39; book notes</a><br />
<a class='textlink' href='./2023-03-16-the-pragmatic-programmer-book-notes.html'>2023-03-16 &#39;The Pragmatic Programmer&#39; book notes</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</title>
        <link href="https://foo.zone/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html" />
        <id>https://foo.zone/gemfeed/2025-05-11-f3s-kubernetes-with-freebsd-part-5.html</id>
        <updated>2026-01-15T19:30:46+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the fifth blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-5-wireguard-mesh-network'>f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</h1><br />
<br />
<span class='quote'>Published at 2025-05-11T11:35:57+03:00, last updated Thu 15 Jan 19:30:46 EET 2026</span><br />
<br />
<span>This is the fifth blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</span><br />
<br />
<span>I will post a new entry every month or so (there are too many other side projects for more frequent updates — I bet you can understand).</span><br />
<br />
<span class='quote'>This post has been updated to include two roaming clients (<span class='inlinecode'>earth</span> - Fedora laptop, <span class='inlinecode'>pixel7pro</span> - Android phone) that connect to the mesh via the internet gateways. The updated content is integrated throughout the post.</span><br />
<br />
<span>These are all the posts so far:</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<span class='quote'>ChatGPT generated logo.</span><br />
<br />
<span>Let&#39;s begin...</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-5-wireguard-mesh-network'>f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ ⇢ <a href='#expected-traffic-flow'>Expected traffic flow</a></li>
<li>⇢ <a href='#deciding-on-wireguard'>Deciding on WireGuard</a></li>
<li>⇢ <a href='#base-configuration'>Base configuration</a></li>
<li>⇢ ⇢ <a href='#freebsd'>FreeBSD</a></li>
<li>⇢ ⇢ <a href='#rocky-linux'>Rocky Linux</a></li>
<li>⇢ ⇢ <a href='#openbsd'>OpenBSD</a></li>
<li>⇢ <a href='#wireguard-configuration'>WireGuard configuration</a></li>
<li>⇢ ⇢ <a href='#example-wg0conf'>Example <span class='inlinecode'>wg0.conf</span></a></li>
<li>⇢ ⇢ <a href='#nat-traversal-and-keepalive'>NAT traversal and keepalive</a></li>
<li>⇢ ⇢ <a href='#preshared-key'>Preshared key</a></li>
<li>⇢ <a href='#mesh-network-generator'>Mesh network generator</a></li>
<li>⇢ ⇢ <a href='#wireguardmeshgeneratoryaml'><span class='inlinecode'>wireguardmeshgenerator.yaml</span></a></li>
<li>⇢ ⇢ <a href='#wireguardmeshgeneratorrb-overview'><span class='inlinecode'>wireguardmeshgenerator.rb</span> overview</a></li>
<li>⇢ <a href='#invoking-the-mesh-network-generator'>Invoking the mesh network generator</a></li>
<li>⇢ ⇢ <a href='#generating-the-wg0conf-files-and-keys'>Generating the <span class='inlinecode'>wg0.conf</span> files and keys</a></li>
<li>⇢ ⇢ <a href='#installing-the-wg0conf-files'>Installing the <span class='inlinecode'>wg0.conf</span> files</a></li>
<li>⇢ ⇢ <a href='#re-generating-mesh-and-installing-the-wg0conf-files-again'>Re-generating mesh and installing the <span class='inlinecode'>wg0.conf</span> files again</a></li>
<li>⇢ ⇢ <a href='#setting-up-roaming-clients'>Setting up roaming clients</a></li>
<li>⇢ <a href='#adding-ipv6-support-to-the-mesh'>Adding IPv6 support to the mesh</a></li>
<li>⇢ ⇢ <a href='#ipv6-addressing-scheme'>IPv6 addressing scheme</a></li>
<li>⇢ ⇢ <a href='#updating-the-mesh-generator-for-ipv6'>Updating the mesh generator for IPv6</a></li>
<li>⇢ ⇢ <a href='#ipv6-nat-on-openbsd-gateways'>IPv6 NAT on OpenBSD gateways</a></li>
<li>⇢ ⇢ <a href='#manual-openbsd-interface-configuration'>Manual OpenBSD interface configuration</a></li>
<li>⇢ ⇢ <a href='#verifying-dual-stack-connectivity'>Verifying dual-stack connectivity</a></li>
<li>⇢ ⇢ <a href='#benefits-of-dual-stack'>Benefits of dual-stack</a></li>
<li>⇢ <a href='#happy-wireguard-ing'>Happy WireGuard-ing</a></li>
<li>⇢ <a href='#managing-roaming-client-tunnels'>Managing Roaming Client Tunnels</a></li>
<li>⇢ ⇢ <a href='#manual-gateway-failover-configuration'>Manual gateway failover configuration</a></li>
<li>⇢ ⇢ <a href='#starting-and-stopping-on-earth-fedora-laptop'>Starting and stopping on earth (Fedora laptop)</a></li>
<li>⇢ ⇢ <a href='#starting-and-stopping-on-pixel7pro-android-phone'>Starting and stopping on pixel7pro (Android phone)</a></li>
<li>⇢ ⇢ <a href='#verifying-connectivity'>Verifying connectivity</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>By default, traffic within my home LAN, including traffic inside a k3s cluster, is not encrypted. While it resides in the "secure" home LAN, adopting a zero-trust policy means encryption is still preferable to ensure confidentiality and security. So we secure all traffic between the participating f3s hosts by building a mesh network:</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-5/wireguard-full-mesh-with-roaming.svg'><img alt='WireGuard mesh network topology' title='WireGuard mesh network topology' src='./f3s-kubernetes-with-freebsd-part-5/wireguard-full-mesh-with-roaming.svg' /></a><br />
<br />
<span>The mesh network consists of eight infrastructure hosts and two roaming clients:</span><br />
<br />
<span>Infrastructure hosts (full mesh):</span><br />
<br />
<ul>
<li><span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, and <span class='inlinecode'>f2</span> are the FreeBSD base hosts in my home LAN</li>
<li><span class='inlinecode'>r0</span>, <span class='inlinecode'>r1</span>, and <span class='inlinecode'>r2</span> are the Rocky Linux Bhyve VMs running on the FreeBSD hosts</li>
<li><span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span> are two OpenBSD systems running on the internet (as mentioned in the first blog of this series—these systems are already built; in fact, this very blog is served by those OpenBSD systems)</li>
</ul><br />
<span>Roaming clients (gateway-only connections):</span><br />
<br />
<ul>
<li><span class='inlinecode'>earth</span> is my Fedora laptop (192.168.2.200) which connects only to the internet gateways for remote access</li>
<li><span class='inlinecode'>pixel7pro</span> is my Android phone (192.168.2.201) which routes all traffic through the VPN when activated</li>
</ul><br />
<span>As we can see from the diagram, the eight infrastructure hosts form a true full-mesh network, where every host has a VPN tunnel to every other host. The benefit is that we do not need to route traffic through intermediate hosts (significantly simplifying the routing configuration). However, the downside is that there is some overhead in configuring and managing all the tunnels. The roaming clients take a simpler approach—they only connect to the two internet-facing gateways (<span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span>), which is sufficient for remote access and internet connectivity.</span><br />
<br />
<span>For simplicity, we also establish VPN tunnels between <span class='inlinecode'>f0 &lt;-&gt; r0</span>, <span class='inlinecode'>f1 &lt;-&gt; r1</span>, and <span class='inlinecode'>f2 &lt;-&gt; r2</span>. Technically, this wouldn&#39;t be strictly required since the VMs <span class='inlinecode'>rN</span> are running on the hosts <span class='inlinecode'>fN</span>, and no network traffic is leaving the box. However, it simplifies the configuration as we don&#39;t have to account for exceptions, and we are going to automate the mesh network configuration anyway (read on).</span><br />
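<span>As a back-of-the-envelope check, a full mesh of n hosts needs n*(n-1)/2 tunnels, so our eight infrastructure hosts need 28 of them. The following Ruby snippet is only a hypothetical sketch (not the actual mesh generator presented later in this post) that enumerates the peer pairs:</span><br />
<br />

```ruby
# Hypothetical sketch: enumerate the tunnels of a full mesh.
# Host names match this series; the code itself is illustrative only.
HOSTS = %w[f0 f1 f2 r0 r1 r2 blowfish fishfinger]

# Every unordered pair of hosts gets exactly one tunnel: n*(n-1)/2 in total.
tunnels = HOSTS.combination(2).to_a

puts "#{tunnels.size} tunnels" # 8 hosts, so 28 tunnels
tunnels.each { |a, b| puts "#{a} -- #{b}" }
```

<span>The quadratic growth of the tunnel count is exactly why automating the configuration (read on) pays off.</span><br />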
<br />
<h3 style='display: inline' id='expected-traffic-flow'>Expected traffic flow</h3><br />
<br />
<span>The traffic is expected to flow between the host groups through the mesh network as follows:</span><br />
<br />
<span>Infrastructure mesh traffic:</span><br />
<br />
<ul>
<li><span class='inlinecode'>fN &lt;-&gt; rN</span>: The traffic between the FreeBSD hosts and the Rocky Linux VMs will be routed through the VPN tunnels for persistent storage. In a later post in this series, we will set up an NFS server on the <span class='inlinecode'>fN</span> hosts.</li>
<li><span class='inlinecode'>fN &lt;-&gt; blowfish,fishfinger</span>: The traffic between the FreeBSD hosts and the OpenBSD hosts <span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span> will be routed through the VPN tunnels for management. We may want to log in via the internet to set things up remotely. The VPN tunnels will also be used for monitoring purposes.</li>
<li><span class='inlinecode'>rN &lt;-&gt; blowfish,fishfinger</span>: The traffic between the Rocky Linux VMs and the OpenBSD hosts <span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span> will be routed through the VPN tunnels for usage traffic. Since k3s will be running on the <span class='inlinecode'>rN</span> hosts, the OpenBSD servers will route the traffic through <span class='inlinecode'>relayd</span> to the services running in Kubernetes.</li>
<li><span class='inlinecode'>fN &lt;-&gt; fM</span>: The traffic between the FreeBSD hosts may be later used for data replication for the NFS storage.</li>
<li><span class='inlinecode'>rN &lt;-&gt; rM</span>: The traffic between the Rocky Linux VMs will later be used by the k3s cluster itself, as every <span class='inlinecode'>rN</span> will be a Kubernetes worker node.</li>
<li><span class='inlinecode'>blowfish &lt;-&gt; fishfinger</span>: The traffic between the OpenBSD hosts isn&#39;t strictly required for this setup, but I set it up anyway for future use cases.</li>
</ul><br />
<span>Roaming client traffic:</span><br />
<br />
<ul>
<li><span class='inlinecode'>earth,pixel7pro &lt;-&gt; blowfish,fishfinger</span>: The roaming clients connect exclusively to the two internet gateways. All traffic from these clients (0.0.0.0/0) is routed through the VPN, providing secure internet access and the ability to reach services running in the mesh (via the gateways). The gateways use NAT to allow roaming clients to access the internet using the gateway&#39;s public IP address. The roaming clients cannot be reached by the LAN hosts—they are client-only and initiate all connections.</li>
</ul><br />
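<span>To make the roaming setup concrete, here is a minimal sketch of what such a client config could look like. The keys, the endpoint hostname, and the port are placeholders, not my real values, and only one gateway can carry the <span class='inlinecode'>0.0.0.0/0</span> route at a time, hence the manual gateway failover described later in this post:</span><br />
<br />

```ini
# Sketch of a roaming client's wg0.conf (placeholder keys and endpoint).
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY_HERE
Address = 192.168.2.200/32

[Peer]
# One of the two internet gateways, e.g. blowfish
PublicKey = GATEWAY_PUBLIC_KEY_HERE
Endpoint = gateway.example.org:51820
# 0.0.0.0/0 sends all client traffic through the tunnel;
# the gateway then NATs it out via its public IP.
AllowedIPs = 0.0.0.0/0
# Keep the NAT mapping alive from behind home or mobile carrier NATs
PersistentKeepalive = 25
```
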
<span>We won&#39;t cover all the details here, as this post focuses only on setting up the mesh network itself. Subsequent posts in this series will cover the rest.</span><br />
<br />
<h2 style='display: inline' id='deciding-on-wireguard'>Deciding on WireGuard</h2><br />
<br />
<span>I have decided to use WireGuard as the VPN technology for this purpose.</span><br />
<br />
<span>WireGuard is a lightweight, modern, and secure VPN protocol designed for simplicity, speed, and strong cryptography. It is an excellent choice due to its minimal codebase, ease of configuration, high performance, and robust security, utilizing state-of-the-art encryption standards. WireGuard is supported on various operating systems, and its implementations are compatible with each other. Therefore, establishing WireGuard VPN tunnels between FreeBSD, Linux, and OpenBSD is seamless. This cross-platform availability makes it suitable for setups like the one described in this blog series.</span><br />
<br />
<span>We could have used Tailscale to easily set up and manage the WireGuard network, but the benefits of creating our own mesh network are:</span><br />
<br />
<ul>
<li>Learning about WireGuard configuration details</li>
<li>Have full control over the setup</li>
<li>Don&#39;t rely on an external provider like Tailscale (even if some of the components are open-source)</li>
<li>Have even more fun along the way</li>
<li>WireGuard is easy to configure on my target operating systems and, therefore, easier to maintain in the long run.</li>
<li>There are no official Tailscale packages available for OpenBSD and FreeBSD. However, getting Tailscale running on these systems is still possible, though some tinkering would be required. Instead, we use that tinkering time to set up WireGuard tunnels ourselves.</li>
</ul><br />
<a class='textlink' href='https://en.wikipedia.org/wiki/WireGuard'>https://en.wikipedia.org/wiki/WireGuard</a><br />
<a class='textlink' href='https://www.wireguard.com/'>https://www.wireguard.com/</a><br />
<a class='textlink' href='https://tailscale.com/'>https://tailscale.com/</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-5/wireguard.svg'><img alt='WireGuard Logo' title='WireGuard Logo' src='./f3s-kubernetes-with-freebsd-part-5/wireguard.svg' /></a><br />
<br />
<h2 style='display: inline' id='base-configuration'>Base configuration</h2><br />
<br />
<span>In the following, we prepare the base configuration for the WireGuard mesh network. We will use a similar configuration on all participating hosts, with the exception of the host IP addresses and the private keys.</span><br />
<br />
<h3 style='display: inline' id='freebsd'>FreeBSD</h3><br />
<br />
<span>On the FreeBSD hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>, similar to last time, we first bring the system up to date:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas freebsd-update fetch
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas shutdown -r now
..
..
paul@f0:~ % doas pkg update
paul@f0:~ % doas pkg upgrade
paul@f0:~ % reboot
</pre>
<br />
<span>Next, we install <span class='inlinecode'>wireguard-tools</span> and configure the WireGuard service:</span><br />
<br />
<pre>paul@f0:~ % doas pkg install wireguard-tools
paul@f0:~ % doas sysrc wireguard_interfaces=wg0
wireguard_interfaces:  -&gt; wg0
paul@f0:~ % doas sysrc wireguard_enable=YES
wireguard_enable:  -&gt; YES
paul@f0:~ % doas mkdir -p /usr/local/etc/wireguard
paul@f0:~ % doas touch /usr/local/etc/wireguard/wg<font color="#000000">0</font>.conf
paul@f0:~ % doas service wireguard start
paul@f0:~ % doas wg show
interface: wg0
  public key: L+V9o0fNYkMVKNqsX7spBzD/9oSvxM/C7ZCZX1jLO3Q=
  private key: (hidden)
  listening port: <font color="#000000">20246</font>
</pre>
<br />
<span>We now have the WireGuard up and running, but it is not yet in any functional configuration. We will come back to that later.</span><br />
<br />
<span>Next, we add all the participating WireGuard IPs to the <span class='inlinecode'>hosts</span> file. This is only for convenience, so we don&#39;t have to manage an external DNS server for this:</span><br />
<br />
<pre>paul@f0:~ % cat &lt;&lt;END | doas tee -a /etc/hosts

<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.130</font> f0.wg0 f0.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.131</font> f1.wg0 f1.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.132</font> f2.wg0 f2.wg0.wan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.120</font> r0.wg0 r0.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.121</font> r1.wg0 r1.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.122</font> r2.wg0 r2.wg0.wan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.110</font> blowfish.wg0 blowfish.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.111</font> fishfinger.wg0 fishfinger.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">130</font> f0.wg0 f0.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">131</font> f1.wg0 f1.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">132</font> f2.wg0 f2.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">120</font> r0.wg0 r0.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">121</font> r1.wg0 r1.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">122</font> r2.wg0 r2.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">110</font> blowfish.wg0 blowfish.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">111</font> fishfinger.wg0 fishfinger.wg0.wan.buetow.org
END
</pre>
<br />
<span>As you can see, <span class='inlinecode'>192.168.1.0/24</span> is the network used in my LAN (with the <span class='inlinecode'>fN</span> and <span class='inlinecode'>rN</span> hosts) and <span class='inlinecode'>192.168.2.0/24</span> is the network used for the WireGuard mesh network. The <span class='inlinecode'>wg0</span> interface will be used for all WireGuard traffic.</span><br />
<br />
<h3 style='display: inline' id='rocky-linux'>Rocky Linux</h3><br />
<br />
<span>We bring the Rocky Linux VMs up to date as well with the following:</span><br />
<br />
<pre>[root@r0 ~] dnf update -y
[root@r0 ~] reboot
</pre>
<br />
<span>Next, we prepare WireGuard on them. Same as on the FreeBSD hosts, we will only prepare WireGuard without any useful configuration yet:</span><br />
<br />
<pre>[root@r0 ~] dnf install -y wireguard-tools
[root@r0 ~] mkdir -p /etc/wireguard
[root@r0 ~] touch /etc/wireguard/wg<font color="#000000">0</font>.conf
[root@r0 ~] systemctl <b><u><font color="#000000">enable</font></u></b> wg-quick@wg0.service
[root@r0 ~] systemctl start wg-quick@wg0.service
[root@r0 ~] systemctl disable firewalld
</pre>
<br />
<span>We also update the <span class='inlinecode'>hosts</span> file accordingly:</span><br />
<br />
<pre>[root@r0 ~] cat &lt;&lt;END &gt;&gt;/etc/hosts

<font color="#000000">192.168</font>.<font color="#000000">1.130</font> f0 f0.lan f0.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.131</font> f1 f1.lan f1.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.132</font> f2 f2.lan f2.lan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.130</font> f0.wg0 f0.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.131</font> f1.wg0 f1.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.132</font> f2.wg0 f2.wg0.wan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.120</font> r0.wg0 r0.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.121</font> r1.wg0 r1.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.122</font> r2.wg0 r2.wg0.wan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.110</font> blowfish.wg0 blowfish.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.111</font> fishfinger.wg0 fishfinger.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">130</font> f0.wg0 f0.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">131</font> f1.wg0 f1.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">132</font> f2.wg0 f2.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">120</font> r0.wg0 r0.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">121</font> r1.wg0 r1.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">122</font> r2.wg0 r2.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">110</font> blowfish.wg0 blowfish.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">111</font> fishfinger.wg0 fishfinger.wg0.wan.buetow.org
END
</pre>
<br />
<span>Unfortunately, the SELinux policy on Rocky Linux blocks WireGuard&#39;s operation. By making the <span class='inlinecode'>wireguard_t</span> domain permissive using <span class='inlinecode'>semanage permissive -a wireguard_t</span>, SELinux will no longer enforce restrictions for WireGuard, allowing it to work as intended:</span><br />
<br />
<pre>[root@r0 ~] dnf install -y policycoreutils-python-utils
[root@r0 ~] semanage permissive -a wireguard_t
[root@r0 ~] reboot
</pre>
<br />
<a class='textlink' href='https://github.com/angristan/wireguard-install/discussions/499'>https://github.com/angristan/wireguard-install/discussions/499</a><br />
<br />
<h3 style='display: inline' id='openbsd'>OpenBSD</h3><br />
<br />
<span>Unlike the FreeBSD and Rocky Linux hosts, my OpenBSD hosts (<span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span>, which run at OpenBSD Amsterdam and Hetzner on the internet) have already been in service for a while, so I can&#39;t provide the "from scratch" installation details here. In the following, we focus only on the additional configuration needed to set up WireGuard:</span><br />
<br />
<pre>blowfish$ doas pkg_add wireguard-tools
blowfish$ doas mkdir /etc/wireguard
blowfish$ doas touch /etc/wireguard/wg<font color="#000000">0</font>.conf
blowfish$ cat &lt;&lt;END | doas tee /etc/hostname.wg0
inet <font color="#000000">192.168</font>.<font color="#000000">2.110</font> <font color="#000000">255.255</font>.<font color="#000000">255.0</font> NONE
up
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg<font color="#000000">0</font>.conf
END
</pre>
<br />
<span>Note that on <span class='inlinecode'>blowfish</span>, we configure <span class='inlinecode'>192.168.2.110</span> here in <span class='inlinecode'>hostname.wg0</span>, and on <span class='inlinecode'>fishfinger</span>, we configure <span class='inlinecode'>192.168.2.111</span>. These are the IP addresses of the WireGuard interfaces on those hosts.</span><br />
<br />
<span>And here, we also update the <span class='inlinecode'>hosts</span> file accordingly:</span><br />
<br />
<pre>blowfish$ cat &lt;&lt;END | doas tee -a /etc/hosts

<font color="#000000">192.168</font>.<font color="#000000">2.130</font> f0.wg0 f0.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.131</font> f1.wg0 f1.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.132</font> f2.wg0 f2.wg0.wan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.120</font> r0.wg0 r0.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.121</font> r1.wg0 r1.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.122</font> r2.wg0 r2.wg0.wan.buetow.org

<font color="#000000">192.168</font>.<font color="#000000">2.110</font> blowfish.wg0 blowfish.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.111</font> fishfinger.wg0 fishfinger.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.200</font> earth.wg0 earth.wg0.wan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">2.201</font> pixel7pro.wg0 pixel7pro.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">130</font> f0.wg0 f0.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">131</font> f1.wg0 f1.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">132</font> f2.wg0 f2.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">120</font> r0.wg0 r0.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">121</font> r1.wg0 r1.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">122</font> r2.wg0 r2.wg0.wan.buetow.org

fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">110</font> blowfish.wg0 blowfish.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">111</font> fishfinger.wg0 fishfinger.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">200</font> earth.wg0 earth.wg0.wan.buetow.org
fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">201</font> pixel7pro.wg0 pixel7pro.wg0.wan.buetow.org
END
</pre>
<br />
<span>To enable roaming clients (like <span class='inlinecode'>earth</span> and <span class='inlinecode'>pixel7pro</span>) to access the internet through the VPN, we need to configure NAT on the OpenBSD gateways. This allows the roaming clients to use the gateway&#39;s public IP address for outbound traffic. We add the following to <span class='inlinecode'>/etc/pf.conf</span> on both <span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span>:</span><br />
<br />
<pre><i><font color="silver"># NAT for WireGuard clients to access internet</font></i>
match out on vio0 from <font color="#000000">192.168</font>.<font color="#000000">2.0</font>/<font color="#000000">24</font> to any nat-to (vio0)

<i><font color="silver"># Allow inbound traffic on WireGuard interface</font></i>
pass <b><u><font color="#000000">in</font></u></b> on wg0

<i><font color="silver"># Allow all UDP traffic on WireGuard port</font></i>
pass <b><u><font color="#000000">in</font></u></b> inet proto udp from any to any port <font color="#000000">56709</font>
</pre>
<br />
<span>The NAT rule translates outgoing traffic from the WireGuard network (192.168.2.0/24) to the gateway&#39;s public IP. The firewall rules permit WireGuard traffic on the wg0 interface and UDP port 56709. After updating <span class='inlinecode'>/etc/pf.conf</span>, reload the firewall:</span><br />
<br />
<pre>blowfish$ doas pfctl -f /etc/pf.conf
</pre>
<br />
<h2 style='display: inline' id='wireguard-configuration'>WireGuard configuration</h2><br />
<br />
<span>So far, we have only started WireGuard on all participating hosts without any useful configuration. This means that no VPN tunnel has been established yet between any of the hosts.</span><br />
<br />
<h3 style='display: inline' id='example-wg0conf'>Example <span class='inlinecode'>wg0.conf</span></h3><br />
<br />
<span>Generally speaking, a <span class='inlinecode'>wg0.conf</span> looks like this (example from <span class='inlinecode'>f0</span> host):</span><br />
<br />
<pre>
[Interface]
# f0.wg0.wan.buetow.org
Address = 192.168.2.130
PrivateKey = **************************
ListenPort = 56709

[Peer]
# f1.lan.buetow.org as f1.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.131/32
Endpoint = 192.168.1.131:56709
# No KeepAlive configured

[Peer]
# f2.lan.buetow.org as f2.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.132/32
Endpoint = 192.168.1.132:56709
# No KeepAlive configured

[Peer]
# r0.lan.buetow.org as r0.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.120/32
Endpoint = 192.168.1.120:56709
# No KeepAlive configured

[Peer]
# r1.lan.buetow.org as r1.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.121/32
Endpoint = 192.168.1.121:56709
# No KeepAlive configured

[Peer]
# r2.lan.buetow.org as r2.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.122/32
Endpoint = 192.168.1.122:56709
# No KeepAlive configured

[Peer]
# blowfish.buetow.org as blowfish.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.110/32
Endpoint = 23.88.35.144:56709
PersistentKeepalive = 25

[Peer]
# fishfinger.buetow.org as fishfinger.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 192.168.2.111/32
Endpoint = 46.23.94.99:56709
PersistentKeepalive = 25
</pre>
<br />
<span>For roaming clients like <span class='inlinecode'>pixel7pro</span> (Android phone) or <span class='inlinecode'>earth</span> (Fedora laptop), the configuration looks different because they route all traffic through the VPN and only connect to the internet gateways:</span><br />
<br />
<pre>
[Interface]
# pixel7pro.wg0.wan.buetow.org
Address = 192.168.2.201
PrivateKey = **************************
ListenPort = 56709
DNS = 1.1.1.1, 8.8.8.8

[Peer]
# blowfish.buetow.org as blowfish.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = 23.88.35.144:56709
PersistentKeepalive = 25

[Peer]
# fishfinger.buetow.org as fishfinger.wg0.wan.buetow.org
PublicKey = **************************
PresharedKey = **************************
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = 46.23.94.99:56709
PersistentKeepalive = 25
</pre>
<br />
<span>Note the key differences for roaming clients:</span><br />
<ul>
<li><span class='inlinecode'>DNS</span> is configured to use external DNS servers (Cloudflare and Google)</li>
<li><span class='inlinecode'>AllowedIPs = 0.0.0.0/0, ::/0</span> routes all traffic (IPv4 and IPv6) through the VPN</li>
<li>Only two peers are configured (the internet gateways), not the full mesh</li>
<li><span class='inlinecode'>PersistentKeepalive = 25</span> is used for both peers to maintain NAT traversal</li>
</ul><br />
<span>A <span class='inlinecode'>wg0.conf</span> contains two kinds of sections. The first is <span class='inlinecode'>[Interface]</span>, which configures the current host (here: <span class='inlinecode'>f0</span> or <span class='inlinecode'>pixel7pro</span>):</span><br />
<br />
<ul>
<li><span class='inlinecode'>Address</span>: Local virtual IP address on the WireGuard interface.</li>
<li><span class='inlinecode'>PrivateKey</span>: Private key for this node.</li>
<li><span class='inlinecode'>ListenPort</span>: Port on which this WireGuard interface listens for incoming connections.</li>
</ul><br />
<span>It is followed by one <span class='inlinecode'>[Peer]</span> section for every peer node in the mesh network:</span><br />
<br />
<ul>
<li><span class='inlinecode'>PublicKey</span>: The public key of the remote peer, used to authenticate its identity.</li>
<li><span class='inlinecode'>PresharedKey</span>: An optional symmetric key used in addition to the public key to strengthen security.</li>
<li><span class='inlinecode'>AllowedIPs</span>: IPs or subnets routed through this peer (traffic is allowed to/from these IPs).</li>
<li><span class='inlinecode'>Endpoint</span>: The public IP:port combination of the remote peer for connection.</li>
<li><span class='inlinecode'>PersistentKeepalive</span>: Keeps the tunnel alive by sending periodic packets; used for NAT traversal.</li>
</ul><br />
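<span>To make the structure concrete, here is a minimal Ruby sketch that renders one such <span class='inlinecode'>[Peer]</span> section from these fields. It is illustrative only and not the generator&#39;s actual code; the names are made up for this example:</span><br />

```ruby
# Illustrative stand-in for a peer-section renderer: one [Peer] block
# per remote node, with PersistentKeepalive emitted only when set.
PeerSnippet = Struct.new(:comment, :public_key, :preshared_key,
                         :allowed_ips, :endpoint, :keepalive) do
  def to_conf
    lines = ['[Peer]',
             "# #{comment}",
             "PublicKey = #{public_key}",
             "PresharedKey = #{preshared_key}",
             "AllowedIPs = #{allowed_ips}",
             "Endpoint = #{endpoint}"]
    lines << "PersistentKeepalive = #{keepalive}" if keepalive
    lines.join("\n")
  end
end

peer = PeerSnippet.new('f1.lan.buetow.org as f1.wg0.wan.buetow.org',
                       '<public key>', '<preshared key>',
                       '192.168.2.131/32', '192.168.1.131:56709', nil)
puts peer.to_conf
```

For a LAN peer the keepalive stays <span class='inlinecode'>nil</span> and the line is omitted, exactly as in the example configuration above.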
<h3 style='display: inline' id='nat-traversal-and-keepalive'>NAT traversal and keepalive</h3><br />
<br />
<span>As all participating hosts, except for <span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span> (which are on the internet), are behind a NAT gateway (my home router), we need to use <span class='inlinecode'>PersistentKeepalive</span> to establish and maintain the VPN tunnel from the LAN to the internet because:</span><br />
<br />
<span class='quote'>By default, WireGuard tries to be as silent as possible when not being used; it is not a chatty protocol. For the most part, it only transmits data when a peer wishes to send packets. When it&#39;s not being asked to send packets, it stops sending packets until it is asked again. In the majority of configurations, this works well. However, when a peer is behind NAT or a firewall, it might wish to be able to receive incoming packets even when it is not sending any packets. Because NAT and stateful firewalls keep track of "connections", if a peer behind NAT or a firewall wishes to receive incoming packets, he must keep the NAT/firewall mapping valid, by periodically sending keepalive packets. This is called persistent keepalives. When this option is enabled, a keepalive packet is sent to the server endpoint once every interval seconds. A sensible interval that works with a wide variety of firewalls is 25 seconds. Setting it to 0 turns the feature off, which is the default, since most users will not need this, and it makes WireGuard slightly more chatty. This feature may be specified by adding the PersistentKeepalive = field to a peer in the configuration file, or setting persistent-keepalive at the command line. If you don&#39;t need this feature, don&#39;t enable it. But if you&#39;re behind NAT or a firewall and you want to receive incoming connections long after network traffic has gone silent, this option will keep the "connection" open in the eyes of NAT.</span><br />
<br />
<span>That&#39;s why you see <span class='inlinecode'>PersistentKeepalive = 25</span> in the <span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span> peer configurations. This means that every 25 seconds, a keep-alive packet is sent over the tunnel to maintain its connection. If the tunnel is not yet established, it will be created within at most 25 seconds.</span><br />
<br />
<span>Without this, we might never have a VPN tunnel open, as the systems in the LAN may not actively attempt to contact <span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span> on their own. In fact, the opposite would likely occur, with the traffic flowing inward instead of outward (this is beyond the scope of this blog post but will be covered in a later post in this series!).</span><br />
<br />
<h3 style='display: inline' id='preshared-key'>Preshared key</h3><br />
<br />
<span>In a WireGuard configuration, the PSK (preshared key) is an optional additional layer of symmetric encryption used alongside the standard public key cryptography. It is a shared secret known to both peers that enhances security by requiring an attacker to compromise both the private keys and the PSK to decrypt communication. While optional, using a PSK is better as it strengthens the cryptographic security, mitigating risks of potential vulnerabilities in the key exchange process.</span><br />
<br />
<span>So, because it&#39;s better, we are using it.</span><br />
<br />
<h2 style='display: inline' id='mesh-network-generator'>Mesh network generator</h2><br />
<br />
<span>Manually generating <span class='inlinecode'>wg0.conf</span> files for every peer in a mesh network is cumbersome because each peer requires its own unique public/private key pair plus one preshared key per tunnel (a full mesh of the 8 infrastructure hosts alone needs 28 preshared keys). The effort scales quadratically with the number of peers, as the relationship between every pair of peers must be defined explicitly, including unique settings such as <span class='inlinecode'>AllowedIPs</span> and <span class='inlinecode'>Endpoint</span> and optional ones like <span class='inlinecode'>PersistentKeepalive</span>. Automating the process ensures consistency, reduces human error, saves considerable time, and allows for centralized management of the configuration files.</span><br />
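<span>As a quick sanity check on those numbers: a full mesh needs one tunnel, and hence one preshared key, per unordered pair of hosts, so the count grows quadratically:</span><br />

```ruby
# One tunnel (and one preshared key) per unordered pair of hosts:
# n hosts => n * (n - 1) / 2 tunnels.
def tunnel_count(hosts)
  hosts * (hosts - 1) / 2
end

puts tunnel_count(8)   # full mesh of the 8 infrastructure hosts => 28
puts tunnel_count(16)  # doubling the hosts roughly quadruples the tunnels => 120
```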
<br />
<span>Instead, a script can handle key generation, coordinate relationships, and generate all necessary configuration files simultaneously, making it scalable and far less error-prone.</span><br />
<br />
<span>I have written a Ruby script <span class='inlinecode'>wireguardmeshgenerator.rb</span> to do this for our purposes:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/wireguardmeshgenerator'>https://codeberg.org/snonux/wireguardmeshgenerator</a><br />
<br />
<span>I use Fedora Linux as my daily driver on my personal laptop, so the script was developed and tested only on Fedora Linux. However, it should also work on other Linux and Unix-like systems.</span><br />
<br />
<span>To set up the mesh generator on Fedora Linux, we run the following:</span><br />
<br />
<pre>&gt; git clone https://codeberg.org/snonux/wireguardmeshgenerator
&gt; cd ./wireguardmeshgenerator
&gt; bundle install
&gt; sudo dnf install -y wireguard-tools
</pre>
<br />
<span>This assumes that Ruby and the <span class='inlinecode'>bundler</span> gem are already installed. If not, refer to the docs of your distribution.</span><br />
<br />
<h3 style='display: inline' id='wireguardmeshgeneratoryaml'><span class='inlinecode'>wireguardmeshgenerator.yaml</span></h3><br />
<br />
<span>The file <span class='inlinecode'>wireguardmeshgenerator.yaml</span> configures the mesh generator script.</span><br />
<br />
<pre>
---
hosts:
  f0:
    os: FreeBSD
    ssh:
      user: paul
      conf_dir: /usr/local/etc/wireguard
      sudo_cmd: doas
      reload_cmd: service wireguard reload
    lan:
      domain: &#39;lan.buetow.org&#39;
      ip: &#39;192.168.1.130&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.130&#39;
      ipv6: &#39;fd42:beef:cafe:2::130&#39;
    exclude_peers:
      - earth
      - pixel7pro
  f1:
    os: FreeBSD
    ssh:
      user: paul
      conf_dir: /usr/local/etc/wireguard
      sudo_cmd: doas
      reload_cmd: service wireguard reload
    lan:
      domain: &#39;lan.buetow.org&#39;
      ip: &#39;192.168.1.131&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.131&#39;
      ipv6: &#39;fd42:beef:cafe:2::131&#39;
    exclude_peers:
      - earth
      - pixel7pro
  f2:
    os: FreeBSD
    ssh:
      user: paul
      conf_dir: /usr/local/etc/wireguard
      sudo_cmd: doas
      reload_cmd: service wireguard reload
    lan:
      domain: &#39;lan.buetow.org&#39;
      ip: &#39;192.168.1.132&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.132&#39;
      ipv6: &#39;fd42:beef:cafe:2::132&#39;
    exclude_peers:
      - earth
      - pixel7pro
  r0:
    os: Linux
    ssh:
      user: root
      conf_dir: /etc/wireguard
      sudo_cmd:
      reload_cmd: systemctl reload wg-quick@wg0.service
    lan:
      domain: &#39;lan.buetow.org&#39;
      ip: &#39;192.168.1.120&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.120&#39;
      ipv6: &#39;fd42:beef:cafe:2::120&#39;
    exclude_peers:
      - earth
      - pixel7pro
  r1:
    os: Linux
    ssh:
      user: root
      conf_dir: /etc/wireguard
      sudo_cmd:
      reload_cmd: systemctl reload wg-quick@wg0.service
    lan:
      domain: &#39;lan.buetow.org&#39;
      ip: &#39;192.168.1.121&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.121&#39;
      ipv6: &#39;fd42:beef:cafe:2::121&#39;
    exclude_peers:
      - earth
      - pixel7pro
  r2:
    os: Linux
    ssh:
      user: root
      conf_dir: /etc/wireguard
      sudo_cmd:
      reload_cmd: systemctl reload wg-quick@wg0.service
    lan:
      domain: &#39;lan.buetow.org&#39;
      ip: &#39;192.168.1.122&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.122&#39;
      ipv6: &#39;fd42:beef:cafe:2::122&#39;
    exclude_peers:
      - earth
      - pixel7pro
  blowfish:
    os: OpenBSD
    ssh:
      user: rex
      port: 2
      conf_dir: /etc/wireguard
      sudo_cmd: doas
      reload_cmd: sh /etc/netstart wg0
    internet:
      domain: &#39;buetow.org&#39;
      ip: &#39;23.88.35.144&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.110&#39;
      ipv6: &#39;fd42:beef:cafe:2::110&#39;
    exclude_peers:
      - earth
      - pixel7pro
  fishfinger:
    os: OpenBSD
    ssh:
      user: rex
      port: 2
      conf_dir: /etc/wireguard
      sudo_cmd: doas
      reload_cmd: sh /etc/netstart wg0
    internet:
      domain: &#39;buetow.org&#39;
      ip: &#39;46.23.94.99&#39;
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.111&#39;
      ipv6: &#39;fd42:beef:cafe:2::111&#39;
    exclude_peers:
      - earth
      - pixel7pro
  earth:
    os: Linux
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.200&#39;
      ipv6: &#39;fd42:beef:cafe:2::200&#39;
    exclude_peers:
      - f0
      - f1
      - f2
      - r0
      - r1
      - r2
      - pixel7pro
  pixel7pro:
    os: Android
    wg0:
      domain: &#39;wg0.wan.buetow.org&#39;
      ip: &#39;192.168.2.201&#39;
      ipv6: &#39;fd42:beef:cafe:2::201&#39;
    exclude_peers:
      - f0
      - f1
      - f2
      - r0
      - r1
      - r2
      - earth
</pre>
<br />
<span>The file specifies details such as SSH user settings, configuration directories, sudo or reload commands, and IP/domain assignments for both internal LAN-facing interfaces and WireGuard (<span class='inlinecode'>wg0</span>) interfaces. Each host is assigned specific roles, including internal participants and publicly accessible nodes with internet-facing IPs, enabling the creation of a fully connected mesh VPN.</span><br />
<br />
<span>Roaming clients: Note the <span class='inlinecode'>earth</span> and <span class='inlinecode'>pixel7pro</span> entries—these are configured differently from the infrastructure hosts. They have no <span class='inlinecode'>lan</span> or <span class='inlinecode'>internet</span> sections, which signals to the generator that they are roaming clients. The <span class='inlinecode'>exclude_peers</span> configuration ensures they only connect to the internet gateways (<span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span>) and are not reachable by LAN hosts. The generator automatically configures these clients with <span class='inlinecode'>AllowedIPs = 0.0.0.0/0, ::/0</span> to route all traffic through the VPN, includes DNS configuration (<span class='inlinecode'>1.1.1.1, 8.8.8.8</span>), and enables <span class='inlinecode'>PersistentKeepalive</span> for NAT traversal.</span><br />
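<span>The peer-selection logic can be sketched in a few lines of Ruby. This is illustrative only: the function name is made up, treating exclusion as mutual is an assumption, and the inline YAML is a tiny stand-in for the real file:</span><br />

```ruby
require 'yaml'

# Miniature stand-in for wireguardmeshgenerator.yaml (illustrative only).
conf = YAML.safe_load(<<~YAML)
  hosts:
    f0:
      exclude_peers: [earth]
    blowfish: {}
    earth:
      exclude_peers: [f0]
YAML

# A tunnel only exists if neither side excludes the other (assumed mutual).
def peers_for(host, hosts)
  hosts.keys.reject do |other|
    other == host ||
      (hosts[host]['exclude_peers'] || []).include?(other) ||
      (hosts[other]['exclude_peers'] || []).include?(host)
  end
end

puts peers_for('earth', conf['hosts']).inspect  # => ["blowfish"]
```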
<br />
<h3 style='display: inline' id='wireguardmeshgeneratorrb-overview'><span class='inlinecode'>wireguardmeshgenerator.rb</span> overview</h3><br />
<br />
<span>The <span class='inlinecode'>wireguardmeshgenerator.rb</span> script consists of the following base classes:</span><br />
<br />
<ul>
<li><span class='inlinecode'>KeyTool</span>: Manages WireGuard key generation and retrieval. It ensures the presence of public/private key pairs and preshared keys (PSKs). If keys are missing, it generates them using the <span class='inlinecode'>wg</span> tool. It provides methods to read the public/private keys and retrieve or generate a PSK for communication with a peer. The keys are stored in a temp directory on the system from where the generator is run.</li>
<li><span class='inlinecode'>PeerSnippet</span>: A <span class='inlinecode'>Struct</span> representing the configuration for a single WireGuard peer in the mesh. Based on the provided attributes and configuration, it generates the peer&#39;s WireGuard configuration, including public key, PSK, allowed IPs, endpoint, and keepalive settings.</li>
<li><span class='inlinecode'>WireguardConfig</span>: This class generates the WireGuard configuration file for a given host in the mesh network. It includes the <span class='inlinecode'>[Interface]</span> section for the host itself and one <span class='inlinecode'>[Peer]</span> section for each of its peers. It can also clean up generated files and directories and create the required directory structure for storing configuration files locally on the system from which the script is run.</li>
<li><span class='inlinecode'>InstallConfig</span>: Handles uploading, installing, and restarting the WireGuard service on remote hosts using SSH and SCP. It ensures the configuration file is uploaded to the remote machine, the necessary directories are present and correctly configured, and the WireGuard service reloads with the new configuration.</li>
</ul><br />
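<span>To illustrate the PSK caching idea behind <span class='inlinecode'>KeyTool</span>, here is a self-contained sketch. <span class='inlinecode'>SecureRandom</span> stands in for <span class='inlinecode'>wg genpsk</span> so the sketch runs without WireGuard installed; the class and method names are made up for this example:</span><br />

```ruby
require 'securerandom'
require 'fileutils'
require 'tmpdir'

# Illustrative sketch: one preshared key per unordered host pair,
# generated once and read back from disk on later runs.
class PskStore
  def initialize(dir = Dir.mktmpdir('wgmesh'))
    @dir = dir
    FileUtils.mkdir_p(@dir)
  end

  def psk(host_a, host_b)
    pair = [host_a, host_b].sort.join('-')  # (a,b) and (b,a) share a key
    path = File.join(@dir, "#{pair}.psk")
    File.write(path, SecureRandom.base64(32)) unless File.exist?(path)
    File.read(path)
  end
end

store = PskStore.new
puts store.psk('f0', 'f1') == store.psk('f1', 'f0')  # same key either way
```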
<span>At the end (have a look at the Git repo if you want to see the code of the classes listed above), we glue it all together in this block:</span><br />
<br />
<pre><b><u><font color="#000000">begin</font></u></b>
  options = { hosts: [] }
  OptionParser.new <b><u><font color="#000000">do</font></u></b> |opts|
    opts.banner = <font color="#808080">'Usage: wireguardmeshgenerator.rb [options]'</font>
    opts.on(<font color="#808080">'--generate'</font>, <font color="#808080">'Generate Wireguard configs'</font>) <b><u><font color="#000000">do</font></u></b>
      options[:generate] = <b><u><font color="#000000">true</font></u></b>
    <b><u><font color="#000000">end</font></u></b>
    opts.on(<font color="#808080">'--install'</font>, <font color="#808080">'Install Wireguard configs'</font>) <b><u><font color="#000000">do</font></u></b>
      options[:install] = <b><u><font color="#000000">true</font></u></b>
    <b><u><font color="#000000">end</font></u></b>
    opts.on(<font color="#808080">'--clean'</font>, <font color="#808080">'Clean Wireguard configs'</font>) <b><u><font color="#000000">do</font></u></b>
      options[:clean] = <b><u><font color="#000000">true</font></u></b>
    <b><u><font color="#000000">end</font></u></b>
    opts.on(<font color="#808080">'--hosts=HOSTS'</font>, <font color="#808080">'Comma separated hosts to configure'</font>) <b><u><font color="#000000">do</font></u></b> |hosts|
      options[:hosts] = hosts.split(<font color="#808080">','</font>)
    <b><u><font color="#000000">end</font></u></b>
  <b><u><font color="#000000">end</font></u></b>.parse!

  conf = YAML.load_file(<font color="#808080">'wireguardmeshgenerator.yaml'</font>).freeze
  conf[<font color="#808080">'hosts'</font>].keys.select { options[:hosts].empty? || options[:hosts].<b><u><font color="#000000">include</font></u></b>?(_1) }
               .each <b><u><font color="#000000">do</font></u></b> |host|
    <i><font color="silver"># Generate Wireguard configuration for the host.</font></i>
    WireguardConfig.new(host, conf[<font color="#808080">'hosts'</font>]).generate! <b><u><font color="#000000">if</font></u></b> options[:generate]
    <i><font color="silver"># Install Wireguard configuration for the host.</font></i>
    InstallConfig.new(host, conf[<font color="#808080">'hosts'</font>]).upload!.install!.reload! <b><u><font color="#000000">if</font></u></b> options[:install]
    <i><font color="silver"># Clean Wireguard configuration for the host.</font></i>
    WireguardConfig.new(host, conf[<font color="#808080">'hosts'</font>]).clean! <b><u><font color="#000000">if</font></u></b> options[:clean]
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">rescue</font></u></b> StandardError =&gt; e
  puts <font color="#808080">"Error: #{e.message}"</font>
  puts e.backtrace.join(<font color="#808080">"\n"</font>)
  exit <font color="#000000">2</font>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<span>And we also have a <span class='inlinecode'>Rakefile</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>task :generate <b><u><font color="#000000">do</font></u></b>
  ruby <font color="#808080">'wireguardmeshgenerator.rb'</font>, <font color="#808080">'--generate'</font>
<b><u><font color="#000000">end</font></u></b>

task :clean <b><u><font color="#000000">do</font></u></b>
  ruby <font color="#808080">'wireguardmeshgenerator.rb'</font>, <font color="#808080">'--clean'</font>
<b><u><font color="#000000">end</font></u></b>

task :install <b><u><font color="#000000">do</font></u></b>
  ruby <font color="#808080">'wireguardmeshgenerator.rb'</font>, <font color="#808080">'--install'</font>
<b><u><font color="#000000">end</font></u></b>

task default: :generate
</pre>
<br />
<br />
<h2 style='display: inline' id='invoking-the-mesh-network-generator'>Invoking the mesh network generator</h2><br />
<br />
<h3 style='display: inline' id='generating-the-wg0conf-files-and-keys'>Generating the <span class='inlinecode'>wg0.conf</span> files and keys</h3><br />
<br />
<span>To generate everything (the <span class='inlinecode'>wg0.conf</span> of all participating hosts, including all keys involved), we run the following:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; rake generate
/usr/bin/ruby wireguardmeshgenerator.rb --generate
Generating dist/f<font color="#000000">0</font>/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/f<font color="#000000">1</font>/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/f<font color="#000000">2</font>/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/r<font color="#000000">0</font>/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/r<font color="#000000">1</font>/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/r<font color="#000000">2</font>/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/blowfish/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/fishfinger/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/earth/etc/wireguard/wg<font color="#000000">0</font>.conf
Generating dist/pixel7pro/etc/wireguard/wg<font color="#000000">0</font>.conf
</pre>
<br />
<span>This generated all the <span class='inlinecode'>wg0.conf</span> files listed in the output, plus the following keys:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; find keys/ -type f
keys/f<font color="#000000">0</font>/priv.key
keys/f<font color="#000000">0</font>/pub.key
keys/psk/f0_f1.key
keys/psk/f0_f2.key
keys/psk/f0_r0.key
keys/psk/f0_r1.key
keys/psk/f0_r2.key
keys/psk/blowfish_f0.key
keys/psk/f0_fishfinger.key
keys/psk/f1_f2.key
keys/psk/f1_r0.key
keys/psk/f1_r1.key
keys/psk/f1_r2.key
keys/psk/blowfish_f1.key
keys/psk/f1_fishfinger.key
keys/psk/f2_r0.key
keys/psk/f2_r1.key
keys/psk/f2_r2.key
keys/psk/blowfish_f2.key
keys/psk/f2_fishfinger.key
keys/psk/r0_r1.key
keys/psk/r0_r2.key
keys/psk/blowfish_r0.key
keys/psk/fishfinger_r0.key
keys/psk/r1_r2.key
keys/psk/blowfish_r1.key
keys/psk/fishfinger_r1.key
keys/psk/blowfish_r2.key
keys/psk/fishfinger_r2.key
keys/psk/blowfish_fishfinger.key
keys/psk/blowfish_earth.key
keys/psk/earth_fishfinger.key
keys/psk/blowfish_pixel7pro.key
keys/psk/fishfinger_pixel7pro.key
keys/f<font color="#000000">1</font>/priv.key
keys/f<font color="#000000">1</font>/pub.key
keys/f<font color="#000000">2</font>/priv.key
keys/f<font color="#000000">2</font>/pub.key
keys/r<font color="#000000">0</font>/priv.key
keys/r<font color="#000000">0</font>/pub.key
keys/r<font color="#000000">1</font>/priv.key
keys/r<font color="#000000">1</font>/pub.key
keys/r<font color="#000000">2</font>/priv.key
keys/r<font color="#000000">2</font>/pub.key
keys/blowfish/priv.key
keys/blowfish/pub.key
keys/fishfinger/priv.key
keys/fishfinger/pub.key
keys/earth/priv.key
keys/earth/pub.key
keys/pixel7pro/priv.key
keys/pixel7pro/pub.key
</pre>
<br />
<span>Those keys are embedded in the resulting <span class='inlinecode'>wg0.conf</span>, so later, we only need to install the <span class='inlinecode'>wg0.conf</span> files and not all the keys individually.</span><br />
<br />
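Note in the listing above that each PSK file name joins the two host names in sorted order (e.g. <span class='inlinecode'>blowfish_f0.key</span>, not <span class='inlinecode'>f0_blowfish.key</span>), so either host of a pair resolves to the same shared key. A minimal sketch of how such PSKs could be generated and memoized (the helper names are assumptions; <span class='inlinecode'>SecureRandom</span> stands in for <span class='inlinecode'>wg genpsk</span>, which likewise emits 32 random bytes in base64):

```ruby
require 'securerandom'
require 'fileutils'

# Sketch with assumed helper names: one PSK per host pair, memoized on
# disk. Sorting the pair makes psk_path('f0', 'blowfish') and
# psk_path('blowfish', 'f0') resolve to the same file.
def psk_path(a, b)
  File.join('keys', 'psk', "#{[a, b].sort.join('_')}.key")
end

def psk_for(a, b)
  path = psk_path(a, b)
  return File.read(path).chomp if File.exist?(path)

  FileUtils.mkdir_p(File.dirname(path))
  # A WireGuard PSK is 32 random bytes, base64-encoded.
  key = SecureRandom.base64(32)
  File.write(path, key)
  key
end

puts psk_path('f0', 'blowfish') # => keys/psk/blowfish_f0.key
```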
<h3 style='display: inline' id='installing-the-wg0conf-files'>Installing the <span class='inlinecode'>wg0.conf</span> files</h3><br />
<br />
<span>Uploading the <span class='inlinecode'>wg0.conf</span> files to the participating hosts and reloading WireGuard on them is then just a matter of executing the following (this expects all participating hosts to be up and running):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; rake install
/usr/bin/ruby wireguardmeshgenerator.rb --install
Uploading dist/f<font color="#000000">0</font>/etc/wireguard/wg<font color="#000000">0</font>.conf to f0.lan.buetow.org:.
Installing Wireguard config on f0
Uploading cmd.sh to f0.lan.buetow.org:.
+ [ ! -d /usr/local/etc/wireguard ]
+ doas chmod <font color="#000000">700</font> /usr/local/etc/wireguard
+ doas mv -v wg0.conf /usr/local/etc/wireguard
wg0.conf -&gt; /usr/local/etc/wireguard/wg<font color="#000000">0</font>.conf
+ doas chmod <font color="#000000">644</font> /usr/local/etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on f0
Uploading cmd.sh to f0.lan.buetow.org:.
+ doas service wireguard reload
+ rm cmd.sh
Uploading dist/f<font color="#000000">1</font>/etc/wireguard/wg<font color="#000000">0</font>.conf to f1.lan.buetow.org:.
Installing Wireguard config on f1
Uploading cmd.sh to f1.lan.buetow.org:.
+ [ ! -d /usr/local/etc/wireguard ]
+ doas chmod <font color="#000000">700</font> /usr/local/etc/wireguard
+ doas mv -v wg0.conf /usr/local/etc/wireguard
wg0.conf -&gt; /usr/local/etc/wireguard/wg<font color="#000000">0</font>.conf
+ doas chmod <font color="#000000">644</font> /usr/local/etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on f1
Uploading cmd.sh to f1.lan.buetow.org:.
+ doas service wireguard reload
+ rm cmd.sh
Uploading dist/f<font color="#000000">2</font>/etc/wireguard/wg<font color="#000000">0</font>.conf to f2.lan.buetow.org:.
Installing Wireguard config on f2
Uploading cmd.sh to f2.lan.buetow.org:.
+ [ ! -d /usr/local/etc/wireguard ]
+ doas chmod <font color="#000000">700</font> /usr/local/etc/wireguard
+ doas mv -v wg0.conf /usr/local/etc/wireguard
wg0.conf -&gt; /usr/local/etc/wireguard/wg<font color="#000000">0</font>.conf
+ doas chmod <font color="#000000">644</font> /usr/local/etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on f2
Uploading cmd.sh to f2.lan.buetow.org:.
+ doas service wireguard reload
+ rm cmd.sh
Uploading dist/r<font color="#000000">0</font>/etc/wireguard/wg<font color="#000000">0</font>.conf to r0.lan.buetow.org:.
Installing Wireguard config on r0
Uploading cmd.sh to r0.lan.buetow.org:.
+ <font color="#808080">'['</font> <font color="#808080">'!'</font> -d /etc/wireguard <font color="#808080">']'</font>
+ chmod <font color="#000000">700</font> /etc/wireguard
+ mv -v wg0.conf /etc/wireguard
renamed <font color="#808080">'wg0.conf'</font> -&gt; <font color="#808080">'/etc/wireguard/wg0.conf'</font>
+ chmod <font color="#000000">644</font> /etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on r0
Uploading cmd.sh to r0.lan.buetow.org:.
+ systemctl reload wg-quick@wg0.service
+ rm cmd.sh
Uploading dist/r<font color="#000000">1</font>/etc/wireguard/wg<font color="#000000">0</font>.conf to r1.lan.buetow.org:.
Installing Wireguard config on r1
Uploading cmd.sh to r1.lan.buetow.org:.
+ <font color="#808080">'['</font> <font color="#808080">'!'</font> -d /etc/wireguard <font color="#808080">']'</font>
+ chmod <font color="#000000">700</font> /etc/wireguard
+ mv -v wg0.conf /etc/wireguard
renamed <font color="#808080">'wg0.conf'</font> -&gt; <font color="#808080">'/etc/wireguard/wg0.conf'</font>
+ chmod <font color="#000000">644</font> /etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on r1
Uploading cmd.sh to r1.lan.buetow.org:.
+ systemctl reload wg-quick@wg0.service
+ rm cmd.sh
Uploading dist/r<font color="#000000">2</font>/etc/wireguard/wg<font color="#000000">0</font>.conf to r2.lan.buetow.org:.
Installing Wireguard config on r2
Uploading cmd.sh to r2.lan.buetow.org:.
+ <font color="#808080">'['</font> <font color="#808080">'!'</font> -d /etc/wireguard <font color="#808080">']'</font>
+ chmod <font color="#000000">700</font> /etc/wireguard
+ mv -v wg0.conf /etc/wireguard
renamed <font color="#808080">'wg0.conf'</font> -&gt; <font color="#808080">'/etc/wireguard/wg0.conf'</font>
+ chmod <font color="#000000">644</font> /etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on r2
Uploading cmd.sh to r2.lan.buetow.org:.
+ systemctl reload wg-quick@wg0.service
+ rm cmd.sh
Uploading dist/blowfish/etc/wireguard/wg<font color="#000000">0</font>.conf to blowfish.buetow.org:.
Installing Wireguard config on blowfish
Uploading cmd.sh to blowfish.buetow.org:.
+ [ ! -d /etc/wireguard ]
+ doas chmod <font color="#000000">700</font> /etc/wireguard
+ doas mv -v wg0.conf /etc/wireguard
wg0.conf -&gt; /etc/wireguard/wg<font color="#000000">0</font>.conf
+ doas chmod <font color="#000000">644</font> /etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on blowfish
Uploading cmd.sh to blowfish.buetow.org:.
+ doas sh /etc/netstart wg0
+ rm cmd.sh
Uploading dist/fishfinger/etc/wireguard/wg<font color="#000000">0</font>.conf to fishfinger.buetow.org:.
Installing Wireguard config on fishfinger
Uploading cmd.sh to fishfinger.buetow.org:.
+ [ ! -d /etc/wireguard ]
+ doas chmod <font color="#000000">700</font> /etc/wireguard
+ doas mv -v wg0.conf /etc/wireguard
wg0.conf -&gt; /etc/wireguard/wg<font color="#000000">0</font>.conf
+ doas chmod <font color="#000000">644</font> /etc/wireguard/wg<font color="#000000">0</font>.conf
+ rm cmd.sh
Reloading Wireguard on fishfinger
Uploading cmd.sh to fishfinger.buetow.org:.
+ doas sh /etc/netstart wg0
+ rm cmd.sh
</pre>
<br />
<h3 style='display: inline' id='re-generating-mesh-and-installing-the-wg0conf-files-again'>Re-generating mesh and installing the <span class='inlinecode'>wg0.conf</span> files again</h3><br />
<br />
<span>The mesh network can be re-generated and re-installed as follows:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; rake clean
&gt; rake generate
&gt; rake install
</pre>
<br />
<span>That would also delete and re-generate all the keys involved.</span><br />
<br />
<h3 style='display: inline' id='setting-up-roaming-clients'>Setting up roaming clients</h3><br />
<br />
<span>For roaming clients like <span class='inlinecode'>earth</span> (Fedora laptop) and <span class='inlinecode'>pixel7pro</span> (Android phone), the setup process differs slightly since these devices are not always accessible via SSH:</span><br />
<br />
<span>Android phone (<span class='inlinecode'>pixel7pro</span>):</span><br />
<br />
<span>The configuration is transferred to the phone using a QR code. The official WireGuard Android app (from Google Play Store) can scan and import the configuration:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; sudo dnf install qrencode
&gt; qrencode -t ansiutf8 &lt; dist/pixel7pro/etc/wireguard/wg<font color="#000000">0</font>.conf
</pre>
<br />
<span>Scan the QR code with the WireGuard app to import the configuration. The phone will then route all traffic through the VPN when the tunnel is activated. Note that WireGuard does not support automatic failover between the two gateways (<span class='inlinecode'>blowfish</span> and <span class='inlinecode'>fishfinger</span>)—if one fails, manual disconnection and reconnection is required to switch to the other.</span><br />
<br />
<span>Fedora laptop (<span class='inlinecode'>earth</span>):</span><br />
<br />
<span>For the laptop, manually copy the generated configuration:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>&gt; sudo cp dist/earth/etc/wireguard/wg<font color="#000000">0</font>.conf /etc/wireguard/
&gt; sudo chmod <font color="#000000">600</font> /etc/wireguard/wg<font color="#000000">0</font>.conf
&gt; sudo systemctl start wg-quick@wg0.service  <i><font color="silver"># Start manually</font></i>
&gt; sudo systemctl disable wg-quick@wg0.service  <i><font color="silver"># Prevent auto-start</font></i>
</pre>
<br />
<span>The service is disabled from auto-start so the VPN is only active when manually started. This allows selective VPN usage based on need.</span><br />
<br />
<h2 style='display: inline' id='adding-ipv6-support-to-the-mesh'>Adding IPv6 support to the mesh</h2><br />
<br />
<span>After setting up the IPv4-only mesh network, I decided to add dual-stack IPv6 support to enable more networking capabilities and prepare for the future. All 10 hosts (8 infrastructure + 2 roaming clients) now have both IPv4 and IPv6 addresses on their WireGuard interfaces.</span><br />
<br />
<h3 style='display: inline' id='ipv6-addressing-scheme'>IPv6 addressing scheme</h3><br />
<br />
<span>We use ULA (Unique Local Address) private IPv6 space, analogous to RFC1918 private IPv4 addresses:</span><br />
<br />
<ul>
<li>Prefix: <span class='inlinecode'>fd42:beef:cafe::/48</span></li>
<li>Subnet: <span class='inlinecode'>fd42:beef:cafe:2::/64</span> (wg0 interfaces)</li>
</ul><br />
<span>All hosts receive dual-stack addresses:</span><br />
<br />
<pre>
fd42:beef:cafe:2::110/64  - blowfish.wg0 (OpenBSD gateway)
fd42:beef:cafe:2::111/64  - fishfinger.wg0 (OpenBSD gateway)
fd42:beef:cafe:2::120/64  - r0.wg0 (Rocky Linux VM)
fd42:beef:cafe:2::121/64  - r1.wg0 (Rocky Linux VM)
fd42:beef:cafe:2::122/64  - r2.wg0 (Rocky Linux VM)
fd42:beef:cafe:2::130/64  - f0.wg0 (FreeBSD host)
fd42:beef:cafe:2::131/64  - f1.wg0 (FreeBSD host)
fd42:beef:cafe:2::132/64  - f2.wg0 (FreeBSD host)
fd42:beef:cafe:2::200/64  - earth.wg0 (roaming laptop)
fd42:beef:cafe:2::201/64  - pixel7pro.wg0 (roaming phone)
</pre>
<br />
<h3 style='display: inline' id='updating-the-mesh-generator-for-ipv6'>Updating the mesh generator for IPv6</h3><br />
<br />
<span>The mesh generator required two modifications to support dual-stack configurations:</span><br />
<br />
<span><strong>1. Address generation (<span class='inlinecode'>address</span> method)</strong></span><br />
<br />
<span>The generator now outputs multiple <span class='inlinecode'>Address</span> directives when IPv6 is present:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">def</font></u></b> address
  <b><u><font color="#000000">return</font></u></b> <font color="#808080">'# No Address = ... for OpenBSD here'</font> <b><u><font color="#000000">if</font></u></b> hosts[myself][<font color="#808080">'os'</font>] == <font color="#808080">'OpenBSD'</font>

  ipv4 = hosts[myself][<font color="#808080">'wg0'</font>][<font color="#808080">'ip'</font>]
  ipv6 = hosts[myself][<font color="#808080">'wg0'</font>][<font color="#808080">'ipv6'</font>]

  <i><font color="silver"># WireGuard supports multiple Address directives for dual-stack</font></i>
  <b><u><font color="#000000">if</font></u></b> ipv6
    <font color="#808080">"Address = #{ipv4}\nAddress = #{ipv6}/64"</font>
  <b><u><font color="#000000">else</font></u></b>
    <font color="#808080">"Address = #{ipv4}"</font>
  <b><u><font color="#000000">end</font></u></b>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<span><strong>2. AllowedIPs generation (<span class='inlinecode'>peers</span> method)</strong></span><br />
<br />
<span>For mesh peers, both IPv4 and IPv6 addresses are included in AllowedIPs:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">if</font></u></b> is_roaming
  allowed_ips = <font color="#808080">'0.0.0.0/0, ::/0'</font>
<b><u><font color="#000000">else</font></u></b>
  <i><font color="silver"># For mesh peers, allow both IPv4 and IPv6 if present</font></i>
  ipv4 = data[<font color="#808080">'wg0'</font>][<font color="#808080">'ip'</font>]
  ipv6 = data[<font color="#808080">'wg0'</font>][<font color="#808080">'ipv6'</font>]
  allowed_ips = ipv6 ? <font color="#808080">"#{ipv4}/32, #{ipv6}/128"</font> : <font color="#808080">"#{ipv4}/32"</font>
<b><u><font color="#000000">end</font></u></b>
</pre>
<br />
<span>Roaming clients keep <span class='inlinecode'>AllowedIPs = 0.0.0.0/0, ::/0</span> to route all traffic (IPv4 and IPv6) through the VPN.</span><br />
<br />
<h3 style='display: inline' id='ipv6-nat-on-openbsd-gateways'>IPv6 NAT on OpenBSD gateways</h3><br />
<br />
<span>To allow roaming clients to access the internet via IPv6, we added NAT66 rules to the OpenBSD gateways&#39; <span class='inlinecode'>pf.conf</span>:</span><br />
<br />
<pre>
# NAT for WireGuard clients to access internet (IPv4)
match out on vio0 from 192.168.2.0/24 to any nat-to (vio0)

# NAT66 for WireGuard clients to access internet (IPv6)
# Uses NPTv6 (Network Prefix Translation) to translate ULA to public IPv6
match out on vio0 inet6 from fd42:beef:cafe:2::/64 to any nat-to (vio0)

# Allow all UDP traffic on WireGuard port (IPv4 and IPv6)
pass in inet proto udp from any to any port 56709
pass in inet6 proto udp from any to any port 56709
</pre>
<br />
<span>OpenBSD&#39;s PF firewall handles IPv6 NAT with the same <span class='inlinecode'>nat-to</span> syntax as IPv4, translating the ULA source addresses to the gateway&#39;s public IPv6 address (many-to-one NAT66; true NPTv6 per RFC 6296 would instead map the prefix one-to-one).</span><br />
<br />
<h3 style='display: inline' id='manual-openbsd-interface-configuration'>Manual OpenBSD interface configuration</h3><br />
<br />
<span>Since OpenBSD doesn&#39;t use the <span class='inlinecode'>Address</span> directive in WireGuard configs, IPv6 must be manually configured on the wg0 interfaces. On <span class='inlinecode'>blowfish</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>rex@blowfish:~ $ doas vi /etc/hostname.wg0
</pre>
<br />
<span>Add the IPv6 address (note the order: IPv6 must be configured before <span class='inlinecode'>up</span>):</span><br />
<br />
<pre>
inet 192.168.2.110 255.255.255.0 NONE
inet6 fd42:beef:cafe:2::110 64
up
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg0.conf
</pre>
<br />
<span>Important: The IPv6 address must be specified before the <span class='inlinecode'>up</span> directive. This ensures the interface has both addresses configured before WireGuard peers are loaded.</span><br />
<br />
<span>Apply the configuration:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>rex@blowfish:~ $ doas sh /etc/netstart wg0
rex@blowfish:~ $ ifconfig wg0 | grep inet6
inet6 fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">110</font> prefixlen <font color="#000000">64</font>
</pre>
<br />
<span>Repeat for <span class='inlinecode'>fishfinger</span> with address <span class='inlinecode'>fd42:beef:cafe:2::111</span>.</span><br />
<br />
<span>After reboot, the interface will automatically come up with both IPv4 and IPv6 addresses. WireGuard peers may take 30-60 seconds to establish handshakes after boot.</span><br />
<br />
<h3 style='display: inline' id='verifying-dual-stack-connectivity'>Verifying dual-stack connectivity</h3><br />
<br />
<span>After regenerating and deploying the configurations, both IPv4 and IPv6 work across the mesh:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># From r0 (Rocky Linux VM)</font></i>
root@r0:~ <i><font color="silver"># ping -c 2 192.168.2.130  # IPv4 to f0</font></i>
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.130</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">64</font> time=<font color="#000000">2.12</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.130</font>: icmp_seq=<font color="#000000">2</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.681</font> ms

root@r0:~ <i><font color="silver"># ping6 -c 2 fd42:beef:cafe:2::130  # IPv6 to f0</font></i>
<font color="#000000">64</font> bytes from fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">130</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">64</font> time=<font color="#000000">2.16</font> ms
<font color="#000000">64</font> bytes from fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">130</font>: icmp_seq=<font color="#000000">2</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.909</font> ms
</pre>
<br />
<span>The dual-stack configuration is backward compatible—hosts without the <span class='inlinecode'>ipv6</span> field in the YAML configuration will continue to generate IPv4-only configs.</span><br />
<br />
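To see this backward compatibility in isolation, here is the dual-stack address logic from the generator as a standalone function, exercised against a stub hosts hash (host names and values here are illustrative):

```ruby
# Standalone version of the generator's dual-stack address logic.
def address_for(host, hosts)
  return '# No Address = ... for OpenBSD here' if hosts[host]['os'] == 'OpenBSD'

  ipv4 = hosts[host]['wg0']['ip']
  ipv6 = hosts[host]['wg0']['ipv6']
  # A missing 'ipv6' field yields nil, falling back to IPv4-only output.
  ipv6 ? "Address = #{ipv4}\nAddress = #{ipv6}/64" : "Address = #{ipv4}"
end

hosts = {
  'f0'     => { 'os'  => 'FreeBSD',
                'wg0' => { 'ip'   => '192.168.2.130/24',
                           'ipv6' => 'fd42:beef:cafe:2::130' } },
  'legacy' => { 'os'  => 'FreeBSD',
                'wg0' => { 'ip' => '192.168.2.99/24' } } # no 'ipv6' field
}

puts address_for('f0', hosts)     # two Address lines (dual-stack)
puts address_for('legacy', hosts) # single Address line (IPv4-only)
```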
<h3 style='display: inline' id='benefits-of-dual-stack'>Benefits of dual-stack</h3><br />
<br />
<span>Adding IPv6 to the mesh network provides:</span><br />
<br />
<ul>
<li>Future-proofing: Ready for IPv6-only services and networks</li>
<li>Compatibility: Dual-stack maintains full IPv4 compatibility</li>
<li>Learning: Hands-on experience with IPv6 networking</li>
<li>Flexibility: Roaming clients can access both IPv4 and IPv6 internet resources</li>
</ul><br />
<h2 style='display: inline' id='happy-wireguard-ing'>Happy WireGuard-ing</h2><br />
<br />
<span>Everything is set up now. For example, on <span class='inlinecode'>f0</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas wg show
interface: wg0
  public key: Jm6YItMt94++dIeOyVi1I9AhNt2qQcryxCZezoX7X2Y=
  private key: (hidden)
  listening port: <font color="#000000">56709</font>

peer: 8PvGZH1NohHpZPVJyjhctBX9xblsNvYBhpg68FsFcns=
  preshared key: (hidden)
  endpoint: <font color="#000000">46.23</font>.<font color="#000000">94.99</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.111</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">111</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">1</font> minute, <font color="#000000">46</font> seconds ago
  transfer: <font color="#000000">124</font> B received, <font color="#000000">1.75</font> KiB sent
  persistent keepalive: every <font color="#000000">25</font> seconds

peer: Xow+d3qVXgUMk4pcRSQ6Fe+vhYBa3VDyHX/4jrGoKns=
  preshared key: (hidden)
  endpoint: <font color="#000000">23.88</font>.<font color="#000000">35.144</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.110</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">110</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">1</font> minute, <font color="#000000">52</font> seconds ago
  transfer: <font color="#000000">124</font> B received, <font color="#000000">1.60</font> KiB sent
  persistent keepalive: every <font color="#000000">25</font> seconds

peer: s3e93XoY7dPUQgLiVO4d8x/SRCFgEew+/wP<font color="#000000">7</font>+zwgehI=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.120</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.120</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">120</font>/<font color="#000000">128</font>

peer: 2htXdNcxzpI2FdPDJy4T4VGtm1wpMEQu1AkQHjNY6F8=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.131</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.131</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">131</font>/<font color="#000000">128</font>

peer: 0Y/H20W8YIbF7DA1sMwMacLI8WS9yG+<font color="#000000">1</font>/QO7m2oyllg=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.122</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.122</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">122</font>/<font color="#000000">128</font>

peer: Hhy9kMPOOjChXV2RA5WeCGs+J0FE3rcNPDw/TLSn7i8=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.121</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.121</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">121</font>/<font color="#000000">128</font>

peer: SlGVsACE1wiaRoGvCR3f7AuHfRS+1jjhS+YwEJ2HvF0=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.132</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.132</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">132</font>/<font color="#000000">128</font>
</pre>
<br />
<span>All the hosts are pingable as well, e.g.:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % foreach peer ( f1 f2 r0 r1 r2 blowfish fishfinger )
foreach? ping -c<font color="#000000">2</font> $peer.wg0
foreach? echo
foreach? end
PING f1.wg0 (<font color="#000000">192.168</font>.<font color="#000000">2.131</font>): <font color="#000000">56</font> data bytes
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.131</font>: icmp_seq=<font color="#000000">0</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.334</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.131</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.260</font> ms

--- f1.wg0 ping statistics ---
<font color="#000000">2</font> packets transmitted, <font color="#000000">2</font> packets received, <font color="#000000">0.0</font>% packet loss
round-trip min/avg/max/stddev = <font color="#000000">0.260</font>/<font color="#000000">0.297</font>/<font color="#000000">0.334</font>/<font color="#000000">0.037</font> ms

PING f2.wg0 (<font color="#000000">192.168</font>.<font color="#000000">2.132</font>): <font color="#000000">56</font> data bytes
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.132</font>: icmp_seq=<font color="#000000">0</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.323</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.132</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.303</font> ms

--- f2.wg0 ping statistics ---
<font color="#000000">2</font> packets transmitted, <font color="#000000">2</font> packets received, <font color="#000000">0.0</font>% packet loss
round-trip min/avg/max/stddev = <font color="#000000">0.303</font>/<font color="#000000">0.313</font>/<font color="#000000">0.323</font>/<font color="#000000">0.010</font> ms

PING r0.wg0 (<font color="#000000">192.168</font>.<font color="#000000">2.120</font>): <font color="#000000">56</font> data bytes
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.120</font>: icmp_seq=<font color="#000000">0</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.716</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.120</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.406</font> ms

--- r0.wg0 ping statistics ---
<font color="#000000">2</font> packets transmitted, <font color="#000000">2</font> packets received, <font color="#000000">0.0</font>% packet loss
round-trip min/avg/max/stddev = <font color="#000000">0.406</font>/<font color="#000000">0.561</font>/<font color="#000000">0.716</font>/<font color="#000000">0.155</font> ms

PING r1.wg0 (<font color="#000000">192.168</font>.<font color="#000000">2.121</font>): <font color="#000000">56</font> data bytes
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.121</font>: icmp_seq=<font color="#000000">0</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.639</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.121</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.629</font> ms

--- r1.wg0 ping statistics ---
<font color="#000000">2</font> packets transmitted, <font color="#000000">2</font> packets received, <font color="#000000">0.0</font>% packet loss
round-trip min/avg/max/stddev = <font color="#000000">0.629</font>/<font color="#000000">0.634</font>/<font color="#000000">0.639</font>/<font color="#000000">0.005</font> ms

PING r2.wg0 (<font color="#000000">192.168</font>.<font color="#000000">2.122</font>): <font color="#000000">56</font> data bytes
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.122</font>: icmp_seq=<font color="#000000">0</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.569</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.122</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">64</font> time=<font color="#000000">0.479</font> ms

--- r2.wg0 ping statistics ---
<font color="#000000">2</font> packets transmitted, <font color="#000000">2</font> packets received, <font color="#000000">0.0</font>% packet loss
round-trip min/avg/max/stddev = <font color="#000000">0.479</font>/<font color="#000000">0.524</font>/<font color="#000000">0.569</font>/<font color="#000000">0.045</font> ms

PING blowfish.wg0 (<font color="#000000">192.168</font>.<font color="#000000">2.110</font>): <font color="#000000">56</font> data bytes
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.110</font>: icmp_seq=<font color="#000000">0</font> ttl=<font color="#000000">255</font> time=<font color="#000000">35.745</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.110</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">255</font> time=<font color="#000000">35.481</font> ms

--- blowfish.wg0 ping statistics ---
<font color="#000000">2</font> packets transmitted, <font color="#000000">2</font> packets received, <font color="#000000">0.0</font>% packet loss
round-trip min/avg/max/stddev = <font color="#000000">35.481</font>/<font color="#000000">35.613</font>/<font color="#000000">35.745</font>/<font color="#000000">0.132</font> ms

PING fishfinger.wg0 (<font color="#000000">192.168</font>.<font color="#000000">2.111</font>): <font color="#000000">56</font> data bytes
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.111</font>: icmp_seq=<font color="#000000">0</font> ttl=<font color="#000000">255</font> time=<font color="#000000">33.992</font> ms
<font color="#000000">64</font> bytes from <font color="#000000">192.168</font>.<font color="#000000">2.111</font>: icmp_seq=<font color="#000000">1</font> ttl=<font color="#000000">255</font> time=<font color="#000000">33.751</font> ms

--- fishfinger.wg0 ping statistics ---
<font color="#000000">2</font> packets transmitted, <font color="#000000">2</font> packets received, <font color="#000000">0.0</font>% packet loss
round-trip min/avg/max/stddev = <font color="#000000">33.751</font>/<font color="#000000">33.872</font>/<font color="#000000">33.992</font>/<font color="#000000">0.120</font> ms
</pre>
<br />
<span>Note that the loop above is a <span class='inlinecode'>tcsh</span> loop, as <span class='inlinecode'>tcsh</span> is the default shell on FreeBSD. Of course, all the other peers can reach each other as well!</span><br />
<br />
<span>After the first ping, VPN tunnels now also show handshakes and the amount of data transferred through them:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas wg show
interface: wg0
  public key: Jm6YItMt94++dIeOyVi1I9AhNt2qQcryxCZezoX7X2Y=
  private key: (hidden)
  listening port: <font color="#000000">56709</font>

peer: 0Y/H20W8YIbF7DA1sMwMacLI8WS9yG+<font color="#000000">1</font>/QO7m2oyllg=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.122</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.122</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">122</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">10</font> seconds ago
  transfer: <font color="#000000">440</font> B received, <font color="#000000">532</font> B sent

peer: Hhy9kMPOOjChXV2RA5WeCGs+J0FE3rcNPDw/TLSn7i8=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.121</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.121</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">121</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">12</font> seconds ago
  transfer: <font color="#000000">440</font> B received, <font color="#000000">564</font> B sent

peer: s3e93XoY7dPUQgLiVO4d8x/SRCFgEew+/wP<font color="#000000">7</font>+zwgehI=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.120</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.120</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">120</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">14</font> seconds ago
  transfer: <font color="#000000">440</font> B received, <font color="#000000">564</font> B sent

peer: SlGVsACE1wiaRoGvCR3f7AuHfRS+1jjhS+YwEJ2HvF0=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.132</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.132</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">132</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">17</font> seconds ago
  transfer: <font color="#000000">472</font> B received, <font color="#000000">564</font> B sent

peer: Xow+d3qVXgUMk4pcRSQ6Fe+vhYBa3VDyHX/4jrGoKns=
  preshared key: (hidden)
  endpoint: <font color="#000000">23.88</font>.<font color="#000000">35.144</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.110</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">110</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">55</font> seconds ago
  transfer: <font color="#000000">472</font> B received, <font color="#000000">596</font> B sent
  persistent keepalive: every <font color="#000000">25</font> seconds

peer: 8PvGZH1NohHpZPVJyjhctBX9xblsNvYBhpg68FsFcns=
  preshared key: (hidden)
  endpoint: <font color="#000000">46.23</font>.<font color="#000000">94.99</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.111</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">111</font>/<font color="#000000">128</font>
  latest handshake: <font color="#000000">55</font> seconds ago
  transfer: <font color="#000000">472</font> B received, <font color="#000000">596</font> B sent
  persistent keepalive: every <font color="#000000">25</font> seconds

peer: 2htXdNcxzpI2FdPDJy4T4VGtm1wpMEQu1AkQHjNY6F8=
  preshared key: (hidden)
  endpoint: <font color="#000000">192.168</font>.<font color="#000000">1.131</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">192.168</font>.<font color="#000000">2.131</font>/<font color="#000000">32</font>, fd42:beef:cafe:<font color="#000000">2</font>::<font color="#000000">131</font>/<font color="#000000">128</font>
</pre>
<br />
<h2 style='display: inline' id='managing-roaming-client-tunnels'>Managing Roaming Client Tunnels</h2><br />
<br />
<span>Since roaming clients like <span class='inlinecode'>earth</span> and <span class='inlinecode'>pixel7pro</span> connect on-demand rather than being always-on like the infrastructure hosts, it&#39;s useful to know how to configure and manage the WireGuard tunnels.</span><br />
<br />
<h3 style='display: inline' id='manual-gateway-failover-configuration'>Manual gateway failover configuration</h3><br />
<br />
<span>The default configuration for roaming clients includes both gateways (blowfish and fishfinger) with <span class='inlinecode'>AllowedIPs = 0.0.0.0/0, ::/0</span>. However, WireGuard doesn&#39;t automatically failover between multiple peers with identical <span class='inlinecode'>AllowedIPs</span> routes. When both gateways are configured this way, WireGuard uses the first peer with a recent handshake. If that gateway goes down, traffic won&#39;t automatically switch to the backup gateway.</span><br />
<br />
<span>To enable manual failover, separate configuration files can be created for roaming clients (earth laptop and pixel7pro phone), each containing only a single gateway peer. This provides explicit control over which gateway handles traffic.</span><br />
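<span>For illustration, such a single-gateway client config could look roughly like this (the <span class='inlinecode'>[Interface]</span> values are placeholders; the peer&#39;s public key and endpoint are blowfish&#39;s from above):</span><br />
<br />
<pre>
[Interface]
PrivateKey = (the client&#39;s private key)
Address = 192.168.2.200/32, fd42:beef:cafe:2::200/128

[Peer]
# Only one gateway peer per file, so routing is explicit
PublicKey = Xow+d3qVXgUMk4pcRSQ6Fe+vhYBa3VDyHX/4jrGoKns=
PresharedKey = (the preshared key)
Endpoint = 23.88.35.144:56709
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25
</pre>
<br />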
<br />
<span>Configuration files for pixel7pro (phone):</span><br />
<br />
<span>Two separate configs in <span class='inlinecode'>/home/paul/git/wireguardmeshgenerator/dist/pixel7pro/etc/wireguard/</span>:</span><br />
<br />
<ul>
<li>wg0-blowfish.conf - Routes all traffic through blowfish gateway (23.88.35.144)</li>
<li>wg0-fishfinger.conf - Routes all traffic through fishfinger gateway (46.23.94.99)</li>
</ul><br />
<span>Generate QR codes for importing into the WireGuard Android app:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>qrencode -t ansiutf8 &lt; dist/pixel7pro/etc/wireguard/wg<font color="#000000">0</font>-blowfish.conf
qrencode -t ansiutf8 &lt; dist/pixel7pro/etc/wireguard/wg<font color="#000000">0</font>-fishfinger.conf
</pre>
<br />
<span>Import both QR codes using the WireGuard app to create two separate tunnel profiles. You can then manually enable/disable each tunnel to select which gateway to use. Only enable one tunnel at a time.</span><br />
<br />
<span>Configuration files for earth (laptop):</span><br />
<br />
<span>Two separate configs in <span class='inlinecode'>/home/paul/git/wireguardmeshgenerator/dist/earth/etc/wireguard/</span>:</span><br />
<br />
<ul>
<li>wg0-blowfish.conf - Routes all traffic through blowfish gateway</li>
<li>wg0-fishfinger.conf - Routes all traffic through fishfinger gateway</li>
</ul><br />
<span>Install both configurations:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>sudo cp dist/earth/etc/wireguard/wg<font color="#000000">0</font>-blowfish.conf /etc/wireguard/
sudo cp dist/earth/etc/wireguard/wg<font color="#000000">0</font>-fishfinger.conf /etc/wireguard/
</pre>
<br />
<span>This approach provides explicit control over which gateway handles roaming client traffic, useful when one gateway needs maintenance or experiences connectivity issues.</span><br />
<br />
<h3 style='display: inline' id='starting-and-stopping-on-earth-fedora-laptop'>Starting and stopping on earth (Fedora laptop)</h3><br />
<br />
<span>On the Fedora laptop, WireGuard is managed via systemd. Using the separate gateway configs:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Start with blowfish gateway</font></i>
earth$ sudo systemctl start wg-quick@wg0-blowfish.service

<i><font color="silver"># Or start with fishfinger gateway</font></i>
earth$ sudo systemctl start wg-quick@wg0-fishfinger.service

<i><font color="silver"># Check tunnel status (example with blowfish gateway)</font></i>
earth$ sudo wg show
interface: wg0
  public key: Mc1CpSS3rbLN9A2w9c75XugQyXUkGPHKI2iCGbh8DRo=
  private key: (hidden)
  listening port: <font color="#000000">56709</font>
  fwmark: <font color="#000000">0xca6c</font>

peer: Xow+d3qVXgUMk4pcRSQ6Fe+vhYBa3VDyHX/4jrGoKns=
  preshared key: (hidden)
  endpoint: <font color="#000000">23.88</font>.<font color="#000000">35.144</font>:<font color="#000000">56709</font>
  allowed ips: <font color="#000000">0.0</font>.<font color="#000000">0.0</font>/<font color="#000000">0</font>, ::/<font color="#000000">0</font>
  latest handshake: <font color="#000000">5</font> seconds ago
  transfer: <font color="#000000">15.89</font> KiB received, <font color="#000000">32.15</font> KiB sent
  persistent keepalive: every <font color="#000000">25</font> seconds
</pre>
<br />
<span>Stopping the tunnel:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>earth$ sudo systemctl stop wg-quick@wg0-blowfish.service
<i><font color="silver"># Or if using fishfinger:</font></i>
earth$ sudo systemctl stop wg-quick@wg0-fishfinger.service

earth$ sudo wg show
<i><font color="silver"># No output - WireGuard interface is down</font></i>
</pre>
<br />
<span>Switching between gateways:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># Switch from blowfish to fishfinger</font></i>
earth$ sudo systemctl stop wg-quick@wg0-blowfish.service
earth$ sudo systemctl start wg-quick@wg0-fishfinger.service
</pre>
<br />
<span>The services remain <span class='inlinecode'>disabled</span> to prevent auto-start on boot, allowing manual control of when the VPN is active and which gateway to use.</span><br />
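<span>To double-check that neither unit auto-starts on boot (standard systemd commands):</span><br />
<br />
<pre>
earth$ systemctl is-enabled wg-quick@wg0-blowfish.service
disabled
earth$ systemctl is-enabled wg-quick@wg0-fishfinger.service
disabled
</pre>
<br />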
<br />
<h3 style='display: inline' id='starting-and-stopping-on-pixel7pro-android-phone'>Starting and stopping on pixel7pro (Android phone)</h3><br />
<br />
<span>On Android using the official WireGuard app, you now have two tunnel profiles (wg0-blowfish and wg0-fishfinger) after importing the QR codes:</span><br />
<br />
<span>Starting a tunnel:</span><br />
<br />
<ul>
<li>1. Open the WireGuard app</li>
<li>2. Tap the toggle switch next to either <span class='inlinecode'>wg0-blowfish</span> or <span class='inlinecode'>wg0-fishfinger</span> tunnel configuration</li>
<li>3. The switch turns blue/green and shows "Active"</li>
<li>4. A key icon appears in the notification bar indicating VPN is active</li>
<li>5. All traffic now routes through the selected gateway</li>
</ul><br />
<span>Stopping the tunnel:</span><br />
<br />
<ul>
<li>1. Open the WireGuard app</li>
<li>2. Tap the toggle switch again to disable it</li>
<li>3. The switch turns gray and shows "Inactive"</li>
<li>4. The notification bar key icon disappears</li>
<li>5. Normal internet routing resumes</li>
</ul><br />
<span>Switching between gateways:</span><br />
<br />
<ul>
<li>1. Disable the currently active tunnel (e.g., wg0-blowfish)</li>
<li>2. Enable the other tunnel (e.g., wg0-fishfinger)</li>
<li>Only enable one tunnel at a time</li>
</ul><br />
<span>Quick toggling from notification:</span><br />
<br />
<ul>
<li>Pull down the notification shade</li>
<li>Tap the WireGuard notification to quickly enable/disable the tunnel without opening the app</li>
</ul><br />
<span>The WireGuard Android app supports automatically activating tunnels based on:</span><br />
<br />
<ul>
<li>Mobile data connection (e.g., enable VPN when on cellular)</li>
<li>WiFi SSID (e.g., disable VPN when on trusted home network)</li>
<li>Ethernet connection status</li>
</ul><br />
<span>These settings can be configured by tapping the pencil icon next to the tunnel name, then scrolling to "Toggle on/off based on" options.</span><br />
<br />
<h3 style='display: inline' id='verifying-connectivity'>Verifying connectivity</h3><br />
<br />
<span>Once the tunnel is active on either device, verify connectivity:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># From earth laptop:</font></i>
earth$ ping -c<font color="#000000">2</font> blowfish.wg0
earth$ ping -c<font color="#000000">2</font> fishfinger.wg0
earth$ curl https://ifconfig.me  <i><font color="silver"># Should show gateway's public IP</font></i>
</pre>
<br />
<span>To check which gateway is active, inspect the transfer statistics with <span class='inlinecode'>sudo wg show</span> on earth and see which peer shows recent handshakes and increasing transfer bytes. On Android, the WireGuard app shows the active tunnel with data transfer statistics.</span><br />
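<span>A quick way to do this from the shell (a sketch; <span class='inlinecode'>latest-handshakes</span> and <span class='inlinecode'>transfer</span> are standard <span class='inlinecode'>wg show</span> subcommands):</span><br />
<br />
<pre>
earth$ sudo wg show all latest-handshakes
earth$ sudo wg show all transfer
</pre>
<br />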
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>Having a mesh network on our hosts is great for securing all the traffic between them for our future k3s setup. For this setup, a self-managed WireGuard mesh network is preferable to Tailscale: it eliminates reliance on a third party, provides full control over the configuration, and reduces unnecessary abstraction and "magic", enabling easier debugging and full ownership of our network.</span><br />
<br />
<span>Read the next post in this series:</span><br />
<br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Terminal multiplexing with `tmux` - Fish edition</title>
        <link href="https://foo.zone/gemfeed/2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html" />
        <id>https://foo.zone/gemfeed/2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html</id>
        <updated>2025-05-02T00:09:23+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the Fish shell edition of a post of mine from last year, which was written for the Z-Shell:</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='terminal-multiplexing-with-tmux---fish-edition'>Terminal multiplexing with <span class='inlinecode'>tmux</span> - Fish edition</h1><br />
<br />
<span class='quote'>Published at 2025-05-02T00:09:23+03:00</span><br />
<br />
<span>This is the Fish shell edition of a post of mine from last year, which was written for the Z-Shell:</span><br />
<br />
<a class='textlink' href='./2024-06-23-terminal-multiplexing-with-tmux.html'>./2024-06-23-terminal-multiplexing-with-tmux.html</a><br />
<br />
<span>Tmux (Terminal Multiplexer) is a powerful, terminal-based tool that manages multiple terminal sessions within a single window. Here are some of its primary features and functionalities:</span><br />
<br />
<ul>
<li>Session management</li>
<li>Window and pane management</li>
<li>Persistent workspaces</li>
<li>Customization</li>
</ul><br />
<a class='textlink' href='https://github.com/tmux/tmux/wiki'>https://github.com/tmux/tmux/wiki</a><br />
<br />
<pre>
            _______                           s
           |.-----.|                           s
           || Tmux||                          s
           ||_.-._||       |\   \\\\__     o          s
           `--)-(--`       | \_/    o \    o          s
          __[=== o]__      &gt; _   (( &lt;_  oo            s
         |:::::::::::|\    | / \__+___/               s
   jgs   `-=========-`()   |/     |/                  s
       mod. by Paul B.
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#terminal-multiplexing-with-tmux---fish-edition'>Terminal multiplexing with <span class='inlinecode'>tmux</span> - Fish edition</a></li>
<li>⇢ <a href='#before-continuing'>Before continuing...</a></li>
<li>⇢ <a href='#shell-aliases'>Shell aliases</a></li>
<li>⇢ <a href='#the-tn-alias---creating-a-new-session'>The <span class='inlinecode'>tn</span> alias - Creating a new session</a></li>
<li>⇢ ⇢ <a href='#cleaning-up-default-sessions-automatically'>Cleaning up default sessions automatically</a></li>
<li>⇢ ⇢ <a href='#renaming-sessions'>Renaming sessions</a></li>
<li>⇢ <a href='#the-ta-alias---attaching-to-a-session'>The <span class='inlinecode'>ta</span> alias - Attaching to a session</a></li>
<li>⇢ <a href='#the-tr-alias---for-a-nested-remote-session'>The <span class='inlinecode'>tx</span> alias - For a nested remote session</a></li>
<li>⇢ ⇢ <a href='#change-of-the-tmux-prefix-for-better-nesting'>Change of the Tmux prefix for better nesting</a></li>
<li>⇢ <a href='#the-ts-alias---searching-sessions-with-fuzzy-finder'>The <span class='inlinecode'>ts</span> alias - Searching sessions with fuzzy finder</a></li>
<li>⇢ <a href='#the-tssh-alias---cluster-ssh-replacement'>The <span class='inlinecode'>tssh</span> alias - Cluster SSH replacement</a></li>
<li>⇢ ⇢ <a href='#the-tmuxtsshfromargument-helper'>The <span class='inlinecode'>tmux::tssh_from_argument</span> helper</a></li>
<li>⇢ ⇢ <a href='#the-tmuxtsshfromfile-helper'>The <span class='inlinecode'>tmux::tssh_from_file</span> helper</a></li>
<li>⇢ ⇢ <a href='#tssh-examples'><span class='inlinecode'>tssh</span> examples</a></li>
<li>⇢ ⇢ <a href='#common-tmux-commands-i-use-in-tssh'>Common Tmux commands I use in <span class='inlinecode'>tssh</span></a></li>
<li>⇢ <a href='#copy-and-paste-workflow'>Copy and paste workflow</a></li>
<li>⇢ <a href='#tmux-configurations'>Tmux configurations</a></li>
</ul><br />
<h2 style='display: inline' id='before-continuing'>Before continuing...</h2><br />
<br />
<span>Before continuing to read this post, I encourage you to get familiar with Tmux first (unless you already know the basics). You can go through the official getting started guide:</span><br />
<br />
<a class='textlink' href='https://github.com/tmux/tmux/wiki/Getting-Started'>https://github.com/tmux/tmux/wiki/Getting-Started</a><br />
<br />
<span>I can also recommend this book (it is the book I used to get started with Tmux):</span><br />
<br />
<a class='textlink' href='https://pragprog.com/titles/bhtmux2/tmux-2/'>https://pragprog.com/titles/bhtmux2/tmux-2/</a><br />
<br />
<span>Over the years, I have built a couple of shell helper functions to optimize my workflows. Tmux is extensively integrated into my daily workflows, both personal and at work. Colleagues have asked me several times about my Tmux config and helper scripts, so I thought it would be neat to blog about them so that everyone interested can make a copy of my configuration and scripts.</span><br />
<br />
<span>The configuration and scripts in this blog post are only the non-work-specific parts. There are more helper scripts, which I only use for work (and aren&#39;t really useful outside of work due to the way servers and clusters are structured there).</span><br />
<br />
<span>Tmux is highly configurable, and I think I am only scratching the surface of what is possible with it. Nevertheless, it may still be useful for you. I also love that Tmux is part of the OpenBSD base system!</span><br />
<br />
<h2 style='display: inline' id='shell-aliases'>Shell aliases</h2><br />
<br />
<span>Since last week, I have been playing a bit with the Fish shell. As a result, I also converted all my Tmux helper scripts (mentioned in this blog post) from Z-Shell to Fish.</span><br />
<br />
<a class='textlink' href='https://fishshell.com'>https://fishshell.com</a><br />
<br />
<span>For the most common Tmux commands I use, I have created the following shell aliases:</span><br />
<br />
<pre>
alias tn &#39;tmux::new&#39;
alias ta &#39;tmux::attach&#39;
alias tx &#39;tmux::remote&#39;
alias ts &#39;tmux::search&#39;
alias tssh &#39;tmux::cluster_ssh&#39;
alias tm tmux
alias tl &#39;tmux list-sessions&#39;
alias foo &#39;tmux::new foo&#39;
alias bar &#39;tmux::new bar&#39;
alias baz &#39;tmux::new baz&#39;
</pre>
<br />
<span>Note the <span class='inlinecode'>tmux::...</span> names; these are custom shell functions and aren&#39;t part of the Tmux distribution. Let&#39;s run through the aliases one by one.</span><br />
<br />
<span>Two of them are pretty straightforward: <span class='inlinecode'>tm</span> is simply a shorthand for <span class='inlinecode'>tmux</span>, so I have to type less, and <span class='inlinecode'>tl</span> lists all Tmux sessions that are currently open. No magic here.</span><br />
<br />
<h2 style='display: inline' id='the-tn-alias---creating-a-new-session'>The <span class='inlinecode'>tn</span> alias - Creating a new session</h2><br />
<br />
<span>The <span class='inlinecode'>tn</span> alias is referencing this function:</span><br />
<br />
<pre>
# Create a new session; if it already exists, attach to it
function tmux::new
    set -l session $argv[1]
    _tmux::cleanup_default
    if test -z "$session"
        tmux::new (string join "" T (date +%s))
    else
        tmux new-session -d -s $session
        tmux -2 attach-session -t $session || tmux -2 switch-client -t $session
    end
end
</pre>
<br />
<span>There is a lot going on here. Let&#39;s have a detailed look at what it is doing. </span><br />
<br />
<span>First, a Tmux session name can be passed to the function as the first argument. The session name is optional; without it, the function generates a default name with <span class='inlinecode'>(string join "" T (date +%s))</span>, which is <span class='inlinecode'>T</span> followed by the current UNIX epoch, e.g. <span class='inlinecode'>T1717133796</span>.</span><br />
<br />
<h3 style='display: inline' id='cleaning-up-default-sessions-automatically'>Cleaning up default sessions automatically</h3><br />
<br />
<span>Note also the call to <span class='inlinecode'>_tmux::cleanup_default</span>; it cleans up all previously opened default sessions that aren&#39;t attached. Those sessions are only temporary, and I had too many of them flying around after a while. So, I decided to auto-delete them when they aren&#39;t attached. If I want to keep a session around, I rename it with the Tmux command <span class='inlinecode'>prefix-key $</span>. This is the cleanup function:</span><br />
<br />
<pre>
function _tmux::cleanup_default
    tmux list-sessions | string match -v -r attached | string match -r &#39;^T[^:]*&#39; | while read -l s
        echo "Killing $s"
        tmux kill-session -t "$s"
    end
end
</pre>
<br />
<span>The cleanup function kills all open Tmux sessions that haven&#39;t been renamed properly yet—but only if they aren&#39;t attached (e.g., don&#39;t run in the foreground in any terminal). Cleaning them up automatically keeps my Tmux sessions as neat and tidy as possible. </span><br />
<br />
<h3 style='display: inline' id='renaming-sessions'>Renaming sessions</h3><br />
<br />
<span>Whenever I am in a temporary session (named <span class='inlinecode'>T....</span>), I may decide that I want to keep this session around. I have to rename the session to prevent the cleanup function from doing its thing. That&#39;s, as mentioned already, easily accomplished with the standard <span class='inlinecode'>prefix-key $</span> Tmux command.</span><br />
<br />
<h2 style='display: inline' id='the-ta-alias---attaching-to-a-session'>The <span class='inlinecode'>ta</span> alias - Attaching to a session</h2><br />
<br />
<span>This alias refers to the following function, which tries to attach to an already-running Tmux session.</span><br />
<br />
<pre>
function tmux::attach
    set -l session $argv[1]
    if test -z "$session"
        tmux attach-session || tmux::new
    else
        tmux attach-session -t $session || tmux::new $session
    end
end
</pre>
<br />
<span>If no session is specified (as the argument of the function), it will try to attach to the first open session. If no Tmux server is running, it will create a new one with <span class='inlinecode'>tmux::new</span>. Otherwise, with a session name given as the argument, it will attach to it. If unsuccessful (e.g., the session doesn&#39;t exist), it will be created and attached to.</span><br />
<br />
<h2 style='display: inline' id='the-tr-alias---for-a-nested-remote-session'>The <span class='inlinecode'>tx</span> alias - For a nested remote session</h2><br />
<br />
<span>This SSHs into the remote server specified and then, remotely on the server itself, starts a nested Tmux session. So we have one Tmux session on the local computer and, inside of it, an SSH connection to a remote server with a Tmux session running again. The benefit of this is that, in case my network connection breaks down, the next time I connect, I can continue my work on the remote server exactly where I left off. The session name is the name of the server being SSHed into. If a session like this already exists, it simply attaches to it.</span><br />
<br />
<pre>
function tmux::remote
    set -l server $argv[1]
    tmux new -s $server "ssh -A -t $server &#39;tmux attach-session || tmux&#39;" || tmux attach-session -d -t $server
end
</pre>
<br />
<h3 style='display: inline' id='change-of-the-tmux-prefix-for-better-nesting'>Change of the Tmux prefix for better nesting</h3><br />
<br />
<span>To make nested Tmux sessions work smoothly, one must change the Tmux prefix key locally or remotely. By default, the Tmux prefix key is <span class='inlinecode'>Ctrl-b</span>, so <span class='inlinecode'>Ctrl-b $</span>, for example, renames the current session. To change the prefix key from the standard <span class='inlinecode'>Ctrl-b</span> to, for example, <span class='inlinecode'>Ctrl-g</span>, you must add this to the <span class='inlinecode'>tmux.conf</span>:</span><br />
<br />
<pre>
set-option -g prefix C-g
</pre>
<br />
<span>This way, when I want to rename the remote Tmux session, I have to use <span class='inlinecode'>Ctrl-g $</span>, and when I want to rename the local Tmux session, I still have to use <span class='inlinecode'>Ctrl-b $</span>. In my case, I have this deployed to all remote servers through a configuration management system (out of scope for this blog post).</span><br />
<br />
<span>There might also be another way around this (without reconfiguring the prefix key), but that is cumbersome to use, as far as I remember. </span><br />
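<span>If I remember correctly, the cumbersome way relies on Tmux binding the prefix key to <span class='inlinecode'>send-prefix</span> by default: pressing <span class='inlinecode'>Ctrl-b</span> twice sends a literal prefix to the inner session, so <span class='inlinecode'>Ctrl-b Ctrl-b $</span> would rename the remote session. The equivalent configuration line would be:</span><br />
<br />
<pre>
bind-key C-b send-prefix
</pre>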
<br />
<h2 style='display: inline' id='the-ts-alias---searching-sessions-with-fuzzy-finder'>The <span class='inlinecode'>ts</span> alias - Searching sessions with fuzzy finder</h2><br />
<br />
<span>Even though <span class='inlinecode'>_tmux::cleanup_default</span> keeps me from leaving trillions of Tmux sessions flying around all the time, at times it can become challenging to find exactly the session I am currently interested in. After a busy workday, I often end up with around twenty sessions on my laptop. This is where fuzzy searching for session names comes in handy, as I often don&#39;t remember the exact session names.</span><br />
<br />
<pre>
function tmux::search
    set -l session (tmux list-sessions | fzf | cut -d: -f1)
    if test -z "$TMUX"
        tmux attach-session -t $session
    else
        tmux switch -t $session
    end
end
</pre>
<br />
<span>All it does is list all currently open sessions in <span class='inlinecode'>fzf</span>, where one of them can be selected through fuzzy search, and then either switch to it (if already inside a session) or attach to it (if not yet in Tmux).</span><br />
<br />
<span>You must install the <span class='inlinecode'>fzf</span> command on your computer for this to work. This is how it looks:</span><br />
<br />
<a href='./terminal-multiplexing-with-tmux/tmux-session-fzf.png'><img alt='Tmux session fuzzy finder' title='Tmux session fuzzy finder' src='./terminal-multiplexing-with-tmux/tmux-session-fzf.png' /></a><br />
<br />
<h2 style='display: inline' id='the-tssh-alias---cluster-ssh-replacement'>The <span class='inlinecode'>tssh</span> alias - Cluster SSH replacement</h2><br />
<br />
<span>Before I used Tmux, I was a heavy user of ClusterSSH, which allowed me to log in to multiple servers at once in a single terminal window and type and run commands on all of them in parallel.</span><br />
<br />
<a class='textlink' href='https://github.com/duncs/clusterssh'>https://github.com/duncs/clusterssh</a><br />
<br />
<span>However, since I started using Tmux, I have retired ClusterSSH: Tmux only needs to run in the terminal, whereas ClusterSSH spawned separate terminal windows, which aren&#39;t easily portable (e.g., from a Linux desktop to macOS). The <span class='inlinecode'>tmux::cluster_ssh</span> function accepts N arguments, where:</span><br />
<br />
<ul>
<li>...the first argument will be the session name (see <span class='inlinecode'>tmux::tssh_from_argument</span> helper function), and all remaining arguments will be server hostnames/FQDNs to connect to simultaneously.</li>
<li>...or, the first argument is a file name containing a list of hostnames/FQDNs (see the <span class='inlinecode'>tmux::tssh_from_file</span> helper function).</li>
</ul><br />
<span>This is the function definition behind the <span class='inlinecode'>tssh</span> alias:</span><br />
<pre>
function tmux::cluster_ssh
    if test -f "$argv[1]"
        tmux::tssh_from_file $argv[1]
        return
    end
    tmux::tssh_from_argument $argv
end
</pre>
<br />
<span>This function is just a wrapper around the more complex <span class='inlinecode'>tmux::tssh_from_file</span> and <span class='inlinecode'>tmux::tssh_from_argument</span> functions, as you have learned already. Most of the magic happens there.</span><br />
<br />
<h3 style='display: inline' id='the-tmuxtsshfromargument-helper'>The <span class='inlinecode'>tmux::tssh_from_argument</span> helper</h3><br />
<br />
<span>This is the most magical helper function we will cover in this post. It looks like this:</span><br />
<br />
<pre>
function tmux::tssh_from_argument
    set -l session $argv[1]
    set first_server_or_container $argv[2]
    set remaining_servers $argv[3..-1]
    if test -z "$first_server_or_container"
        set first_server_or_container $session
    end

    tmux new-session -d -s $session (_tmux::connect_command "$first_server_or_container")
    if not tmux list-sessions | grep -q "^$session:"
        echo "Could not create session $session"
        return 2
    end
    for server_or_container in $remaining_servers
        tmux split-window -t $session "tmux select-layout tiled; $(_tmux::connect_command "$server_or_container")"
    end
    tmux setw -t $session synchronize-panes on
    tmux -2 attach-session -t $session || tmux -2 switch-client -t $session
end
</pre>
<br />
<span>It expects at least two arguments. The first argument is the session name to create for the clustered SSH session; all remaining arguments are server hostnames or FQDNs to connect to. The first server is used to create the initial session, and all remaining ones are added to that session with <span class='inlinecode'>tmux split-window -t $session ...</span>. At the end, we enable synchronized panes by default, so whatever you type is sent to every SSH connection, replicating the neat ClusterSSH feature of running commands on multiple servers simultaneously. Once done, we attach to the session (or switch to it, if already inside Tmux).</span><br />
<br />
<span>Sometimes, I don&#39;t want the synchronized panes behavior and want to switch it off temporarily. I can do that with <span class='inlinecode'>prefix-key p</span> and <span class='inlinecode'>prefix-key P</span> after adding the following to my local <span class='inlinecode'>tmux.conf</span>:</span><br />
<br />
<pre>
bind-key p setw synchronize-panes off
bind-key P setw synchronize-panes on
</pre>
<br />
<h3 style='display: inline' id='the-tmuxtsshfromfile-helper'>The <span class='inlinecode'>tmux::tssh_from_file</span> helper</h3><br />
<br />
<span>This one sets the session name to the file name and then reads a list of servers from that file, passing the list of servers to <span class='inlinecode'>tmux::tssh_from_argument</span> as the arguments. So, this is a neat little wrapper that also enables me to open clustered SSH sessions from an input file.</span><br />
<br />
<pre>
function tmux::tssh_from_file
    set -l serverlist $argv[1]
    set -l session (basename $serverlist | cut -d. -f1)
    tmux::tssh_from_argument $session (awk &#39;{ print $1 }&#39; $serverlist | sed &#39;s/.lan./.lan/g&#39;)
end
</pre>
<br />
<h3 style='display: inline' id='tssh-examples'><span class='inlinecode'>tssh</span> examples</h3><br />
<br />
<span>To open a new session named <span class='inlinecode'>fish</span> and log in to four remote hosts, run this command (note that it is also possible to specify the remote user, as in the last host):</span><br />
<br />
<pre>
$ tssh fish blowfish.buetow.org fishfinger.buetow.org \
    fishbone.buetow.org user@octopus.buetow.org
</pre>
<br />
<span>To open a new session named <span class='inlinecode'>manyservers</span>, put many servers (one FQDN per line) into a file called <span class='inlinecode'>manyservers.txt</span> and simply run:</span><br />
<br />
<pre>
$ tssh manyservers.txt
</pre>
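<span>The <span class='inlinecode'>manyservers.txt</span> file simply contains one hostname or FQDN per line; only the first whitespace-separated column of each line is used (see the <span class='inlinecode'>awk</span> in <span class='inlinecode'>tmux::tssh_from_file</span>). For example, reusing hosts from the previous example:</span><br />
<br />
<pre>
blowfish.buetow.org
fishfinger.buetow.org
fishbone.buetow.org
</pre>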
<br />
<h3 style='display: inline' id='common-tmux-commands-i-use-in-tssh'>Common Tmux commands I use in <span class='inlinecode'>tssh</span></h3><br />
<br />
<span>These are default Tmux commands that I make heavy use of in a <span class='inlinecode'>tssh</span> session:</span><br />
<br />
<ul>
<li>Press <span class='inlinecode'>prefix-key DIRECTION</span> to switch panes. DIRECTION is by default any of the arrow keys, but I also configured Vi keybindings.</li>
<li>Press <span class='inlinecode'>prefix-key &lt;space&gt;</span> to change the pane layout (can be pressed multiple times to cycle through them).</li>
<li>Press <span class='inlinecode'>prefix-key z</span> to zoom in and out of the current active pane.</li>
</ul><br />
<h2 style='display: inline' id='copy-and-paste-workflow'>Copy and paste workflow</h2><br />
<br />
<span>As you will see later in this blog post, I have configured a history limit of 100,000 lines in Tmux so that I can scroll back quite far. One main workflow of mine is to search for text in the Tmux history, select and copy it, and then switch to another window or session and paste it there (e.g., into my text editor to do something with it).</span><br />
<br />
<span>This works by pressing <span class='inlinecode'>prefix-key [</span> to enter Tmux copy mode. From there, I can browse the Tmux history of the current window using either the arrow keys or vi-like navigation (see vi configuration later in this blog post) and the Pg-Dn and Pg-Up keys.</span><br />
<br />
<span>I often search the history backwards with <span class='inlinecode'>prefix-key [</span> followed by a <span class='inlinecode'>?</span>, which opens the Tmux history search prompt.</span><br />
<br />
<span>Once I have identified the terminal text to be copied, I enter visual select mode with <span class='inlinecode'>v</span>, highlight all the text to be copied (using arrow keys or Vi motions), and press <span class='inlinecode'>y</span> to yank it (sorry if this all sounds a bit complicated, but Vim/NeoVim users will know this, as it is pretty much how you do it there as well).</span><br />
<br />
<span>For <span class='inlinecode'>v</span> and <span class='inlinecode'>y</span> to work, the following has to be added to the Tmux configuration file: </span><br />
<br />
<pre>
bind-key -T copy-mode-vi &#39;v&#39; send -X begin-selection
bind-key -T copy-mode-vi &#39;y&#39; send -X copy-selection-and-cancel
</pre>
<br />
<span>Once the text is yanked, I switch to another Tmux window or session where, for example, a text editor is running and paste the yanked text from Tmux into the editor with <span class='inlinecode'>prefix-key ]</span>. Note that when pasting into a modal text editor like Vi or Helix, you would first need to enter insert mode before <span class='inlinecode'>prefix-key ]</span> would paste anything.</span><br />
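<span>By default, yanked text lands only in Tmux&#39;s internal paste buffer. If you also want it in the system clipboard, Tmux&#39;s <span class='inlinecode'>copy-pipe</span> variants can pipe the selection through an external tool. A sketch for an X11 setup with <span class='inlinecode'>xclip</span> installed (adjust the tool for your platform, e.g. <span class='inlinecode'>wl-copy</span> on Wayland or <span class='inlinecode'>pbcopy</span> on macOS):</span><br />
<br />
<pre>
bind-key -T copy-mode-vi 'y' send -X copy-pipe-and-cancel 'xclip -selection clipboard'
</pre>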
<br />
<h2 style='display: inline' id='tmux-configurations'>Tmux configurations</h2><br />
<br />
<span>Some features I have configured directly in Tmux don&#39;t require an external shell alias to function correctly. Let&#39;s walk line by line through my local <span class='inlinecode'>~/.config/tmux/tmux.conf</span>:</span><br />
<br />
<pre>
source ~/.config/tmux/tmux.local.conf

set-option -g allow-rename off
set-option -g history-limit 100000
set-option -g status-bg &#39;#444444&#39;
set-option -g status-fg &#39;#ffa500&#39;
set-option -s escape-time 0
</pre>
<br />
<span>There isn&#39;t much magic happening here yet. I source a <span class='inlinecode'>tmux.local.conf</span>, which I sometimes use to override the default configuration that comes from the configuration management system. It is mostly just an empty file, though, so that Tmux doesn&#39;t throw an error on startup when I don&#39;t use it.</span><br />
<br />
<span>I work with a lot of terminal output, which I also like to search within Tmux. So I set a large enough <span class='inlinecode'>history-limit</span>, enabling me to search backwards in Tmux through up to 100,000 lines of output.</span><br />
<br />
<span>Besides changing some colours (personal taste), I also set <span class='inlinecode'>escape-time</span> to <span class='inlinecode'>0</span>, which is just a workaround: otherwise, my Helix text editor&#39;s <span class='inlinecode'>ESC</span> key takes ages to register within Tmux. I don&#39;t remember the gory details anymore. If everything works fine for you without it, you can leave it out.</span><br />
<br />
<span>The next lines in the configuration file are:</span><br />
<br />
<pre>
set-window-option -g mode-keys vi
bind-key -T copy-mode-vi &#39;v&#39; send -X begin-selection
bind-key -T copy-mode-vi &#39;y&#39; send -X copy-selection-and-cancel
</pre>
<br />
<span>I navigate within Tmux using Vi keybindings, so <span class='inlinecode'>mode-keys</span> is set to <span class='inlinecode'>vi</span>. I use the Helix modal text editor, which is close enough to Vi bindings for simple navigation to feel "native" to me. (By the way, I have been a long-time Vim and NeoVim user, but I eventually switched to Helix. It&#39;s off-topic here, but may be worth another blog post at some point.)</span><br />
<br />
<span>The two <span class='inlinecode'>bind-key</span> commands make it so that I can use <span class='inlinecode'>v</span> and <span class='inlinecode'>y</span> in copy mode, which feels more Vi-like (as already discussed earlier in this post).</span><br />
<br />
<span>The next set of lines in the configuration file are:</span><br />
<br />
<pre>
bind-key h select-pane -L
bind-key j select-pane -D
bind-key k select-pane -U
bind-key l select-pane -R

bind-key H resize-pane -L 5
bind-key J resize-pane -D 5
bind-key K resize-pane -U 5
bind-key L resize-pane -R 5
</pre>
<br />
<span>These allow me to use <span class='inlinecode'>prefix-key h</span>, <span class='inlinecode'>prefix-key j</span>, <span class='inlinecode'>prefix-key k</span>, and <span class='inlinecode'>prefix-key l</span> for switching panes and <span class='inlinecode'>prefix-key H</span>, <span class='inlinecode'>prefix-key J</span>, <span class='inlinecode'>prefix-key K</span>, and <span class='inlinecode'>prefix-key L</span> for resizing the panes. If you don&#39;t know Vi/Vim/NeoVim, the letters <span class='inlinecode'>hjkl</span> are commonly used there for left, down, up, and right, which is also the same for Helix, by the way.</span><br />
<br />
<span>The next set of lines in the configuration file are:</span><br />
<br />
<pre>
bind-key c new-window -c &#39;#{pane_current_path}&#39;
bind-key F new-window -n "session-switcher" "tmux list-sessions | fzf | cut -d: -f1 | xargs tmux switch-client -t"
bind-key T choose-tree
</pre>
<br />
<span>The first binding makes any new window start in the current working directory. The second one is more interesting: it lists all open sessions in the fuzzy finder. I rely heavily on this during my daily workflow to switch between sessions depending on the task, e.g., from a remote cluster SSH session to a local code editor.</span><br />
<br />
<span>The third one, <span class='inlinecode'>choose-tree</span>, opens a tree view in Tmux listing all sessions and windows. This one is handy to get a better overview of what is currently running in any local Tmux session. It looks like this (it also allows me to press a hotkey to switch to a particular Tmux window):</span><br />
<br />
<a href='./terminal-multiplexing-with-tmux/tmux-tree-view.png'><img alt='Tmux session tree view' title='Tmux session tree view' src='./terminal-multiplexing-with-tmux/tmux-tree-view.png' /></a><br />
<br />
<span>The last remaining lines in my configuration file are:</span><br />
<pre>
bind-key p setw synchronize-panes off
bind-key P setw synchronize-panes on
bind-key r source-file ~/.config/tmux/tmux.conf \; display-message "tmux.conf reloaded"
</pre>
<br />
<span>We discussed <span class='inlinecode'>synchronized panes</span> earlier. I use it all the time in clustered SSH sessions. When enabled, all panes (remote SSH sessions) receive the same keystrokes. This is very useful when you want to run the same commands on many servers at once, such as navigating to a common directory, restarting a couple of services at once, or running tools like <span class='inlinecode'>htop</span> to quickly monitor system resources.</span><br />
<br />
<span>The last one reloads my Tmux configuration on the fly.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2026-02-02-tmux-popup-editor-for-cursor-agent-prompts.html'>2026-02-02 A tmux popup editor for Cursor Agent CLI prompts</a><br />
<a class='textlink' href='./2025-05-02-terminal-multiplexing-with-tmux-fish-edition.html'>2025-05-02 Terminal multiplexing with <span class='inlinecode'>tmux</span> - Fish edition (You are currently reading this)</a><br />
<a class='textlink' href='./2024-06-23-terminal-multiplexing-with-tmux.html'>2024-06-23 Terminal multiplexing with <span class='inlinecode'>tmux</span> - Z-Shell edition</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>'When: The Scientific Secrets of Perfect Timing' book notes</title>
        <link href="https://foo.zone/gemfeed/2025-04-19-when-book-notes.html" />
        <id>https://foo.zone/gemfeed/2025-04-19-when-book-notes.html</id>
        <updated>2025-04-19T10:26:05+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>These are my personal book notes from Daniel Pink's 'When: The Scientific Secrets of Perfect Timing.' They are for me, but I hope they might be useful to you too.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='when-the-scientific-secrets-of-perfect-timing-book-notes'>"When: The Scientific Secrets of Perfect Timing" book notes</h1><br />
<br />
<span class='quote'>Published at 2025-04-19T10:26:05+03:00</span><br />
<br />
<span>These are my personal book notes from Daniel Pink&#39;s "When: The Scientific Secrets of Perfect Timing." They are for me, but I hope they might be useful to you too.</span><br />
<br />
<pre>
	  __
 (`/\
 `=\/\ __...--~~~~~-._   _.-~~~~~--...__
  `=\/\               \ /               \\
   `=\/                V                 \\
   //_\___--~~~~~~-._  |  _.-~~~~~~--...__\\
  //  ) (..----~~~~._\ | /_.~~~~----.....__\\
 ===( INK )==========\\|//====================
__ejm\___/________dwb`---`______________________
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#when-the-scientific-secrets-of-perfect-timing-book-notes'>"When: The Scientific Secrets of Perfect Timing" book notes</a></li>
<li>⇢ <a href='#daily-rhythms'>Daily Rhythms</a></li>
<li>⇢ <a href='#optimal-task-timing'>Optimal Task Timing</a></li>
<li>⇢ <a href='#exercise-timing'>Exercise Timing</a></li>
<li>⇢ <a href='#drinking-habits'>Drinking Habits</a></li>
<li>⇢ <a href='#afternoon-challenges-bermuda-triangle'>Afternoon Challenges ("Bermuda Triangle")</a></li>
<li>⇢ <a href='#breaks-and-productivity'>Breaks and Productivity</a></li>
<li>⇢ <a href='#napping'>Napping</a></li>
<li>⇢ <a href='#scheduling-breaks'>Scheduling Breaks</a></li>
<li>⇢ <a href='#final-impressions'>Final Impressions</a></li>
<li>⇢ <a href='#the-midlife-u-curve'>The Midlife U Curve</a></li>
<li>⇢ <a href='#project-management-tips'>Project Management Tips</a></li>
</ul><br />
<span>You are a different kind of organism based on the time of day. For example, school tests show worse results later in the day, especially if there are fewer computers than students available. Every person has a chronotype, such as a late or early peaker, or somewhere in the middle (like most people). You can assess your chronotype here:</span><br />
<br />
<a class='textlink' href='https://www.danpink.com/mctq/'>Chronotype Assessment</a><br />
<br />
<span>Following your chronotype can lead to more happiness and higher job satisfaction.</span><br />
<br />
<h2 style='display: inline' id='daily-rhythms'>Daily Rhythms</h2><br />
<br />
<span>Peak, Trough, Rebound (Recovery): Most people experience these periods throughout the day. It&#39;s best to "eat the frog" or tackle daunting tasks during the peak. A twin peak exists every day, with mornings and early evenings being optimal for most people. Negative moods follow the opposite pattern, peaking in the afternoon. Light helps adjust but isn&#39;t the main driver of our internal clock. Like plants, humans have intrinsic rhythms.</span><br />
<br />
<h2 style='display: inline' id='optimal-task-timing'>Optimal Task Timing</h2><br />
<br />
<ul>
<li>Analytical work requiring sharpness and focus is best at the peak.</li>
<li>Creative work is more effective during non-peak times.</li>
<li>Biorhythms can sway performance by up to twenty percent.</li>
</ul><br />
<h2 style='display: inline' id='exercise-timing'>Exercise Timing</h2><br />
<br />
<span>Exercise in the morning to lose weight; you burn up to twenty percent more fat if you exercise before eating. Exercising after eating aids muscle gain, using the energy from the food. Morning exercises elevate mood, with the effect lasting all day. They also make forming a habit easier. The late afternoon is best for athletic performance due to optimal body temperature, reducing injury risk.</span><br />
<br />
<h2 style='display: inline' id='drinking-habits'>Drinking Habits</h2><br />
<br />
<ul>
<li>Drink water in the morning to counter mild dehydration upon waking.</li>
<li>Delay coffee consumption until cortisol production has peaked, 60 to 90 minutes after waking. This helps avoid building caffeine tolerance.</li>
<li>For an afternoon boost, have coffee once cortisol levels drop.</li>
</ul><br />
<h2 style='display: inline' id='afternoon-challenges-bermuda-triangle'>Afternoon Challenges ("Bermuda Triangle")</h2><br />
<br />
<ul>
<li>Mistakes are more common in hospitals during this period, like incorrect antibiotic prescriptions or skipped handwashing.</li>
<li>Traffic accidents and unfavorable judge decisions occur more frequently in the afternoon.</li>
<li>2:55 pm is the least productive time of the day.</li>
</ul><br />
<h2 style='display: inline' id='breaks-and-productivity'>Breaks and Productivity</h2><br />
<br />
<span>Short, restorative breaks enhance performance. Student exam results improved with a half-hour break beforehand. Even micro-breaks can be beneficial: hourly five-minute walking breaks can increase productivity as much as 30-minute walks do. Breaks involving physical activity boost concentration and productivity, nature-based breaks are more effective than indoor ones, and complete detachment from work during breaks is essential for restoration.</span><br />
<br />
<h2 style='display: inline' id='napping'>Napping</h2><br />
<br />
<span>Short naps (10-20 minutes) significantly enhance mood, alertness, and cognitive performance, improving learning and problem-solving abilities. Napping increases with age, benefiting mood, flow, and overall health. A "nappuccino," or napping after coffee, offers a double boost, as caffeine takes around 25 minutes to kick in.</span><br />
<br />
<h2 style='display: inline' id='scheduling-breaks'>Scheduling Breaks</h2><br />
<br />
<ul>
<li>Track breaks just as you do with tasks—aim for three breaks a day.</li>
<li>Every 25 minutes, look away and daydream for 20 seconds, or engage in short exercises.</li>
<li>Meditating for even three minutes is a highly effective restorative activity.</li>
<li>The "Fresh Start Effect" (e.g., beginning a diet on January 1st or at the start of a new week) impacts motivation, as does recognizing progress: at the end of each day, spend two minutes writing down your accomplishments.</li>
</ul><br />
<h2 style='display: inline' id='final-impressions'>Final Impressions</h2><br />
<br />
<ul>
<li>The concluding experience of a vacation significantly influences overall memories.</li>
<li>Restaurant reviews often hinge on the end of the visit, highlighting extras like wrong bills or additional desserts.</li>
<li>Considering one&#39;s older future self can motivate improvements in the present.</li>
</ul><br />
<h2 style='display: inline' id='the-midlife-u-curve'>The Midlife U Curve</h2><br />
<br />
<span>Life satisfaction tends to dip in midlife, around the forties, and rises again from around age 54.</span><br />
<br />
<h2 style='display: inline' id='project-management-tips'>Project Management Tips</h2><br />
<br />
<ul>
<li>Halfway through a project, there&#39;s a renewed, concentrated work effort (the "uh-oh effect"), which acts like an alarm going off when the project is slightly behind schedule.</li>
<li>Recognizing daily accomplishments can elevate motivation and satisfaction.</li>
</ul><br />
<span>These insights from "When" can guide actions to optimize performance, well-being, and satisfaction across various aspects of life.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other book notes of mine are:</span><br />
<br />
<a class='textlink' href='./2025-11-02-the-courage-to-be-disliked-book-notes.html'>2025-11-02 &#39;The Courage To Be Disliked&#39; book notes</a><br />
<a class='textlink' href='./2025-06-07-a-monks-guide-to-happiness-book-notes.html'>2025-06-07 &#39;A Monk&#39;s Guide to Happiness&#39; book notes</a><br />
<a class='textlink' href='./2025-04-19-when-book-notes.html'>2025-04-19 &#39;When: The Scientific Secrets of Perfect Timing&#39; book notes (You are currently reading this)</a><br />
<a class='textlink' href='./2024-10-24-staff-engineer-book-notes.html'>2024-10-24 &#39;Staff Engineer&#39; book notes</a><br />
<a class='textlink' href='./2024-07-07-the-stoic-challenge-book-notes.html'>2024-07-07 &#39;The Stoic Challenge&#39; book notes</a><br />
<a class='textlink' href='./2024-05-01-slow-productivity-book-notes.html'>2024-05-01 &#39;Slow Productivity&#39; book notes</a><br />
<a class='textlink' href='./2023-11-11-mind-management-book-notes.html'>2023-11-11 &#39;Mind Management&#39; book notes</a><br />
<a class='textlink' href='./2023-07-17-career-guide-and-soft-skills-book-notes.html'>2023-07-17 &#39;Software Developers Career Guide and Soft Skills&#39; book notes</a><br />
<a class='textlink' href='./2023-05-06-the-obstacle-is-the-way-book-notes.html'>2023-05-06 &#39;The Obstacle is the Way&#39; book notes</a><br />
<a class='textlink' href='./2023-04-01-never-split-the-difference-book-notes.html'>2023-04-01 &#39;Never split the difference&#39; book notes</a><br />
<a class='textlink' href='./2023-03-16-the-pragmatic-programmer-book-notes.html'>2023-03-16 &#39;The Pragmatic Programmer&#39; book notes</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</title>
        <link href="https://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html" />
        <id>https://foo.zone/gemfeed/2025-04-05-f3s-kubernetes-with-freebsd-part-4.html</id>
        <updated>2025-12-26T08:51:06+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</h1><br />
<br />
<span class='quote'>Published at 2025-04-04T23:21:01+03:00, last updated Fri 26 Dec 08:51:06 EET 2025</span><br />
<br />
<span>This is the fourth blog post about the f3s series for self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution used on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-4-rocky-linux-bhyve-vms'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</a></li>
<li>⇢ <a href='#basic-bhyve-setup'>Basic Bhyve setup</a></li>
<li>⇢ <a href='#rocky-linux-vms'>Rocky Linux VMs</a></li>
<li>⇢ ⇢ <a href='#iso-download'>ISO download</a></li>
<li>⇢ ⇢ <a href='#vm-configuration'>VM configuration</a></li>
<li>⇢ ⇢ <a href='#vm-installation'>VM installation</a></li>
<li>⇢ ⇢ <a href='#increase-of-the-disk-image'>Increase of the disk image</a></li>
<li>⇢ ⇢ <a href='#connect-to-vnc'>Connect to VNC</a></li>
<li>⇢ <a href='#after-install'>After install</a></li>
<li>⇢ ⇢ <a href='#vm-auto-start-after-host-reboot'>VM auto-start after host reboot</a></li>
<li>⇢ ⇢ <a href='#static-ip-configuration'>Static IP configuration</a></li>
<li>⇢ ⇢ <a href='#permitting-root-login'>Permitting root login</a></li>
<li>⇢ ⇢ <a href='#install-latest-updates'>Install latest updates</a></li>
<li>⇢ <a href='#stress-testing-cpu'>Stress testing CPU</a></li>
<li>⇢ ⇢ <a href='#silly-freebsd-host-benchmark'>Silly FreeBSD host benchmark</a></li>
<li>⇢ ⇢ <a href='#silly-rocky-linux-vm--bhyve-benchmark'>Silly Rocky Linux VM @ Bhyve benchmark</a></li>
<li>⇢ ⇢ <a href='#silly-freebsd-vm--bhyve-benchmark'>Silly FreeBSD VM @ Bhyve benchmark</a></li>
<li>⇢ <a href='#benchmarking-with-ubench'>Benchmarking with <span class='inlinecode'>ubench</span></a></li>
<li>⇢ ⇢ <a href='#freebsd-host-ubench-benchmark'>FreeBSD host <span class='inlinecode'>ubench</span> benchmark</a></li>
<li>⇢ ⇢ <a href='#freebsd-vm--bhyve-ubench-benchmark'>FreeBSD VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</a></li>
<li>⇢ ⇢ <a href='#rocky-linux-vm--bhyve-ubench-benchmark'>Rocky Linux VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</a></li>
<li>⇢ <a href='#update-improving-disk-io-performance-for-etcd'>Update: Improving Disk I/O Performance for etcd</a></li>
<li>⇢ ⇢ <a href='#the-problem'>The Problem</a></li>
<li>⇢ ⇢ <a href='#the-solution-switch-to-nvme-emulation'>The Solution: Switch to NVMe Emulation</a></li>
<li>⇢ ⇢ <a href='#step-1-prepare-the-guest-os'>Step 1: Prepare the Guest OS</a></li>
<li>⇢ ⇢ <a href='#step-2-update-the-bhyve-configuration'>Step 2: Update the Bhyve Configuration</a></li>
<li>⇢ ⇢ <a href='#benchmark-results'>Benchmark Results</a></li>
<li>⇢ ⇢ <a href='#important-notes'>Important Notes</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>In this blog post, we are going to install the Bhyve hypervisor.</span><br />
<br />
<span>The FreeBSD Bhyve hypervisor is a lightweight, modern hypervisor that enables virtualization on FreeBSD systems. Bhyve&#39;s strengths include its minimal overhead, which allows it to achieve near-native performance for virtual machines. It&#39;s efficient and lightweight, leveraging the capabilities of the FreeBSD operating system for performance and network management.</span><br />
<br />
<a class='textlink' href='https://wiki.freebsd.org/bhyve'>https://wiki.freebsd.org/bhyve</a><br />
<br />
<span>Bhyve supports running various guest operating systems, including FreeBSD, Linux, and Windows, on hardware platforms that support hardware virtualization extensions (such as Intel VT-x or AMD-V). In our case, we are going to virtualize Rocky Linux, which will later in this series be used to run k3s.</span><br />
<br />
<h2 style='display: inline' id='check-for-popcnt-cpu-support'>Check for <span class='inlinecode'>POPCNT</span> CPU support</h2><br />
<br />
<span>POPCNT is a CPU instruction that counts the number of set bits (ones) in a binary number. CPU virtualization and Bhyve support for the POPCNT instruction are important because guest operating systems utilize this instruction to perform various tasks more efficiently. If the host CPU supports POPCNT, Bhyve can pass this capability to virtual machines for better performance. Without POPCNT support, some applications might not run or perform sub-optimally in virtualized environments.</span><br />
<br />
<span>To check for <span class='inlinecode'>POPCNT</span> support, run:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % dmesg | grep <font color="#808080">'Features2=.*POPCNT'</font>
  Features2=<font color="#000000">0x7ffafbbf</font>&lt;SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,SDBG,
	FMA,CX16,xTPR,PDCM,PCID,SSE4.<font color="#000000">1</font>,SSE4.<font color="#000000">2</font>,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,
	OSXSAVE,AVX,F16C,RDRAND&gt;
</pre>
<br />
<span>So it&#39;s there! All good.</span><br />
<br />
<h2 style='display: inline' id='basic-bhyve-setup'>Basic Bhyve setup</h2><br />
<br />
<span>For managing the Bhyve VMs, we are using <span class='inlinecode'>vm-bhyve</span>, a tool that is not part of the FreeBSD base system but is available as a ready-to-use package. It eases VM management and removes a lot of manual work. We also install the required package to make Bhyve work with UEFI firmware.</span><br />
<br />
<a class='textlink' href='https://github.com/churchers/vm-bhyve'>https://github.com/churchers/vm-bhyve</a><br />
<br />
<span>The following commands are executed on all three hosts <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span>, and <span class='inlinecode'>f2</span>, where <span class='inlinecode'>re0</span> is the name of the Ethernet interface (which may need to be adjusted if your hardware is different):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas pkg install vm-bhyve bhyve-firmware
paul@f0:~ % doas sysrc vm_enable=YES
vm_enable:  -&gt; YES
paul@f0:~ % doas sysrc vm_dir=zfs:zroot/bhyve
vm_dir:  -&gt; zfs:zroot/bhyve
paul@f0:~ % doas zfs create zroot/bhyve
paul@f0:~ % doas vm init
paul@f0:~ % doas vm switch create public
paul@f0:~ % doas vm switch add public re0
</pre>
<br />
<span>Bhyve stores all its data in the <span class='inlinecode'>bhyve</span> dataset of the <span class='inlinecode'>zroot</span> ZFS pool:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % zfs list | grep bhyve
zroot/bhyve                                   <font color="#000000">1</font>.74M   453G  <font color="#000000">1</font>.74M  /zroot/bhyve
</pre>
<br />
<span>For convenience, we also create this symlink:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas ln -s /zroot/bhyve/ /bhyve

</pre>
<br />
<span>Now, Bhyve is ready to rumble, but no VMs are there yet:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas vm list
NAME  DATASTORE  LOADER  CPU  MEMORY  VNC  AUTO  STATE
</pre>
<br />
<h2 style='display: inline' id='rocky-linux-vms'>Rocky Linux VMs</h2><br />
<br />
<span>As the guest OS for the VMs, I decided to use Rocky Linux.</span><br />
<br />
<span>Using Rocky Linux 9 as a VM-based OS is beneficial primarily because of its long-term support and stable release cycle. This ensures a reliable environment that receives security updates and bug fixes for an extended period, reducing the need for frequent upgrades.</span><br />
<br />
<span>Rocky Linux is community-driven and aims to be fully compatible with enterprise Linux, making it a solid choice for consistency and performance in various deployment scenarios.</span><br />
<br />
<a class='textlink' href='https://rockylinux.org/'>https://rockylinux.org/</a><br />
<br />
<h3 style='display: inline' id='iso-download'>ISO download</h3><br />
<br />
<span>We&#39;re going to install Rocky Linux from the latest minimal ISO:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas vm iso \
 https://download.rockylinux.org/pub/rocky/<font color="#000000">9</font>/isos/x86_64/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso
/zroot/bhyve/.iso/Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso        <font color="#000000">1808</font> MB <font color="#000000">4780</font> kBps 06m28s
paul@f0:/bhyve % doas vm create rocky
</pre>
<br />
<h3 style='display: inline' id='vm-configuration'>VM configuration</h3><br />
<br />
<span>The default Bhyve VM configuration looks like this now:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/bhyve/rocky % cat rocky.conf
loader=<font color="#808080">"bhyveload"</font>
cpu=<font color="#000000">1</font>
memory=256M
network0_type=<font color="#808080">"virtio-net"</font>
network0_switch=<font color="#808080">"public"</font>
disk0_type=<font color="#808080">"virtio-blk"</font>
disk0_name=<font color="#808080">"disk0.img"</font>
uuid=<font color="#808080">"1c4655ac-c828-11ef-a920-e8ff1ed71ca0"</font>
network0_mac=<font color="#808080">"58:9c:fc:0d:13:3f"</font>
</pre>
<br />
<span>The <span class='inlinecode'>uuid</span> and the <span class='inlinecode'>network0_mac</span> differ for each of the three VMs (the ones being installed on <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>).</span><br />
<br />
<span>But to make Rocky Linux boot, the configuration needs some adjustments. For example, as we intend to run the majority of the workload in the k3s cluster on those Linux VMs, we give them beefy specs: 4 CPU cores and 14GB of RAM. So we ran <span class='inlinecode'>doas vm configure rocky</span> and modified the configuration to:</span><br />
<br />
<pre>
guest="linux"
loader="uefi"
uefi_vars="yes"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_vga="io"
uuid="1c45400b-c828-11ef-8871-e8ff1ed71cac"
network0_mac="58:9c:fc:0d:13:3f"
</pre>
<br />
<h3 style='display: inline' id='vm-installation'>VM installation</h3><br />
<br />
<span>To start the installer from the downloaded ISO, we run:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso
Starting rocky
  * found guest <b><u><font color="#000000">in</font></u></b> /zroot/bhyve/rocky
  * booting...

paul@f0:/bhyve/rocky % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO  STATE
rocky  default    uefi    <font color="#000000">4</font>    14G     <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font>  No    Locked (f0.lan.buetow.org)

paul@f0:/bhyve/rocky % doas sockstat -<font color="#000000">4</font> | grep <font color="#000000">5900</font>
root     bhyve       <font color="#000000">6079</font> <font color="#000000">8</font>   tcp4   *:<font color="#000000">5900</font>                *:*
</pre>
<br />
<span>Port 5900 is now also open for VNC connections, so I connected to it with a VNC client and ran through the installation dialogues. This could be done unattended or more automated, but there are only three VMs to install, and the automation doesn&#39;t seem worth it as we do it only once a year or less often.</span><br />
<br />
<h3 style='display: inline' id='increase-of-the-disk-image'>Increase of the disk image</h3><br />
<br />
<span>By default, the VM disk image is only 20G, which is a bit small for our purposes, so we have to stop the VM again, run <span class='inlinecode'>truncate</span> on the image file to enlarge it to 100G, and restart the installation:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/bhyve/rocky % doas vm stop rocky
paul@f0:/bhyve/rocky % doas truncate -s 100G disk0.img
paul@f0:/bhyve/rocky % doas vm install rocky Rocky-<font color="#000000">9.5</font>-x86_64-minimal.iso
</pre>
<br />
<h3 style='display: inline' id='connect-to-vnc'>Connect to VNC</h3><br />
<br />
<span>For the installation, I opened the VNC client on my Fedora laptop (GNOME comes with a simple VNC client) and manually ran through the base installation for each of the VMs. Again, I am sure this could have been automated a bit more, but there were just three VMs, and it wasn&#39;t worth the effort. The three VNC addresses of the VMs were <span class='inlinecode'>vnc://f0:5900</span>, <span class='inlinecode'>vnc://f1:5900</span>, and <span class='inlinecode'>vnc://f2:5900</span>.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-4/1.png'><img src='./f3s-kubernetes-with-freebsd-part-4/1.png' /></a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-4/2.png'><img src='./f3s-kubernetes-with-freebsd-part-4/2.png' /></a><br />
<br />
<span>I primarily selected the default settings (auto partitioning on the 100GB drive and a root user password). After the installation, the VMs were rebooted.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-4/3.png'><img src='./f3s-kubernetes-with-freebsd-part-4/3.png' /></a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-4/4.png'><img src='./f3s-kubernetes-with-freebsd-part-4/4.png' /></a><br />
<br />
<h2 style='display: inline' id='after-install'>After install</h2><br />
<br />
<span>We perform the following steps for all three VMs. The examples below were all executed on <span class='inlinecode'>f0</span> or in the VM <span class='inlinecode'>r0</span> running on <span class='inlinecode'>f0</span>:</span><br />
<br />
<h3 style='display: inline' id='vm-auto-start-after-host-reboot'>VM auto-start after host reboot</h3><br />
<br />
<span>To automatically start the VM on the servers, we add the following to the <span class='inlinecode'>rc.conf</span> on the FreeBSD hosts:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/bhyve/rocky % cat &lt;&lt;END | doas tee -a /etc/rc.conf
vm_list=<font color="#808080">"rocky"</font>
vm_delay=<font color="#808080">"5"</font>
END
</pre>
<br />
<span>The <span class='inlinecode'>vm_delay</span> isn&#39;t strictly required. It makes the host wait 5 seconds before starting each VM, but there is currently only one VM per host. Maybe later, when there are more, this will be useful. After adding these settings, there is now a <span class='inlinecode'>Yes</span> indicator in the <span class='inlinecode'>AUTO</span> column:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas vm list
NAME   DATASTORE  LOADER  CPU  MEMORY  VNC           AUTO     STATE
rocky  default    uefi    <font color="#000000">4</font>    14G     <font color="#000000">0.0</font>.<font color="#000000">0.0</font>:<font color="#000000">5900</font>  Yes [<font color="#000000">1</font>]  Running (<font color="#000000">2063</font>)
</pre>
<br />
<h3 style='display: inline' id='static-ip-configuration'>Static IP configuration</h3><br />
<br />
<span>After that, we change the network configuration of the VMs from DHCP to static IPs. As per the previous post in this series, the three FreeBSD hosts were already in my <span class='inlinecode'>/etc/hosts</span> file:</span><br />
<br />
<pre>
192.168.1.130 f0 f0.lan f0.lan.buetow.org
192.168.1.131 f1 f1.lan f1.lan.buetow.org
192.168.1.132 f2 f2.lan f2.lan.buetow.org
</pre>
<br />
<span>For the Rocky VMs, we add those to the FreeBSD host systems as well:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:/bhyve/rocky % cat &lt;&lt;END | doas tee -a /etc/hosts
<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org
END
</pre>
<br />
<span>And we configure the IPs accordingly on the VMs themselves by opening a root shell via SSH and entering the following commands on each VM:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~] % nmcli connection modify enp0s5 ipv4.address <font color="#000000">192.168</font>.<font color="#000000">1.120</font>/<font color="#000000">24</font>
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.gateway <font color="#000000">192.168</font>.<font color="#000000">1.1</font>
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.dns <font color="#000000">192.168</font>.<font color="#000000">1.1</font>
[root@r0 ~] % nmcli connection modify enp0s5 ipv4.method manual
[root@r0 ~] % nmcli connection down enp0s5
[root@r0 ~] % nmcli connection up enp0s5
[root@r0 ~] % hostnamectl set-hostname r0.lan.buetow.org
[root@r0 ~] % cat &lt;&lt;END &gt;&gt;/etc/hosts
<font color="#000000">192.168</font>.<font color="#000000">1.120</font> r0 r0.lan r0.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.121</font> r1 r1.lan r1.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.122</font> r2 r2.lan r2.lan.buetow.org
END
</pre>
<br />
<span>Where:</span><br />
<br />
<ul>
<li><span class='inlinecode'>192.168.1.120</span> is the IP of the VM itself (here: <span class='inlinecode'>r0.lan.buetow.org</span>)</li>
<li><span class='inlinecode'>192.168.1.1</span> is the address of my home router, which also does DNS.</li>
</ul><br />
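<span>Since the three VMs only differ in the last IP octet and the hostname, the per-VM command lists can also be generated. The following is an optional convenience sketch, not part of the original setup; the interface name <span class='inlinecode'>enp0s5</span> and the addresses are the ones used above:</span><br />
<br />
```shell
#!/bin/sh
# Sketch: print the network configuration commands for VM r$1 (index 0, 1 or 2).
# Assumes interface enp0s5 and the 192.168.1.12x addressing scheme from above.
gen_net_cmds() {
  ip="192.168.1.12$1"
  cat <<EOF
nmcli connection modify enp0s5 ipv4.addresses $ip/24
nmcli connection modify enp0s5 ipv4.gateway 192.168.1.1
nmcli connection modify enp0s5 ipv4.dns 192.168.1.1
nmcli connection modify enp0s5 ipv4.method manual
hostnamectl set-hostname r$1.lan.buetow.org
EOF
}

# Print the commands for all three VMs for copy-pasting into each root shell.
for i in 0 1 2; do
  echo "# --- commands for r$i ---"
  gen_net_cmds "$i"
done
```
<br />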
<h3 style='display: inline' id='permitting-root-login'>Permitting root login</h3><br />
<br />
<span>As these VMs aren&#39;t directly reachable via SSH from the internet, we enable <span class='inlinecode'>root</span> login by adding a line with <span class='inlinecode'>PermitRootLogin yes</span> to <span class='inlinecode'>/etc/ssh/sshd_config</span>.</span><br />
<br />
<span>Once done, we reboot the VM by running <span class='inlinecode'>reboot</span> inside the VM to test whether everything was configured and persisted correctly.</span><br />
<br />
<span>After the reboot, we copy a public SSH key over. For example, I did this from my laptop as follows:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>% <b><u><font color="#000000">for</font></u></b> i <b><u><font color="#000000">in</font></u></b> <font color="#000000">0</font> <font color="#000000">1</font> <font color="#000000">2</font>; <b><u><font color="#000000">do</font></u></b> ssh-copy-id root@r$i.lan.buetow.org; <b><u><font color="#000000">done</font></u></b>
</pre>
<br />
<span>Then, we edit the <span class='inlinecode'>/etc/ssh/sshd_config</span> file again on all three VMs and configure <span class='inlinecode'>PasswordAuthentication no</span> to only allow SSH key authentication from now on.</span><br />
<br />
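<span>The two <span class='inlinecode'>sshd_config</span> edits from this section can also be scripted idempotently. Here is a hedged sketch (the helper name <span class='inlinecode'>set_opt</span> is my own); it is demonstrated on a scratch copy of the config, but on the VMs you would point <span class='inlinecode'>CONF</span> at <span class='inlinecode'>/etc/ssh/sshd_config</span> and follow up with <span class='inlinecode'>sshd -t &amp;&amp; systemctl restart sshd</span>:</span><br />
<br />
```shell
#!/bin/sh
# Sketch: set an sshd option, replacing a possibly commented-out default,
# or appending it if absent. Demonstrated on a scratch copy of the config.
CONF=$(mktemp)
printf '#PermitRootLogin prohibit-password\n#PasswordAuthentication yes\n' > "$CONF"

set_opt() { # $1 = option name, $2 = value
  if grep -Eq "^#?$1 " "$CONF"; then
    sed -E -i "s|^#?$1 .*|$1 $2|" "$CONF"
  else
    printf '%s %s\n' "$1" "$2" >> "$CONF"
  fi
}

set_opt PermitRootLogin yes        # step 1: allow root login for key setup
set_opt PasswordAuthentication no  # step 2: key-only auth after ssh-copy-id
cat "$CONF"
```
<br />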
<h3 style='display: inline' id='install-latest-updates'>Install latest updates</h3><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~] % dnf update
[root@r0 ~] % reboot
</pre>
<br />
<h2 style='display: inline' id='stress-testing-cpu'>Stress testing CPU</h2><br />
<br />
<span>The aim is to verify that Bhyve VMs are CPU-efficient. As I could not find an off-the-shelf benchmarking tool available in the same version for both FreeBSD and Rocky Linux 9, I wrote my own silly CPU benchmarking tool in Go:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">package</font></u></b> main

<b><u><font color="#000000">import</font></u></b> <font color="#808080">"testing"</font>

<b><u><font color="#000000">func</font></u></b> BenchmarkCPUSilly1(b *testing.B) {
	<b><u><font color="#000000">for</font></u></b> i := <font color="#000000">0</font>; i &lt; b.N; i++ {
		_ = i * i
	}
}

<b><u><font color="#000000">func</font></u></b> BenchmarkCPUSilly2(b *testing.B) {
	<b><u><font color="#000000">var</font></u></b> sillyResult <b><font color="#000000">float64</font></b>
	<b><u><font color="#000000">for</font></u></b> i := <font color="#000000">0</font>; i &lt; b.N; i++ {
		sillyResult += <b><font color="#000000">float64</font></b>(i)
		sillyResult *= <b><font color="#000000">float64</font></b>(i)
		divisor := <b><font color="#000000">float64</font></b>(i) + <font color="#000000">1</font>
		<b><u><font color="#000000">if</font></u></b> divisor &gt; <font color="#000000">0</font> {
			sillyResult /= divisor
		}
	}
	_ = sillyResult <i><font color="silver">// to avoid compiler optimization</font></i>
}
</pre>
<br />
<span>You can find the repository here:</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/sillybench'>https://codeberg.org/snonux/sillybench</a><br />
<br />
<h3 style='display: inline' id='silly-freebsd-host-benchmark'>Silly FreeBSD host benchmark</h3><br />
<br />
<span>To install it on FreeBSD, we run:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas pkg install git go
paul@f0:~ % mkdir ~/git &amp;&amp; cd ~/git &amp;&amp; \
  git clone https://codeberg.org/snonux/sillybench &amp;&amp; \
  cd sillybench
</pre>
<br />
<span>And to run it:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~/git/sillybench % go version
go version go1.<font color="#000000">24.1</font> freebsd/amd<font color="#000000">64</font>

paul@f0:~/git/sillybench % go <b><u><font color="#000000">test</font></u></b> -bench=.
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-<font color="#000000">4</font>    <font color="#000000">1000000000</font>               <font color="#000000">0.4022</font> ns/op
BenchmarkCPUSilly2-<font color="#000000">4</font>    <font color="#000000">1000000000</font>               <font color="#000000">0.4027</font> ns/op
PASS
ok      codeberg.org/snonux/sillybench <font color="#000000">0</font>.891s
</pre>
<br />
<h3 style='display: inline' id='silly-rocky-linux-vm--bhyve-benchmark'>Silly Rocky Linux VM @ Bhyve benchmark</h3><br />
<br />
<span>OK, let&#39;s compare this with the Rocky Linux VM running on Bhyve:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~]<i><font color="silver"># dnf install golang git</font></i>
[root@r0 ~]<i><font color="silver"># mkdir ~/git &amp;&amp; cd ~/git &amp;&amp; \</font></i>
  git clone https://codeberg.org/snonux/sillybench &amp;&amp; \
  cd sillybench
</pre>
<br />
<span>And to run it:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 sillybench]<i><font color="silver"># go version</font></i>
go version go1.<font color="#000000">22.9</font> (Red Hat <font color="#000000">1.22</font>.<font color="#000000">9</font>-<font color="#000000">2</font>.el9_5) linux/amd<font color="#000000">64</font>
[root@r0 sillybench]<i><font color="silver"># go test -bench=.</font></i>
goos: linux
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1-<font color="#000000">4</font>    <font color="#000000">1000000000</font>               <font color="#000000">0.4347</font> ns/op
BenchmarkCPUSilly2-<font color="#000000">4</font>    <font color="#000000">1000000000</font>               <font color="#000000">0.4345</font> ns/op
</pre>
<br />
<span>The Linux benchmark is slightly slower than the FreeBSD one. The Go version is also a bit older. I tried the same with the up-to-date version of Go (1.24.x) with similar results. There could be a slight Bhyve overhead, or FreeBSD is just slightly more efficient in this benchmark. Overall, this shows that Bhyve performs excellently.</span><br />
<br />
<h3 style='display: inline' id='silly-freebsd-vm--bhyve-benchmark'>Silly FreeBSD VM @ Bhyve benchmark</h3><br />
<br />
<span>But as I am curious and don&#39;t want to compare apples with bananas, I decided to install a FreeBSD Bhyve VM to run the same silly benchmark in it. I am not going through the details of how to install a FreeBSD Bhyve VM here; you can easily look it up in the documentation.</span><br />
<br />
<span>But here are the results of running the same silly benchmark in a FreeBSD Bhyve VM with the same FreeBSD and Go versions as the host system (I gave the VM 4 vCPUs and 14GB of RAM; the benchmark won&#39;t use that many CPUs or that much memory anyway):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@freebsd:~/git/sillybench <i><font color="silver"># go test -bench=.</font></i>
goos: freebsd
goarch: amd64
pkg: codeberg.org/snonux/sillybench
cpu: Intel(R) N100
BenchmarkCPUSilly1      <font color="#000000">1000000000</font>               <font color="#000000">0.4273</font> ns/op
BenchmarkCPUSilly2      <font color="#000000">1000000000</font>               <font color="#000000">0.4286</font> ns/op
PASS
ok      codeberg.org/snonux/sillybench  <font color="#000000">0</font>.949s
</pre>
<br />
<span>It&#39;s a bit better than Linux! This is not really a scientific benchmark, though, so take the results with a grain of salt!</span><br />
<br />
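<span>For easier comparison, here are the three silly benchmark runs side by side (numbers copied from the outputs above):</span><br />
<br />
<pre>Environment             BenchmarkCPUSilly1  BenchmarkCPUSilly2
FreeBSD host            0.4022 ns/op        0.4027 ns/op
Rocky Linux VM @ Bhyve  0.4347 ns/op        0.4345 ns/op
FreeBSD VM @ Bhyve      0.4273 ns/op        0.4286 ns/op
</pre>
<br />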
<h2 style='display: inline' id='benchmarking-with-ubench'>Benchmarking with <span class='inlinecode'>ubench</span></h2><br />
<br />
<span>Let&#39;s run another, more sophisticated benchmark using <span class='inlinecode'>ubench</span>, the Unix Benchmark Utility available for FreeBSD. It was installed by simply running <span class='inlinecode'>doas pkg install ubench</span>. It can benchmark CPU and memory performance. Here, we limit it to one CPU for the first run with <span class='inlinecode'>-s</span>, and then let it run at full speed (using all available CPUs in parallel) in the second run.</span><br />
<br />
<h3 style='display: inline' id='freebsd-host-ubench-benchmark'>FreeBSD host <span class='inlinecode'>ubench</span> benchmark</h3><br />
<br />
<span>Single CPU:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas ubench -s <font color="#000000">1</font>
Unix Benchmark Utility v.<font color="#000000">0.3</font>
Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc.
Author: Sergei Viznyuk &lt;sv@phystech.com&gt;
http://www.phystech.com/download/ubench.html
FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64
Ubench Single CPU:   <font color="#000000">671010</font> (<font color="#000000">0</font>.40s)
Ubench Single MEM:  <font color="#000000">1705237</font> (<font color="#000000">0</font>.48s)
-----------------------------------
Ubench Single AVG:  <font color="#000000">1188123</font>

</pre>
<br />
<span>All CPUs (with all Bhyve VMs stopped):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas ubench
Unix Benchmark Utility v.<font color="#000000">0.3</font>
Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc.
Author: Sergei Viznyuk &lt;sv@phystech.com&gt;
http://www.phystech.com/download/ubench.html
FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64
Ubench CPU:  <font color="#000000">2660220</font>
Ubench MEM:  <font color="#000000">3095182</font>
--------------------
Ubench AVG:  <font color="#000000">2877701</font>
</pre>
<br />
<h3 style='display: inline' id='freebsd-vm--bhyve-ubench-benchmark'>FreeBSD VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</h3><br />
<br />
<span>Single CPU:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@freebsd:~ <i><font color="silver"># ubench -s 1</font></i>
Unix Benchmark Utility v.<font color="#000000">0.3</font>
Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc.
Author: Sergei Viznyuk &lt;sv@phystech.com&gt;
http://www.phystech.com/download/ubench.html
FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64
Ubench Single CPU:   <font color="#000000">672792</font> (<font color="#000000">0</font>.40s)
Ubench Single MEM:   <font color="#000000">852757</font> (<font color="#000000">0</font>.48s)
-----------------------------------
Ubench Single AVG:   <font color="#000000">762774</font>
</pre>
<br />
<span>Wow, the CPU in the VM was a tiny bit faster than on the host! That was probably just a glitch in the matrix. Memory seems slower, though.</span><br />
<br />
<span>All CPUs:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@freebsd:~ <i><font color="silver"># ubench</font></i>
Unix Benchmark Utility v.<font color="#000000">0.3</font>
Copyright (C) July, <font color="#000000">1999</font> PhysTech, Inc.
Author: Sergei Viznyuk &lt;sv@phystech.com&gt;
http://www.phystech.com/download/ubench.html
FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> FreeBSD <font color="#000000">14.2</font>-RELEASE-p<font color="#000000">1</font> GENERIC amd64
Ubench CPU:  <font color="#000000">2652857</font>
swap_pager: out of swap space
swp_pager_getswapspace(<font color="#000000">27</font>): failed
swap_pager: out of swap space
swp_pager_getswapspace(<font color="#000000">18</font>): failed
Apr  <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">43</font> freebsd kernel: pid <font color="#000000">862</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
swp_pager_getswapspace(<font color="#000000">6</font>): failed
Apr  <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">46</font> freebsd kernel: pid <font color="#000000">863</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
Apr  <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">47</font> freebsd kernel: pid <font color="#000000">864</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
Apr  <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">48</font> freebsd kernel: pid <font color="#000000">865</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
Apr  <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">49</font> freebsd kernel: pid <font color="#000000">861</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
Apr  <font color="#000000">4</font> <font color="#000000">23</font>:<font color="#000000">02</font>:<font color="#000000">51</font> freebsd kernel: pid <font color="#000000">839</font> (ubench), jid <font color="#000000">0</font>, uid <font color="#000000">0</font>, was killed: failed to reclaim memory
</pre>
<br />
<span>The multi-CPU benchmark in the Bhyve VM produced almost identical results to the FreeBSD host system. However, the memory benchmark failed with out-of-swap-space errors. I am unsure why, as the VM has 14GB of RAM, but I did not investigate further.</span><br />
<br />
<span>Also, during the benchmark, I noticed the <span class='inlinecode'>bhyve</span> process on the host was constantly using 399% of the CPU (all 4 CPUs).</span><br />
<br />
<pre>
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 7449 root         14  20    0    14G    78M kqread   2   2:12 399.81% bhyve
</pre>
<br />
<span>Overall, Bhyve adds a small overhead, but the CPU performance difference is negligible. The FreeBSD host is slightly faster than the FreeBSD VM running on Bhyve, but the difference is small enough for our use cases. The memory benchmark seems slightly off, and I&#39;m not sure whether to trust it, especially given the swap errors. Does <span class='inlinecode'>ubench</span>&#39;s memory benchmark use swap space for the memory test? That wouldn&#39;t make much sense, but it might explain some of the difference. Do you have any ideas?</span><br />
<br />
<h3 style='display: inline' id='rocky-linux-vm--bhyve-ubench-benchmark'>Rocky Linux VM @ Bhyve <span class='inlinecode'>ubench</span> benchmark</h3><br />
<br />
<span>Unfortunately, I wasn&#39;t able to find <span class='inlinecode'>ubench</span> in any of the Rocky Linux repositories. So, I skipped this test.</span><br />
<br />
<h2 style='display: inline' id='update-improving-disk-io-performance-for-etcd'>Update: Improving Disk I/O Performance for etcd</h2><br />
<br />
<span class='quote'>Updated: Fri 26 Dec 08:51:23 EET 2025</span><br />
<br />
<span>After running k3s for some time, I noticed frequent etcd leader elections and "apply request took too long" warnings in the logs. Investigation revealed that etcd&#39;s sync writes were extremely slow - around 250 kB/s with the default <span class='inlinecode'>virtio-blk</span> disk emulation. etcd requires fast sync writes (ideally under 10ms fsync latency) for stable operation.</span><br />
<br />
<h3 style='display: inline' id='the-problem'>The Problem</h3><br />
<br />
<span>The k3s logs showed etcd struggling with disk I/O:</span><br />
<br />
<pre>
{"level":"warn","msg":"apply request took too long","took":"4.996516657s","expected-duration":"100ms"}
{"level":"warn","msg":"slow fdatasync","took":"1.328469363s","expected-duration":"1s"}
</pre>
<br />
<span>A simple sync write benchmark confirmed the issue:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~]<i><font color="silver"># dd if=/dev/zero of=/tmp/test bs=4k count=2000 oflag=dsync</font></i>
<font color="#000000">8192000</font> bytes copied, <font color="#000000">31.7058</font> s, <font color="#000000">258</font> kB/s
</pre>
<br />
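Those dd numbers translate directly into per-write latency: 2000 synchronous 4k writes in 31.7 s means each sync round trip took almost 16 ms, well above the roughly 10 ms fsync latency etcd wants. A quick awk check of the arithmetic:

```shell
# 2000 dsync writes took 31.7058 s in total; average latency per write:
awk 'BEGIN { printf "%.1f ms per sync write\n", 31.7058 / 2000 * 1000 }'
```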
<h3 style='display: inline' id='the-solution-switch-to-nvme-emulation'>The Solution: Switch to NVMe Emulation</h3><br />
<br />
<span>Bhyve&#39;s NVMe emulation provides significantly better I/O performance than <span class='inlinecode'>virtio-blk</span>.</span><br />
<br />
<h3 style='display: inline' id='step-1-prepare-the-guest-os'>Step 1: Prepare the Guest OS</h3><br />
<br />
<span>Before changing the disk type, the guest needs NVMe drivers in the initramfs and LVM must be configured to scan all devices (not just those recorded during installation):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~]<i><font color="silver"># cat &gt; /etc/dracut.conf.d/nvme.conf &lt;&lt; EOF</font></i>
add_drivers+=<font color="#808080">" nvme nvme_core "</font>
hostonly=no
EOF

[root@r0 ~]<i><font color="silver"># sed -i 's/# use_devicesfile = 1/use_devicesfile = 0/' /etc/lvm/lvm.conf</font></i>
[root@r0 ~]<i><font color="silver"># dracut -f</font></i>
[root@r0 ~]<i><font color="silver"># shutdown -h now</font></i>
</pre>
<br />
<span>The <span class='inlinecode'>hostonly=no</span> setting ensures the initramfs includes drivers for hardware not currently present. Setting <span class='inlinecode'>use_devicesfile = 0</span> tells LVM to scan all block devices rather than only those recorded in <span class='inlinecode'>/etc/lvm/devices/system.devices</span> - this is important because the device path changes from <span class='inlinecode'>/dev/vda</span> to <span class='inlinecode'>/dev/nvme0n1</span>.</span><br />
<br />
<h3 style='display: inline' id='step-2-update-the-bhyve-configuration'>Step 2: Update the Bhyve Configuration</h3><br />
<br />
<span>On the FreeBSD host, update the VM configuration to use NVMe:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas vm stop rocky
paul@f0:~ % doas vm configure rocky
</pre>
<br />
<span>Change <span class='inlinecode'>disk0_type</span> from <span class='inlinecode'>virtio-blk</span> to <span class='inlinecode'>nvme</span>:</span><br />
<br />
<pre>
disk0_type="nvme"
</pre>
<br />
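For context, the relevant part of the vm-bhyve configuration might then look roughly like this (illustrative values except for `disk0_type`; your loader, CPU, memory, and network settings will differ):

```
loader="uefi"
cpu=4
memory=14G
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"
```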
<span>Then start the VM:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas vm start rocky
</pre>
<br />
<h3 style='display: inline' id='benchmark-results'>Benchmark Results</h3><br />
<br />
<span>After switching to NVMe emulation, the sync write performance improved dramatically:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[root@r0 ~]<i><font color="silver"># dd if=/dev/zero of=/tmp/test bs=4k count=2000 oflag=dsync</font></i>
<font color="#000000">8192000</font> bytes copied, <font color="#000000">0.330718</font> s, <font color="#000000">24.8</font> MB/s
</pre>
<br />
<span>That&#39;s approximately **100x faster** than before (24.8 MB/s vs 258 kB/s).</span><br />
<br />
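The same per-write arithmetic shows what changed: each sync write now completes in well under a millisecond, and the exact ratio of the two dd runtimes is about 96x, in line with the rough 100x figure:

```shell
# 2000 dsync writes in 0.330718 s (NVMe) vs 31.7058 s (virtio-blk)
awk 'BEGIN {
  printf "%.3f ms per sync write\n", 0.330718 / 2000 * 1000
  printf "%.0fx faster than virtio-blk\n", 31.7058 / 0.330718
}'
```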
<span>The etcd metrics also showed healthy fsync latencies:</span><br />
<br />
<pre>
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 347
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.002"} 396
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.004"} 408
</pre>
<br />
<span>Most fsyncs now complete in under 1ms, and there are no more "slow fdatasync" warnings in the logs. The k3s cluster is now stable without spurious leader elections.</span><br />
<br />
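The histogram buckets are cumulative, so le="0.001" counts every fsync that finished within 1 ms. Taking the largest bucket shown as the total (the real series continues up to +Inf), a small awk pipeline over the excerpt above quantifies "most":

```shell
# Share of WAL fsyncs completing within 1 ms, from the cumulative buckets
printf '%s\n' \
  'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 347' \
  'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.004"} 408' |
  awk '/le="0.001"/ { u = $2 } /le="0.004"/ { t = $2 }
       END { printf "%.0f%% of fsyncs within 1 ms\n", 100 * u / t }'
```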
<h3 style='display: inline' id='important-notes'>Important Notes</h3><br />
<br />
<ul>
<li>Do NOT use <span class='inlinecode'>disk0_opts="nocache,direct"</span> with NVMe emulation - in my testing this actually made performance worse.</li>
<li>The guest OS must have NVMe drivers in the initramfs before switching, otherwise it won&#39;t boot.</li>
<li>LVM&#39;s devices file feature (enabled by default in RHEL 9 / Rocky Linux 9) must be disabled to allow booting from a different device path.</li>
</ul><br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>Having Linux VMs running inside FreeBSD&#39;s Bhyve is a solid move for future f3s hosting in my home lab. Bhyve provides a reliable way to manage VMs without much hassle. With Linux VMs, I can tap into all the cool stuff (e.g., Kubernetes, eBPF, systemd) in the Linux world while keeping the steady reliability of FreeBSD.</span><br />
<br />
<span>Future uses (out of scope for this blog series) could include additional VMs for different workloads. For example, how about a Windows or NetBSD VM to tinker with?</span><br />
<br />
<span>This flexibility is great for keeping options open and managing different workloads without overcomplicating things. Overall, it&#39;s a nice setup for getting the most out of my hardware and keeping things running smoothly.</span><br />
<br />
<span>Read the next post in this series:</span><br />
<br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs (You are currently reading this)</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span></span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Sharing on Social Media with Gos v1.0.0</title>
        <link href="https://foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.html" />
        <id>https://foo.zone/gemfeed/2025-03-05-sharing-on-social-media-with-gos.html</id>
        <updated>2025-03-04T21:22:07+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>As you may have noticed, I like to share on Mastodon and LinkedIn all the technical things I find interesting, and this blog post is technically all about that.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='sharing-on-social-media-with-gos-v100'>Sharing on Social Media with Gos v1.0.0</h1><br />
<br />
<span class='quote'>Published at 2025-03-04T21:22:07+02:00</span><br />
<br />
<span>As you may have noticed, I like to share on Mastodon and LinkedIn all the technical things I find interesting, and this blog post is technically all about that.</span><br />
<br />
<a href='./sharing-on-social-media-with-gos/gos.png'><img alt='Gos logo' title='Gos logo' src='./sharing-on-social-media-with-gos/gos.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#sharing-on-social-media-with-gos-v100'>Sharing on Social Media with Gos v1.0.0</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#gos-features'>Gos features</a></li>
<li>⇢ <a href='#installation'>Installation</a></li>
<li>⇢ ⇢ <a href='#prequisites'>Prerequisites</a></li>
<li>⇢ ⇢ <a href='#build-and-install'>Build and install</a></li>
<li>⇢ <a href='#configuration'>Configuration</a></li>
<li>⇢ ⇢ <a href='#configuration-fields'>Configuration fields</a></li>
<li>⇢ ⇢ <a href='#automatically-managed-fields'>Automatically managed fields</a></li>
<li>⇢ <a href='#invoking-gos'>Invoking Gos</a></li>
<li>⇢ ⇢ <a href='#common-flags'>Common flags</a></li>
<li>⇢ ⇢ <a href='#examples'>Examples</a></li>
<li>⇢ <a href='#composing-messages-to-be-posted'>Composing messages to be posted</a></li>
<li>⇢ ⇢ <a href='#basic-structure-of-a-message-file'>Basic structure of a message file</a></li>
<li>⇢ ⇢ <a href='#adding-share-tags-in-the-filename'>Adding share tags in the filename</a></li>
<li>⇢ ⇢ <a href='#using-the-prio-tag'>Using the <span class='inlinecode'>prio</span> tag</a></li>
<li>⇢ ⇢ <a href='#more-tags'>More tags</a></li>
<li>⇢ ⇢ <a href='#the-gosc-binary'>The <span class='inlinecode'>gosc</span> binary</a></li>
<li>⇢ <a href='#how-queueing-works-in-gos'>How queueing works in gos</a></li>
<li>⇢ ⇢ <a href='#step-by-step-queueing-process'>Step-by-step queueing process</a></li>
<li>⇢ ⇢ <a href='#how-message-selection-works-in-gos'>How message selection works in gos</a></li>
<li>⇢ <a href='#database-replication'>Database replication</a></li>
<li>⇢ <a href='#post-summary-as-gemini-gemtext'>Post summary as gemini gemtext</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>Gos is a Go-based replacement (which I wrote) for Buffer.com, providing the ability to schedule and manage social media posts from the command line. It can be run, for example, every time you open a new shell, or only once every N hours.</span><br />
<br />
<span>I used Buffer.com to schedule and post my social media messages for a long time. However, over time the service developed more and more problems: a slow and unintuitive UI, and a free tier that only allows scheduling up to 10 messages. When they started to integrate an AI assistant (which would seemingly randomly pop up in separate JavaScript-powered input boxes), I had enough and decided I had to build my own social sharing tool, and Gos was born.</span><br />
<br />
<a class='textlink' href='https://buffer.com'>https://buffer.com</a><br />
<a class='textlink' href='https://codeberg.org/snonux/gos'>https://codeberg.org/snonux/gos</a><br />
<br />
<h2 style='display: inline' id='gos-features'>Gos features</h2><br />
<br />
<ul>
<li>Mastodon and LinkedIn support.</li>
<li>Dry run mode for testing posts without actually publishing.</li>
<li>Configurable via flags and environment variables.</li>
<li>Easy to integrate into automated workflows.</li>
<li>OAuth2 authentication for LinkedIn.</li>
<li>Image previews for LinkedIn posts.</li>
</ul><br />
<h2 style='display: inline' id='installation'>Installation</h2><br />
<br />
<h3 style='display: inline' id='prequisites'>Prerequisites</h3><br />
<br />
<span>The prerequisites are:</span><br />
<br />
<ul>
<li>Go (version 1.24 or later)</li>
<li>A supported browser, such as Firefox or Chrome, for OAuth2.</li>
</ul><br />
<h3 style='display: inline' id='build-and-install'>Build and install</h3><br />
<br />
<span>Clone the repository:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>git clone https://codeberg.org/snonux/gos.git
cd gos
</pre>
<br />
<span>Build the binaries:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>go build -o gos ./cmd/gos
go build -o gosc ./cmd/gosc
mv gos ~/go/bin/
mv gosc ~/go/bin/
</pre>
<br />
<span>Or, if you want to use the <span class='inlinecode'>Taskfile</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>go-task install
</pre>
<br />
<h2 style='display: inline' id='configuration'>Configuration</h2><br />
<br />
<span>Gos requires a configuration file to store API secrets and OAuth2 credentials for each supported social media platform. The configuration is managed using a Secrets structure, which is stored as a JSON file in <span class='inlinecode'>~/.config/gos/gos.json</span>.</span><br />
<br />
<span>Example Configuration File (<span class='inlinecode'>~/.config/gos/gos.json</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>{
  "MastodonURL": "<font color="#808080">https://mastodon.example.com</font>",
  "MastodonAccessToken": "<font color="#808080">your-mastodon-access-token</font>",
  "LinkedInClientID": "<font color="#808080">your-linkedin-client-id</font>",
  "LinkedInSecret": "<font color="#808080">your-linkedin-client-secret</font>",
  "LinkedInRedirectURL": "<font color="#808080">http://localhost:8080/callback</font>"
}
</pre>
<br />
<h3 style='display: inline' id='configuration-fields'>Configuration fields</h3><br />
<br />
<ul>
<li><span class='inlinecode'>MastodonURL</span>: The base URL of the Mastodon instance you are using (e.g., https://mastodon.social).</li>
<li><span class='inlinecode'>MastodonAccessToken</span>: Your access token for the Mastodon API, which is used to authenticate your posts.</li>
<li><span class='inlinecode'>LinkedInClientID</span>: The client ID for your LinkedIn app, which is needed for OAuth2 authentication.</li>
<li><span class='inlinecode'>LinkedInSecret</span>: The client secret for your LinkedIn app.</li>
<li><span class='inlinecode'>LinkedInRedirectURL</span>: The redirect URL configured for handling OAuth2 responses.</li>
<li><span class='inlinecode'>LinkedInAccessToken</span>: Gos will automatically update this after successful OAuth2 authentication with LinkedIn.</li>
<li><span class='inlinecode'>LinkedInPersonID</span>: Gos will automatically update this after successful OAuth2 authentication with LinkedIn.</li>
</ul><br />
<h3 style='display: inline' id='automatically-managed-fields'>Automatically managed fields</h3><br />
<br />
<span>Once you finish the OAuth2 setup (after the initial run of <span class='inlinecode'>gos</span>), some fields, such as <span class='inlinecode'>LinkedInAccessToken</span> and <span class='inlinecode'>LinkedInPersonID</span>, will be filled in automatically. To check that everything works without actually posting anything, you can run the app in dry run mode with the <span class='inlinecode'>--dry</span> option. After OAuth2 succeeds, the configuration file is updated with <span class='inlinecode'>LinkedInAccessToken</span> and <span class='inlinecode'>LinkedInPersonID</span>. If the access token expires, Gos will go through the OAuth2 process again.</span><br />
<br />
<h2 style='display: inline' id='invoking-gos'>Invoking Gos</h2><br />
<br />
<span>Gos is a command-line tool for posting updates to multiple social media platforms. You can run it with various flags to customize its behaviour, such as posting in dry run mode, limiting posts by size, or targeting specific platforms.</span><br />
<br />
<span>Flags control the tool&#39;s behavior. Below are several common ways to invoke Gos and descriptions of the available flags.</span><br />
<br />
<h3 style='display: inline' id='common-flags'>Common flags</h3><br />
<br />
<ul>
<li><span class='inlinecode'>-dry</span>: Run the application in dry run mode, simulating operations without making any changes.</li>
<li><span class='inlinecode'>-version</span>: Display the current version of the application.</li>
<li><span class='inlinecode'>-compose</span>: Compose a new entry. Default is set by <span class='inlinecode'>composeEntryDefault</span>.</li>
<li><span class='inlinecode'>-gosDir</span>: Specify the directory for Gos&#39; queue and database files. The default is <span class='inlinecode'>~/.gosdir</span>.</li>
<li><span class='inlinecode'>-cacheDir</span>: Specify the directory for Gos&#39; cache. The default is based on the <span class='inlinecode'>gosDir</span> path.</li>
<li><span class='inlinecode'>-browser</span>: Choose the browser for OAuth2 processes. The default is "firefox".</li>
<li><span class='inlinecode'>-configPath</span>: Path to the configuration file. Default is <span class='inlinecode'>~/.config/gos/gos.json</span>.</li>
<li><span class='inlinecode'>-platforms</span>: The enabled platforms and their post size limits. The default is "Mastodon:500,LinkedIn:1000".</li>
<li><span class='inlinecode'>-target</span>: Target number of posts per week. The default is 2.</li>
<li><span class='inlinecode'>-minQueued</span>: Minimum number of queued items before a warning message is printed. The default is 4.</li>
<li><span class='inlinecode'>-maxDaysQueued</span>: Maximum number of days&#39; worth of queued posts before the target increases and pauseDays decreases. The default is 365.</li>
<li><span class='inlinecode'>-pauseDays</span>: Number of days until the next post can be submitted. The default is 3.</li>
<li><span class='inlinecode'>-runInterval</span>: Number of hours until the next post run. The default is 12.</li>
<li><span class='inlinecode'>-lookback</span>: The number of days to look back in time to review posting history. The default is 30.</li>
<li><span class='inlinecode'>-geminiSummaryFor</span>: Generate a Gemini Gemtext format summary specifying months as a comma-separated string.</li>
<li><span class='inlinecode'>-geminiCapsules</span>: Comma-separated list of Gemini capsules. Used to detect Gemtext links.</li>
<li><span class='inlinecode'>-gemtexterEnable</span>: Add special tags for Gemtexter, the static site generator, to the Gemini Gemtext summary.</li>
<li><span class='inlinecode'>-dev</span>: For internal development purposes only.</li>
</ul><br />
<h3 style='display: inline' id='examples'>Examples</h3><br />
<br />
<span>*Dry run mode*</span><br />
<br />
<span>Dry run mode lets you simulate the entire posting process without actually sending the posts. This is useful for testing configurations or seeing what would happen before making real posts.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>./gos --dry
</pre>
<br />
<span>*Normal run*</span><br />
<br />
<span>Sharing to all platforms is as simple as the following (assuming it is configured correctly):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>./gos 
</pre>
<br />
<span>:-)</span><br />
<br />
<a href='./sharing-on-social-media-with-gos/gos-screenshot.png'><img alt='Gos Screenshot' title='Gos Screenshot' src='./sharing-on-social-media-with-gos/gos-screenshot.png' /></a><br />
<br />
<span>However, you will notice that no messages are queued for posting yet (unlike in the screenshot!). Relax and read on...</span><br />
<br />
<h2 style='display: inline' id='composing-messages-to-be-posted'>Composing messages to be posted</h2><br />
<br />
<span>To post messages using Gos, you need to create text files containing the posts&#39; content. These files are placed inside the directory specified by the <span class='inlinecode'>--gosDir</span> flag (the default directory is <span class='inlinecode'>~/.gosdir</span>). Each text file represents a single post and must have the .txt extension. You can also run <span class='inlinecode'>gos --compose</span> to compose a new entry; it simply opens a new text file in <span class='inlinecode'>gosDir</span>.</span><br />
<br />
<h3 style='display: inline' id='basic-structure-of-a-message-file'>Basic structure of a message file</h3><br />
<br />
<span>Each text file should contain the message you want to post on the specified platforms. That&#39;s it. Example of a Basic Post File <span class='inlinecode'>~/.gosdir/samplepost.txt</span>:</span><br />
<br />
<pre>
This is a sample message to be posted on social media platforms.

Maybe add a link here: https://foo.zone

#foo #cool #gos #golang
</pre>
<br />
<span>The message is just arbitrary text, and, besides inline share tags (see later in this document) at the beginning, Gos does not parse any of the content other than ensuring the overall allowed size for the social media platform isn&#39;t exceeded. If it exceeds the limit, Gos will prompt you to edit the post using your standard text editor (as specified by the <span class='inlinecode'>EDITOR</span> environment variable). When posting, all the hyperlinks, hashtags, etc., are interpreted by the social platforms themselves (e.g., Mastodon, LinkedIn).</span><br />
<br />
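If you want to check a draft yourself before Gos does, a character count is enough. This sketch assumes the default 500-character Mastodon limit and a throwaway draft file:

```shell
# Pre-flight check of a draft against Mastodon's default 500-character limit
draft=$(mktemp)
printf 'Hello from Gos\n' > "$draft"
chars=$(wc -m < "$draft")
if [ "$chars" -le 500 ]; then
  echo "fits ($chars chars)"
else
  echo "too long ($chars chars)"
fi
```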
<h3 style='display: inline' id='adding-share-tags-in-the-filename'>Adding share tags in the filename</h3><br />
<br />
<span>You can control which platforms a post is shared to, and manage other behaviors, using tags embedded in the filename. Add tags in the format <span class='inlinecode'>share:platform1:-platform2</span> to target specific platforms within the filename. This instructs Gos to share the message only to <span class='inlinecode'>platform1</span> (e.g., Mastodon) and explicitly exclude <span class='inlinecode'>platform2</span> (e.g., LinkedIn). You can include multiple platforms by listing them after <span class='inlinecode'>share:</span>, separated by a <span class='inlinecode'>:</span>. Prefix a platform with <span class='inlinecode'>-</span> to exclude it.</span><br />
<br />
<span>Currently, only <span class='inlinecode'>linkedin</span> and <span class='inlinecode'>mastodon</span> are supported, and the shortcuts <span class='inlinecode'>li</span> and <span class='inlinecode'>ma</span> also work.</span><br />
<br />
<span>**Examples:**</span><br />
<br />
<ul>
<li>To share only on Mastodon: <span class='inlinecode'>~/.gosdir/foopost.share:mastodon.txt</span></li>
<li>To exclude sharing on LinkedIn: <span class='inlinecode'>~/.gosdir/foopost.share:-linkedin.txt</span></li>
<li>To explicitly share on both LinkedIn and Mastodon: <span class='inlinecode'>~/.gosdir/foopost.share:linkedin:mastodon.txt</span></li>
<li>To explicitly share only on LinkedIn and exclude Mastodon: <span class='inlinecode'>~/.gosdir/foopost.share:linkedin:-mastodon.txt</span></li>
</ul><br />
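The share tag is just a dot-delimited component of the filename, so you can pull it out with plain shell parameter expansion. This is only an illustration of the naming scheme, not Gos' actual parsing code:

```shell
# Extract the share tag from a queue filename (illustration only)
f='foopost.share:linkedin:-mastodon.txt'
tags=${f#*share:}   # drop everything up to and including "share:"
tags=${tags%.txt}   # drop the extension
echo "$tags"
```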
<span>Besides encoding share tags in the filename, they can also be embedded within the <span class='inlinecode'>.txt</span> file content to be queued. For example, a file named <span class='inlinecode'>~/.gosdir/foopost.txt</span> with the following content:</span><br />
<br />
<pre>
share:mastodon The content of the post here
</pre>
<br />
<span>or</span><br />
<br />
<pre>
share:mastodon

The content of the post is here https://some.foo/link

#some #hashtags
</pre>
<br />
<span>Gos will parse this content, extract the tags, and queue it as <span class='inlinecode'>~/.gosdir/db/platforms/mastodon/foopost.share:mastodon.extracted.txt....</span> (see how post queueing works later in this document).</span><br />
<br />
<h3 style='display: inline' id='using-the-prio-tag'>Using the <span class='inlinecode'>prio</span> tag</h3><br />
<br />
<span>Gos picks a queued message at random, without any specific order. However, you can assign a higher priority to a message: messages with a priority tag are posted first, and messages without one are posted last. If multiple messages carry the priority tag, a random message is selected from among them.</span><br />
<br />
<span>*Examples using the Priority tag:* </span><br />
<br />
<ul>
<li>To share only on Mastodon: <span class='inlinecode'>~/.gosdir/foopost.prio.share:mastodon.txt</span></li>
<li>To not share on LinkedIn: <span class='inlinecode'>~/.gosdir/foopost.prio.share:-linkedin.txt</span></li>
<li>To explicitly share on both: <span class='inlinecode'>~/.gosdir/foopost.prio.share:linkedin:mastodon.txt</span></li>
<li>To explicitly share on only linkedin: <span class='inlinecode'>~/.gosdir/foopost.prio.share:linkedin:-mastodon.txt</span></li>
</ul><br />
<span>There is more: you can also use the <span class='inlinecode'>soon</span> tag. It works like the <span class='inlinecode'>prio</span> tag, but at one priority level lower.</span><br />
<br />
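The precedence can be sketched in shell: prio-tagged files form the candidate pool first, then soon-tagged ones, then everything else, and a random candidate is picked from the pool. This is a rough model of the behaviour described above, not Gos' actual implementation:

```shell
# Rough sketch of the candidate pool: prio first, then soon, then the rest
dir=$(mktemp -d)
cd "$dir"
touch alpha.prio.txt beta.soon.txt gamma.txt
if ls ./*.prio.* >/dev/null 2>&1; then
  pool=$(ls ./*.prio.*)
elif ls ./*.soon.* >/dev/null 2>&1; then
  pool=$(ls ./*.soon.*)
else
  pool=$(ls)
fi
echo "$pool"   # here the pool contains only the prio-tagged file
```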
<h3 style='display: inline' id='more-tags'>More tags</h3><br />
<br />
<ul>
<li>A <span class='inlinecode'>.ask.</span> in the filename will prompt you to choose whether to queue, edit, or delete a file before queuing it.</li>
<li>A <span class='inlinecode'>.now.</span> in the filename will schedule a post immediately, regardless of the target status.</li>
</ul><br />
<span>So you could also have filenames like those: </span><br />
<br />
<ul>
<li><span class='inlinecode'>~/.gosdir/foopost.ask.txt</span></li>
<li><span class='inlinecode'>~/.gosdir/foopost.now.txt</span></li>
<li><span class='inlinecode'>~/.gosdir/foopost.ask.share:mastodon.txt</span></li>
<li><span class='inlinecode'>~/.gosdir/foopost.ask.prio.share:mastodon.txt</span></li>
<li><span class='inlinecode'>~/.gosdir/foopost.ask.now.share:-mastodon.txt</span></li>
<li><span class='inlinecode'>~/.gosdir/foopost.now.share:-linkedin.txt</span></li>
</ul><br />
<span>etc...</span><br />
<br />
<span>All of the above also works with embedded tags. E.g.:</span><br />
<br />
<pre>
share:mastodon,ask,prio Hello world :-)
</pre>
<br />
<span>or </span><br />
<br />
<pre>
share:mastodon,ask,prio

Hello World :-)
</pre>
<br />
<h3 style='display: inline' id='the-gosc-binary'>The <span class='inlinecode'>gosc</span> binary</h3><br />
<br />
<span><span class='inlinecode'>gosc</span> stands for Gos Composer and simply launches your <span class='inlinecode'>$EDITOR</span> on a new text file in the <span class='inlinecode'>gosDir</span>. It&#39;s the same as running <span class='inlinecode'>gos --compose</span>: a quick way of composing new posts. Once composed, it asks for confirmation on whether the message should be queued.</span><br />
<br />
<h2 style='display: inline' id='how-queueing-works-in-gos'>How queueing works in gos</h2><br />
<br />
<span>When you place a message file in the <span class='inlinecode'>gosDir</span>, Gos processes it by moving the message through a queueing system before posting it to the target social media platforms. A message&#39;s lifecycle includes several key stages, from creation to posting, all managed through the <span class='inlinecode'>./db/platforms/PLATFORM</span> directories.</span><br />
<br />
<h3 style='display: inline' id='step-by-step-queueing-process'>Step-by-step queueing process</h3><br />
<br />
<span>1. Inserting a Message into <span class='inlinecode'>gosDir</span>: You start by creating a text file that represents your post (e.g., <span class='inlinecode'>foo.txt</span>) and placing it in the <span class='inlinecode'>gosDir</span>. When Gos runs, this file is processed. The easiest way is to use <span class='inlinecode'>gosc</span> here.</span><br />
<br />
<span>2. Moving to the Queue: Upon running Gos, the tool identifies the message in the <span class='inlinecode'>gosDir</span> and places it into the queue for the specified platform. The message is moved into the appropriate directory for each platform in <span class='inlinecode'>./db/platforms/PLATFORM</span>. During this stage, the message file is renamed to include a timestamp indicating when it was queued and given a <span class='inlinecode'>.queued</span> extension.</span><br />
<br />
<span>Example: If a message is queued for LinkedIn, the filename might look like this:</span><br />
<br />
<pre>
~/.gosdir/db/platforms/linkedin/foo.share:-mastodon.txt.20241022-102343.queued
</pre>
<br />
<span>3. Posting the Message: Once a message is placed in the queue, Gos posts it to the specified social media platforms. </span><br />
<br />
<span>4. Renaming to <span class='inlinecode'>.posted</span>: After a message is successfully posted to a platform, the corresponding <span class='inlinecode'>.queued</span> file is renamed to have a <span class='inlinecode'>.posted</span> extension, and the filename timestamp is also updated. This signals that the post has been processed and published.</span><br />
<br />
<span>Example: After a successful post to LinkedIn, the message file might look like this:</span><br />
<br />
<pre>
./db/platforms/linkedin/foo.share:-mastodon.txt.20241112-121323.posted
</pre>
<br />
<h3 style='display: inline' id='how-message-selection-works-in-gos'>How message selection works in gos</h3><br />
<br />
<span>Gos decides which messages to post using a combination of priority, platform-specific tags, and timing rules. The message selection process ensures that messages are posted according to your configured cadence and targets while respecting pauses between posts and previously met goals.</span><br />
<br />
<span>The key factors in message selection are:</span><br />
<br />
<ul>
<li>Target Number of Posts Per Week: The <span class='inlinecode'>-target</span> flag defines how many posts per week should be made to a specific platform. This target helps Gos manage the posting rate, ensuring that the right number of posts are made without exceeding the desired frequency. </li>
<li>Post History Lookback: The <span class='inlinecode'>-lookback</span> flag tells Gos how many days back to look in the post history to calculate whether the weekly post target has already been met. It ensures that previously posted content is considered before deciding to queue up another message.</li>
<li>Message Priority: Messages with no priority value are processed after those with priority. If two messages have the same priority, one is selected randomly.</li>
<li>Pause Between Posts: The <span class='inlinecode'>-pauseDays</span> flag allows you to specify a minimum number of days to wait between posts for the same platform. This prevents oversaturation of content and ensures that posts are spread out over time.</li>
</ul><br />
<h2 style='display: inline' id='database-replication'>Database replication</h2><br />
<br />
<span>I simply use Syncthing to back up and sync my <span class='inlinecode'>gosDir</span>. Note that I run Gos on my personal laptop; there is no need to run it from a server.</span><br />
<br />
<a class='textlink' href='https://syncthing.net'>https://syncthing.net</a><br />
<br />
<h2 style='display: inline' id='post-summary-as-gemini-gemtext'>Post summary as gemini gemtext</h2><br />
<br />
<span>For my blog, I want to post a summary of all the social messages posted over the last couple of months. For an example, have a look here:</span><br />
<br />
<a class='textlink' href='./2025-01-01-posts-from-october-to-december-2024.html'>./2025-01-01-posts-from-october-to-december-2024.html</a><br />
<br />
<span>To accomplish this, run:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>gos --geminiSummaryFor <font color="#000000">202410</font>,<font color="#000000">202411</font>,<font color="#000000">202412</font>
</pre>
<br />
<span>This outputs the summary for the three specified months, as shown in the example. The summary includes posts from all social media networks but removes duplicates.</span><br />
<br />
<span>Also, add the <span class='inlinecode'>--gemtexterEnable</span> flag, if you are using Gemtexter:</span><br />
<br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>gos --gemtexterEnable --geminiSummaryFor <font color="#000000">202410</font>,<font color="#000000">202411</font>,<font color="#000000">202412</font>
</pre>
<br />
<a class='textlink' href='https://codeberg.org/snonux/gemtexter'>Gemtexter</a><br />
<br />
<span>If some HTTP links translate directly to the Geminispace for certain capsules, specify those Gemini capsules as a comma-separated list as follows:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>gos --gemtexterEnable --geminiSummaryFor <font color="#000000">202410</font>,<font color="#000000">202411</font>,<font color="#000000">202412</font> --geminiCapsules <font color="#808080">"foo.zone,paul.buetow.org"</font>
</pre>
<br />
<span>It will then also generate Gemini Gemtext links in the summary page and flag them with <span class='inlinecode'>(Gemini)</span>.</span><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>Overall, this was a fun little Go project with practical use for me personally. I hope you also had fun reading this, and maybe you will use it as well.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Random Weird Things - Part Ⅱ</title>
        <link href="https://foo.zone/gemfeed/2025-02-08-random-weird-things-ii.html" />
        <id>https://foo.zone/gemfeed/2025-02-08-random-weird-things-ii.html</id>
        <updated>2025-02-08T11:06:16+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. This is the second run.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='random-weird-things---part-'>Random Weird Things - Part Ⅱ</h1><br />
<br />
<span class='quote'>Published at 2025-02-08T11:06:16+02:00</span><br />
<br />
<span>Every so often, I come across random, weird, and unexpected things on the internet. I thought it would be neat to share them here from time to time. This is the second run.</span><br />
<br />
<a class='textlink' href='./2024-07-05-random-weird-things.html'>2024-07-05 Random Weird Things - Part Ⅰ</a><br />
<a class='textlink' href='./2025-02-08-random-weird-things-ii.html'>2025-02-08 Random Weird Things - Part Ⅱ (You are currently reading this)</a><br />
<a class='textlink' href='./2025-08-15-random-weird-things-iii.html'>2025-08-15 Random Weird Things - Part Ⅲ</a><br />
<br />
<pre>
/\_/\           /\_/\
( o.o ) WHOA!! ( o.o )
&gt; ^ &lt;           &gt; ^ &lt;
/   \    MOEEW! /   \
/______\       /______\
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#random-weird-things---part-'>Random Weird Things - Part Ⅱ</a></li>
<li>⇢ <a href='#11-the-sqlite-codebase-is-a-gem'>11. The SQLite codebase is a gem</a></li>
<li>⇢ <a href='#go-programming'>Go Programming</a></li>
<li>⇢ ⇢ <a href='#12-official-go-font'>12. Official Go font</a></li>
<li>⇢ ⇢ <a href='#13-go-functions-can-have-methods'>13. Go functions can have methods</a></li>
<li>⇢ <a href='#macos'>macOS</a></li>
<li>⇢ ⇢ <a href='#14--and-ss-are-treated-the-same'>14. ß and ss are treated the same</a></li>
<li>⇢ ⇢ <a href='#15-colon-as-file-path-separator'>15. Colon as file path separator</a></li>
<li>⇢ <a href='#16-polyglots---programs-written-in-multiple-languages'>16. Polyglots - programs written in multiple languages</a></li>
<li>⇢ <a href='#17-languages-where-indices-start-at-1'>17. Languages where indices start at 1</a></li>
<li>⇢ <a href='#18-perl-poetry'>18. Perl Poetry</a></li>
<li>⇢ <a href='#19-css3-is-turing-complete'>19. CSS3 is Turing complete</a></li>
<li>⇢ <a href='#20-the-biggest-shell-programs-'>20. The biggest shell programs </a></li>
</ul><br />
<h2 style='display: inline' id='11-the-sqlite-codebase-is-a-gem'>11. The SQLite codebase is a gem</h2><br />
<br />
<span>Check this out:</span><br />
<br />
<a href='./random-weird-things-ii/sqlite-gem.png'><img alt='SQLite Gem' title='SQLite Gem' src='./random-weird-things-ii/sqlite-gem.png' /></a><br />
<br />
<span>Source:</span><br />
<br />
<a class='textlink' href='https://wetdry.world/@memes/112717700557038278'>https://wetdry.world/@memes/112717700557038278</a><br />
<br />
<h2 style='display: inline' id='go-programming'>Go Programming</h2><br />
<br />
<h3 style='display: inline' id='12-official-go-font'>12. Official Go font</h3><br />
<br />
<span>The Go programming language has its own official font, called "Go Font." There&#39;s a monospace version for code and a proportional one for regular text.</span><br />
<br />
<span>Check out some Go code displayed using the Go font:</span><br />
<br />
<a href='./random-weird-things-ii/go-font-code.png'><img alt='Go font code' title='Go font code' src='./random-weird-things-ii/go-font-code.png' /></a><br />
<br />
<a class='textlink' href='https://go.dev/blog/go-fonts'>https://go.dev/blog/go-fonts</a><br />
<br />
<span>I found it interesting and/or weird, as Go is a programming language. Why should it bother having its own font? I have never seen another open-source project like Go do this. But I also like it. Maybe I will use it in the future for this blog :-) </span><br />
<br />
<h3 style='display: inline' id='13-go-functions-can-have-methods'>13. Go functions can have methods</h3><br />
<br />
<span>Methods on struct types? Well known. Methods on types like <span class='inlinecode'>int</span> and <span class='inlinecode'>string</span>? Also known, but a bit less common. Methods on function types? That sounds a bit funky, but it&#39;s possible, too! For demonstration, have a look at this snippet:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">package</font></u></b> main

<b><u><font color="#000000">import</font></u></b> <font color="#808080">"log"</font>

<b><u><font color="#000000">type</font></u></b> fun <b><u><font color="#000000">func</font></u></b>() <b><font color="#000000">string</font></b>

<b><u><font color="#000000">func</font></u></b> (f fun) Bar() <b><font color="#000000">string</font></b> {
        <b><u><font color="#000000">return</font></u></b> <font color="#808080">"Bar"</font>
}

<b><u><font color="#000000">func</font></u></b> main() {
        <b><u><font color="#000000">var</font></u></b> f fun = <b><u><font color="#000000">func</font></u></b>() <b><font color="#000000">string</font></b> {
                <b><u><font color="#000000">return</font></u></b> <font color="#808080">"Foo"</font>
        }
        log.Println(<font color="#808080">"Example 1: "</font>, f())
        log.Println(<font color="#808080">"Example 2: "</font>, f.Bar())
        log.Println(<font color="#808080">"Example 3: "</font>, fun(f.Bar).Bar())
        log.Println(<font color="#808080">"Example 4: "</font>, fun(fun(f.Bar).Bar).Bar())
}
</pre>
<br />
<span>It runs just fine:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>❯ go run main.go
<font color="#000000">2025</font>/<font color="#000000">02</font>/<font color="#000000">07</font> <font color="#000000">22</font>:<font color="#000000">56</font>:<font color="#000000">14</font> Example <font color="#000000">1</font>:  Foo
<font color="#000000">2025</font>/<font color="#000000">02</font>/<font color="#000000">07</font> <font color="#000000">22</font>:<font color="#000000">56</font>:<font color="#000000">14</font> Example <font color="#000000">2</font>:  Bar
<font color="#000000">2025</font>/<font color="#000000">02</font>/<font color="#000000">07</font> <font color="#000000">22</font>:<font color="#000000">56</font>:<font color="#000000">14</font> Example <font color="#000000">3</font>:  Bar
<font color="#000000">2025</font>/<font color="#000000">02</font>/<font color="#000000">07</font> <font color="#000000">22</font>:<font color="#000000">56</font>:<font color="#000000">14</font> Example <font color="#000000">4</font>:  Bar
</pre>
<br />
<h2 style='display: inline' id='macos'>macOS</h2><br />
<br />
<span>For personal computing, I don&#39;t use Apple, but I have to use it for work. </span><br />
<br />
<h3 style='display: inline' id='14--and-ss-are-treated-the-same'>14. ß and ss are treated the same</h3><br />
<br />
<span>Know German? In German, the letter "sharp s" is written as ß. ß is treated the same as ss on macOS.</span><br />
<br />
<span>On a case-insensitive file system like the macOS default, not only are uppercase and lowercase letters treated the same, but special characters like the German "ß" are also considered equivalent to their ASCII counterparts (in this case, "ss").</span><br />
<br />
<span>So, even though "Maß" and "Mass" are not strictly equivalent, the macOS file system still treats them as the same filename due to its handling of Unicode characters. This can sometimes lead to unexpected behaviour. Check this out:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>❯ touch Maß
❯ ls -l
-rw-r--r--@ <font color="#000000">1</font> paul  wheel  <font color="#000000">0</font> Feb  <font color="#000000">7</font> <font color="#000000">23</font>:<font color="#000000">02</font> Maß
❯ touch Mass
❯ ls -l
-rw-r--r--@ <font color="#000000">1</font> paul  wheel  <font color="#000000">0</font> Feb  <font color="#000000">7</font> <font color="#000000">23</font>:<font color="#000000">02</font> Maß
❯ rm Mass
❯ ls -l

❯ touch Mass
❯ ls -ltr
-rw-r--r--@ <font color="#000000">1</font> paul  wheel  <font color="#000000">0</font> Feb  <font color="#000000">7</font> <font color="#000000">23</font>:<font color="#000000">02</font> Mass
❯ rm Maß
❯ ls -l

</pre>
<br />
<h3 style='display: inline' id='15-colon-as-file-path-separator'>15. Colon as file path separator</h3><br />
<br />
<span>Classic Mac OS used the colon as the file path separator on its HFS file system. A typical HFS file pathname on a hard disc might have looked like this:</span><br />
<br />
<pre>
Macintosh HD:Documents:Techwriter:Myfile
</pre>
<br />
<span>I can&#39;t reproduce this on my (work) Mac, though, as it now uses the APFS file system. In essence, HFS is an older file system, while APFS is a contemporary file system optimized for Apple&#39;s modern devices.</span><br />
<br />
<a class='textlink' href='https://social.jvns.ca/@b0rk/113041293527832730'>https://social.jvns.ca/@b0rk/113041293527832730</a><br />
<br />
<h2 style='display: inline' id='16-polyglots---programs-written-in-multiple-languages'>16. Polyglots - programs written in multiple languages</h2><br />
<br />
<span>A coding polyglot is a program that runs in multiple programming languages without any changes. People usually write them as a fun challenge — you exploit syntax overlaps between languages to make the same file valid (and meaningful) in each one.</span><br />
<br />
<span>Check out my very own polyglot:</span><br />
<br />
<a class='textlink' href='./2014-03-24-the-fibonacci.pl.c-polyglot.html'>The <span class='inlinecode'>fibonacci.pl.c</span> Polyglot</a><br />
<br />
<h2 style='display: inline' id='17-languages-where-indices-start-at-1'>17. Languages where indices start at 1</h2><br />
<br />
<span>Array indices start at 1 instead of 0 in some programming languages, known as one-based indexing. This can be controversial because zero-based indexing is more common in popular languages like C, C++, Java, and Python. One-based indexing can lead to off-by-one errors when developers switch between languages with different indexing schemes.</span><br />
<br />
<span>Languages with One-Based Indexing:</span><br />
<br />
<ul>
<li>Fortran</li>
<li>MATLAB</li>
<li>Lua</li>
<li>R (for vectors and lists)</li>
<li>Smalltalk</li>
<li>Julia (by default, although zero-based indexing is also possible)</li>
</ul><br />
<span><span class='inlinecode'>foo.lua</span> example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>arr = {<font color="#000000">10</font>, <font color="#000000">20</font>, <font color="#000000">30</font>, <font color="#000000">40</font>, <font color="#000000">50</font>}
print(arr[<font color="#000000">1</font>]) <i><font color="silver">-- Accessing the first element</font></i>
</pre>
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>❯ lua foo.lua
<font color="#000000">10</font>
</pre>
<br />
<span>One-based indexing is more natural for human-readable, mathematical, and theoretical contexts, where counting traditionally starts from one.</span><br />
<br />
<h2 style='display: inline' id='18-perl-poetry'>18. Perl Poetry</h2><br />
<br />
<span>Perl Poetry is a playful and creative practice within the programming community where Perl code is written as a poem. These poems are crafted to be syntactically valid Perl code and make sense as poetic text, often with whimsical or humorous intent. This showcases Perl&#39;s flexibility and expressiveness, as well as the creativity of its programmers.</span><br />
<br />
<span>Here is some poetry of my own; the Perl interpreter does not yield any syntax error parsing it. But the poem also doesn&#39;t do anything useful when executed:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver"># (C) 2006 by Paul C. Buetow</font></i>

Christmas:{time;<i><font color="silver">#!!!</font></i>

Children: <b><u><font color="#000000">do</font></u></b> <b><u><font color="#000000">tell</font></u></b> $wishes;

Santa: <b><u><font color="#000000">for</font></u></b> $each (@children) { 
BEGIN { <b><u><font color="#000000">read</font></u></b> $each, $their, wishes <b><u><font color="#000000">and</font></u></b> study them; <b><u><font color="#000000">use</font></u></b> Memoize<i><font color="silver">#ing</font></i>

} <b><u><font color="#000000">use</font></u></b> constant gift, <font color="#808080">'wrapping'</font>; 
<b><u><font color="#000000">package</font></u></b> Gifts; <b><u><font color="#000000">pack</font></u></b> $each, gift <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">bless</font></u></b> $each <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">goto</font></u></b> deliver
or <b><u><font color="#000000">do</font></u></b> <b><u><font color="#000000">import</font></u></b> <b><u><font color="#000000">if</font></u></b> not <b><u><font color="#000000">local</font></u></b> $available,!!! HO, HO, HO;

<b><u><font color="#000000">redo</font></u></b> Santa, <b><u><font color="#000000">pipe</font></u></b> $gifts, to_childs;
<b><u><font color="#000000">redo</font></u></b> Santa <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">do</font></u></b> <b><u><font color="#000000">return</font></u></b> <b><u><font color="#000000">if</font></u></b> <b><u><font color="#000000">last</font></u></b> one, is, delivered; 

deliver: gift <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">require</font></u></b> diagnostics <b><u><font color="#000000">if</font></u></b> <b><u><font color="#000000">our</font></u></b> $gifts ,not break;
<b><u><font color="#000000">do</font></u></b>{ <b><u><font color="#000000">use</font></u></b> NEXT; time; <b><u><font color="#000000">tied</font></u></b> $gifts} <b><u><font color="#000000">if</font></u></b> broken <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">dump</font></u></b> the, broken, ones;
The_children: <b><u><font color="#000000">sleep</font></u></b> <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">wait</font></u></b> <b><u><font color="#000000">for</font></u></b> (<b><u><font color="#000000">each</font></u></b> %gift) <b><u><font color="#000000">and</font></u></b> try { to =&gt; <b><u><font color="#000000">untie</font></u></b> $gifts };

<b><u><font color="#000000">redo</font></u></b> Santa, <b><u><font color="#000000">pipe</font></u></b> $gifts, to_childs;
<b><u><font color="#000000">redo</font></u></b> Santa <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">do</font></u></b> <b><u><font color="#000000">return</font></u></b> <b><u><font color="#000000">if</font></u></b> <b><u><font color="#000000">last</font></u></b> one, is, delivered; 

The_christmas_tree: formline <b><u><font color="#000000">s</font></u></b><font color="#808080">/ /childrens/</font>, $gifts;
<b><u><font color="#000000">alarm</font></u></b> <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">warn</font></u></b> <b><u><font color="#000000">if</font></u></b> not <b><u><font color="#000000">exists</font></u></b> $Christmas{ tree}, @t, $ENV{HOME};  
<b><u><font color="#000000">write</font></u></b> &lt;&lt;EMail
 to the parents to buy a new christmas tree!!!!<font color="#000000">111</font>
 <b><u><font color="#000000">and</font></u></b> send the
EMail
;<b><u><font color="#000000">wait</font></u></b> <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">redo</font></u></b> deliver until <b><u><font color="#000000">defined</font></u></b> <b><u><font color="#000000">local</font></u></b> $tree;

<b><u><font color="#000000">redo</font></u></b> Santa, <b><u><font color="#000000">pipe</font></u></b> $gifts, to_childs;
<b><u><font color="#000000">redo</font></u></b> Santa <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">do</font></u></b> <b><u><font color="#000000">return</font></u></b> <b><u><font color="#000000">if</font></u></b> <b><u><font color="#000000">last</font></u></b> one, is, delivered ;}

END {} <b><u><font color="#000000">our</font></u></b> $mission <b><u><font color="#000000">and</font></u></b> <b><u><font color="#000000">do</font></u></b> <b><u><font color="#000000">sleep</font></u></b> until <b><u><font color="#000000">next</font></u></b> Christmas ;}

__END__

This is perl, v5.<font color="#000000">8.8</font> built <b><u><font color="#000000">for</font></u></b> i386-freebsd-64int
</pre>
<br />
<a class='textlink' href='./2008-06-26-perl-poetry.html'>More Perl Poetry of mine</a><br />
<br />
<h2 style='display: inline' id='19-css3-is-turing-complete'>19. CSS3 is Turing complete</h2><br />
<br />
<span>Turns out CSS3 is Turing complete — you can simulate a Turing machine using nothing but CSS animations and styles, no JavaScript needed. Keyframe animations can encode state transitions and perform calculations, which is wild considering CSS is supposed to just make things look pretty.</span><br />
<br />
<a class='textlink' href='https://stackoverflow.com/questions/2497146/is-css-turing-complete'>Is CSS Turing complete?</a><br />
<br />
<span>Check out this 100% CSS implementation of Conway&#39;s Game of Life:</span><br />
<br />
<a href='./random-weird-things-ii/css-conway.png'><img src='./random-weird-things-ii/css-conway.png' /></a><br />
<br />
<a class='textlink' href='https://github.com/propjockey/css-conways-game-of-life'>CSS Conway&#39;s Game of Life</a><br />
<br />
<span>Conway&#39;s Game of Life is Turing complete because it can simulate a universal Turing machine, meaning it can perform any computation a computer can, given the right initial conditions and sufficient time and space. If a language can implement Conway&#39;s Game of Life, that demonstrates its ability to handle complex state transitions and computations: it has the necessary constructs (like iteration, conditionals, and data manipulation) to simulate any algorithm, thus confirming its Turing completeness.</span><br />
<br />
<h2 style='display: inline' id='20-the-biggest-shell-programs-'>20. The biggest shell programs </h2><br />
<br />
<span>One would think that shell scripts are only suitable for small tasks. Well, one would be wrong, as there are huge shell programs out there (up to 87k LOC) that aren&#39;t auto-generated but hand-written!</span><br />
<br />
<a class='textlink' href='https://github.com/oils-for-unix/oils/wiki/The-Biggest-Shell-Programs-in-the-World'>The Biggest Shell Programs in the World</a><br />
<br />
<span>My Gemtexter (bash) is only 1329 LOC as of now. So it&#39;s tiny.</span><br />
<br />
<a class='textlink' href='./2021-06-05-gemtexter-one-bash-script-to-rule-it-all.html'>Gemtexter - One Bash script to rule it all</a><br />
<br />
<span>I hope you had some fun. E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</title>
        <link href="https://foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html" />
        <id>https://foo.zone/gemfeed/2025-02-01-f3s-kubernetes-with-freebsd-part-3.html</id>
        <updated>2025-01-30T09:22:06+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the third blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution we will use on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-3-protecting-from-power-cuts'>f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</h1><br />
<br />
<span class='quote'>Published at 2025-01-30T09:22:06+02:00</span><br />
<br />
<span>This is the third blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution we will use on FreeBSD-based physical machines.</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-3-protecting-from-power-cuts'>f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#changes-since-last-time'>Changes since last time</a></li>
<li>⇢ ⇢ <a href='#freebsd-upgrade-from-141-to-142'>FreeBSD upgrade from 14.1 to 14.2</a></li>
<li>⇢ ⇢ <a href='#a-new-home-behind-the-tv'>A new home (behind the TV)</a></li>
<li>⇢ <a href='#the-ups-hardware'>The UPS hardware</a></li>
<li>⇢ <a href='#configuring-freebsd-to-work-with-the-ups'>Configuring FreeBSD to Work with the UPS</a></li>
<li>⇢ ⇢ <a href='#usb-device-detection'>USB Device Detection</a></li>
<li>⇢ ⇢ <a href='#apcupsd-installation'><span class='inlinecode'>apcupsd</span> Installation</a></li>
<li>⇢ ⇢ <a href='#ups-connectivity-test'>UPS Connectivity Test</a></li>
<li>⇢ <a href='#apc-info-on-partner-nodes'>APC Info on Partner Nodes:</a></li>
<li>⇢ ⇢ <a href='#installation-on-partners'>Installation on partners</a></li>
<li>⇢ <a href='#power-outage-simulation'>Power outage simulation</a></li>
<li>⇢ ⇢ <a href='#pulling-the-plug'>Pulling the plug</a></li>
<li>⇢ ⇢ <a href='#restoring-power'>Restoring power</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='introduction'>Introduction</h2><br />
<br />
<span>In this blog post, we are setting up the UPS for the cluster. A UPS, or Uninterruptible Power Supply, safeguards my cluster from unexpected power outages and surges. It acts as a backup battery that kicks in when the electricity cuts out—especially useful in my area, where power cuts are frequent—allowing for a graceful system shutdown and preventing data loss and corruption. This is especially important since I will also store some of my data on the f3s nodes.</span><br />
<br />
<h2 style='display: inline' id='changes-since-last-time'>Changes since last time</h2><br />
<br />
<h3 style='display: inline' id='freebsd-upgrade-from-141-to-142'>FreeBSD upgrade from 14.1 to 14.2</h3><br />
<br />
<span>There has been a new release since the last blog post in this series. The upgrade from 14.1 was as easy as:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0: ~ % doas freebsd-update fetch
paul@f0: ~ % doas freebsd-update install
paul@f0: ~ % doas freebsd-update -r <font color="#000000">14.2</font>-RELEASE upgrade
paul@f0: ~ % doas freebsd-update install
paul@f0: ~ % doas shutdown -r now
</pre>
<br />
<span>And after rebooting, I ran:</span><br />
<br />
<pre>paul@f0: ~ % doas freebsd-update install
paul@f0: ~ % doas pkg update
paul@f0: ~ % doas pkg upgrade
paul@f0: ~ % doas shutdown -r now
</pre>
<br />
<span>And after another reboot, I was on 14.2:</span><br />
<br />
<pre>paul@f0:~ % uname -a
FreeBSD f0.lan.buetow.org <font color="#000000">14.2</font>-RELEASE FreeBSD <font color="#000000">14.2</font>-RELEASE 
 releng/<font color="#000000">14.2</font>-n<font color="#000000">269506</font>-c8918d6c7412 GENERIC amd64
</pre>
<br />
<span>And, of course, I ran this on all 3 nodes!</span><br />
<br />
<h3 style='display: inline' id='a-new-home-behind-the-tv'>A new home (behind the TV)</h3><br />
<br />
<span>I&#39;ve put all the infrastructure behind my TV, as plenty of space is available. The TV hides most of the setup, which drastically improved the SAF (spouse acceptance factor).</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-3/f3s-changes.jpg'><img alt='New hardware placement arrangement' title='New hardware placement arrangement' src='./f3s-kubernetes-with-freebsd-part-3/f3s-changes.jpg' /></a><br />
<br />
<span>I got rid of the mini-switch I mentioned in the previous blog post. I have the TP-Link EAP615-Wall mounted on the wall nearby, which is my OpenWrt-powered Wi-Fi hotspot. It also has 3 Ethernet ports, to which I connected the Beelink nodes. That&#39;s the device you see at the very top.</span><br />
<br />
<span>The Ethernet cables go downward through the cable boxes to the Beelink nodes. In addition to the Beelink f3s nodes, I connected the TP-Link to the UPS as well. This isn&#39;t discussed further in this blog post, but a positive side effect is that my Wi-Fi keeps working for some time during a power loss, and the Beelink nodes can still communicate with each other during a power cut.</span><br />
<br />
<span>On the very left (the black box) is the UPS, with four power outlets. Three go to the Beelink nodes, and one goes to the TP-Link. A USB output is also connected to the first Beelink node, <span class='inlinecode'>f0</span>. </span><br />
<br />
<span>On the very right (halfway hidden behind the TV) are the 3 Beelink nodes stacked on top of each other. The only downside (or upside?) is that my 14-month-old daughter is now chaos-testing the Beelink nodes, as the red power buttons (now within her reach) are very tempting to press whenever she passes by. :-) Luckily, that will only cause graceful system shutdowns!</span><br />
<br />
<h2 style='display: inline' id='the-ups-hardware'>The UPS hardware</h2><br />
<br />
<span>I wanted a UPS that I could connect to from FreeBSD, that would provide enough backup power to run the cluster for at least a couple of minutes (it turned out to be around an hour, although future hardware upgrades, such as additional drives and a backup enclosure, will likely shorten that), and that could automatically initiate the shutdown of all the f3s nodes.</span><br />
<br />
<span>I decided on the APC Back-UPS BX750MI model because:</span><br />
<br />
<ul>
<li>Zero noise when there is no power cut (only some light noise while running on battery).</li>
<li>Cost: It is relatively affordable (not costing thousands).</li>
<li>USB connectivity: Can be connected via USB to one of the FreeBSD hosts to read the UPS status.</li>
<li>A power output of 750VA (or 410 watts), suitable for an hour of runtime for my f3s nodes (plus the Wi-Fi router).</li>
<li>Multiple power outlets: Can connect all 3 f3s nodes directly.</li>
<li>User-replaceable batteries: I can replace the batteries myself after two years or more (depending on usage).</li>
<li>Its compact design. Overall, I like how it looks.</li>
</ul><br />
<a href='./f3s-kubernetes-with-freebsd-part-3/apc-back-ups.jpg'><img alt='The APC Back-UPS BX750MI in operation.' title='The APC Back-UPS BX750MI in operation.' src='./f3s-kubernetes-with-freebsd-part-3/apc-back-ups.jpg' /></a><br />
<br />
<h2 style='display: inline' id='configuring-freebsd-to-work-with-the-ups'>Configuring FreeBSD to Work with the UPS</h2><br />
<br />
<h3 style='display: inline' id='usb-device-detection'>USB Device Detection</h3><br />
<br />
<span>Once plugged in via USB on FreeBSD, I could see the following in the kernel messages:</span><br />
<br />
<pre>paul@f0: ~ % doas dmesg | grep UPS
ugen0.<font color="#000000">2</font>: &lt;American Power Conversion Back-UPS BX750MI&gt; at usbus0
</pre>
<br />
<h3 style='display: inline' id='apcupsd-installation'><span class='inlinecode'>apcupsd</span> Installation</h3><br />
<br />
<span>To make use of the USB connection, the <span class='inlinecode'>apcupsd</span> package had to be installed:</span><br />
<br />
<pre>paul@f0: ~ % doas pkg install apcupsd
</pre>
<br />
<span>I have made the following modifications to the configuration file so that the UPS can be used via the USB interface:</span><br />
<br />
<pre>paul@f0:/usr/local/etc/apcupsd % diff -u apcupsd.conf.sample  apcupsd.conf
--- apcupsd.conf.sample <font color="#000000">2024</font>-<font color="#000000">11</font>-<font color="#000000">01</font> <font color="#000000">16</font>:<font color="#000000">40</font>:<font color="#000000">42.000000000</font> +<font color="#000000">0200</font>
+++ apcupsd.conf        <font color="#000000">2024</font>-<font color="#000000">12</font>-<font color="#000000">03</font> <font color="#000000">10</font>:<font color="#000000">58</font>:<font color="#000000">24.009501000</font> +<font color="#000000">0200</font>
@@ -<font color="#000000">31</font>,<font color="#000000">7</font> +<font color="#000000">31</font>,<font color="#000000">7</font> @@
 <i><font color="silver">#     940-1524C, 940-0024G, 940-0095A, 940-0095B,</font></i>
 <i><font color="silver">#     940-0095C, 940-0625A, M-04-02-2000</font></i>
 <i><font color="silver">#</font></i>
-UPSCABLE smart
+UPSCABLE usb

 <i><font color="silver"># To get apcupsd to work, in addition to defining the cable</font></i>
 <i><font color="silver"># above, you must also define a UPSTYPE, which corresponds to</font></i>
@@ -<font color="#000000">88</font>,<font color="#000000">8</font> +<font color="#000000">88</font>,<font color="#000000">10</font> @@
 <i><font color="silver">#                            that apcupsd binds to that particular unit</font></i>
 <i><font color="silver">#                            (helpful if you have more than one USB UPS).</font></i>
 <i><font color="silver">#</font></i>
-UPSTYPE apcsmart
-DEVICE /dev/usv
+UPSTYPE usb
+DEVICE

 <i><font color="silver"># POLLTIME &lt;int&gt;</font></i>
 <i><font color="silver">#   Interval (in seconds) at which apcupsd polls the UPS for status. This</font></i>
</pre>
<br />
<span>I left the remaining settings at their defaults; of those, the following two are of main interest:</span><br />
<br />
<pre>
# If during a power failure, the remaining battery percentage
# (as reported by the UPS) is below or equal to BATTERYLEVEL,
# apcupsd will initiate a system shutdown.
BATTERYLEVEL 5

# If during a power failure, the remaining runtime in minutes
# (as calculated internally by the UPS) is below or equal to MINUTES,
# apcupsd, will initiate a system shutdown.
MINUTES 3
</pre>
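<br />
<span>Note that the two thresholds are independent triggers: whichever is reached first initiates the shutdown. My simplified reading of that behaviour as a shell sketch (<span class='inlinecode'>should_shutdown</span> is a hypothetical helper for illustration, not part of <span class='inlinecode'>apcupsd</span>):</span><br />
<br />
<pre>#!/bin/sh
# Simplified model of apcupsd's shutdown decision during a power
# failure: shut down as soon as EITHER threshold is reached.
# $1=BCHARGE $2=TIMELEFT $3=BATTERYLEVEL $4=MINUTES
should_shutdown() {
        # truncate the decimals apcaccess reports (e.g. 65.3 becomes 65)
        c=${1%.*}; t=${2%.*}
        [ "$c" -le "$3" ] || [ "$t" -le "$4" ]
}

# With the defaults above (BATTERYLEVEL 5, MINUTES 3) and a full battery:
if should_shutdown 100 65.3 5 3; then
        echo "initiating shutdown"
else
        echo "still fine"
fi
</pre>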
<br />
<span>I then enabled and started the daemon:</span><br />
<br />
<pre>paul@f0:/usr/local/etc/apcupsd % doas sysrc apcupsd_enable=YES
apcupsd_enable:  -&gt; YES
paul@f0:/usr/local/etc/apcupsd % doas service apcupsd start
Starting apcupsd.
</pre>
<br />
<h3 style='display: inline' id='ups-connectivity-test'>UPS Connectivity Test</h3><br />
<br />
<span>And voilà, I could now access the UPS information via the <span class='inlinecode'>apcaccess</span> command; how convenient :-) (I also read through the manual page, which gives a good overview of what else can be done with it.)</span><br />
<br />
<pre>paul@f0:~ % apcaccess
APC      : <font color="#000000">001</font>,<font color="#000000">035</font>,<font color="#000000">0857</font>
DATE     : <font color="#000000">2025</font>-<font color="#000000">01</font>-<font color="#000000">26</font> <font color="#000000">14</font>:<font color="#000000">43</font>:<font color="#000000">27</font> +<font color="#000000">0200</font>
HOSTNAME : f0.lan.buetow.org
VERSION  : <font color="#000000">3.14</font>.<font color="#000000">14</font> (<font color="#000000">31</font> May <font color="#000000">2016</font>) freebsd
UPSNAME  : f0.lan.buetow.org
CABLE    : USB Cable
DRIVER   : USB UPS Driver
UPSMODE  : Stand Alone
STARTTIME: <font color="#000000">2025</font>-<font color="#000000">01</font>-<font color="#000000">26</font> <font color="#000000">14</font>:<font color="#000000">43</font>:<font color="#000000">25</font> +<font color="#000000">0200</font>
MODEL    : Back-UPS BX750MI
STATUS   : ONLINE
LINEV    : <font color="#000000">230.0</font> Volts
LOADPCT  : <font color="#000000">4.0</font> Percent
BCHARGE  : <font color="#000000">100.0</font> Percent
TIMELEFT : <font color="#000000">65.3</font> Minutes
MBATTCHG : <font color="#000000">5</font> Percent
MINTIMEL : <font color="#000000">3</font> Minutes
MAXTIME  : <font color="#000000">0</font> Seconds
SENSE    : Medium
LOTRANS  : <font color="#000000">145.0</font> Volts
HITRANS  : <font color="#000000">295.0</font> Volts
ALARMDEL : No alarm
BATTV    : <font color="#000000">13.6</font> Volts
LASTXFER : Automatic or explicit self <b><u><font color="#000000">test</font></u></b>
NUMXFERS : <font color="#000000">0</font>
TONBATT  : <font color="#000000">0</font> Seconds
CUMONBATT: <font color="#000000">0</font> Seconds
XOFFBATT : N/A
SELFTEST : NG
STATFLAG : <font color="#000000">0x05000008</font>
SERIALNO : 9B2414A03599
BATTDATE : <font color="#000000">2001</font>-<font color="#000000">01</font>-<font color="#000000">01</font>
NOMINV   : <font color="#000000">230</font> Volts
NOMBATTV : <font color="#000000">12.0</font> Volts
NOMPOWER : <font color="#000000">410</font> Watts
END APC  : <font color="#000000">2025</font>-<font color="#000000">01</font>-<font color="#000000">26</font> <font color="#000000">14</font>:<font color="#000000">44</font>:<font color="#000000">06</font> +<font color="#000000">0200</font>
</pre>
<br />
<h2 style='display: inline' id='apc-info-on-partner-nodes'>APC Info on Partner Nodes</h2><br />
<br />
<span>So far, so good. Host <span class='inlinecode'>f0</span> would shut down itself when short on power. But what about the <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span> nodes? They aren&#39;t connected directly to the UPS and, therefore, wouldn&#39;t know that their power is about to be cut off. For this, <span class='inlinecode'>apcupsd</span> running on the <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span> nodes can be configured to retrieve UPS information via the network from the <span class='inlinecode'>apcupsd</span> server running on the <span class='inlinecode'>f0</span> node, which is connected directly to the APC via USB.</span><br />
<br />
<span>Of course, this won&#39;t work when <span class='inlinecode'>f0</span> is down. In that case, no operational node would be connected to the UPS via USB, so the current power status would be unknown. However, I consider this a rare circumstance. Furthermore, in the case of an <span class='inlinecode'>f0</span> system crash, sudden power outages on the two other nodes would occur at different times, making real data loss (the main concern here) less likely.</span><br />
<br />
<span>And if <span class='inlinecode'>f0</span> is down and <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span> receive new data and crash midway, it&#39;s likely that a client (e.g., an Android app or another laptop) still has the data stored on it, making data recoverable and data loss overall nearly impossible. I&#39;d receive an alert if any of the nodes go down (more on monitoring later in this blog series).</span><br />
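<br />
<span>For this to work, <span class='inlinecode'>f0</span> has to expose <span class='inlinecode'>apcupsd</span>&#39;s Network Information Server. I didn&#39;t have to change anything for that, as the sample configuration already enables it. For reference, these are the relevant defaults (note that <span class='inlinecode'>NISIP 0.0.0.0</span> listens on all interfaces, so port 3551 should only be reachable from the LAN):</span><br />
<br />
<pre>NETSERVER on
NISIP 0.0.0.0
NISPORT 3551
</pre>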
<br />
<h3 style='display: inline' id='installation-on-partners'>Installation on partners</h3><br />
<br />
<span>To do this, I installed <span class='inlinecode'>apcupsd</span> via <span class='inlinecode'>doas pkg install apcupsd</span> on <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>, after which I could query the daemon on <span class='inlinecode'>f0</span> like this:</span><br />
<br />
<pre>paul@f1:~ % apcaccess -h f0.lan.buetow.org | grep Percent
LOADPCT  : <font color="#000000">12.0</font> Percent
BCHARGE  : <font color="#000000">94.0</font> Percent
MBATTCHG : <font color="#000000">5</font> Percent
</pre>
<br />
<span>But I want the daemon to be configured and enabled in such a way that it connects to the master UPS node (the one with the UPS connected via USB) so that it can also initiate a system shutdown when the UPS battery reaches low levels. For that, <span class='inlinecode'>apcupsd</span> itself needs to be aware of the UPS status.</span><br />
<br />
<span>On <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>, I changed the configuration to use <span class='inlinecode'>f0</span> (where <span class='inlinecode'>apcupsd</span> is listening) as a remote device. I also raised <span class='inlinecode'>MINUTES</span> from 3 to 6 and <span class='inlinecode'>BATTERYLEVEL</span> from 5 to 10, so that <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span> shut down earlier than <span class='inlinecode'>f0</span> and can still reach <span class='inlinecode'>f0</span> for UPS information before <span class='inlinecode'>f0</span> decides to shut down itself:</span><br />
<br />
<pre>paul@f2:/usr/local/etc/apcupsd % diff -u apcupsd.conf.sample apcupsd.conf
--- apcupsd.conf.sample <font color="#000000">2024</font>-<font color="#000000">11</font>-<font color="#000000">01</font> <font color="#000000">16</font>:<font color="#000000">40</font>:<font color="#000000">42.000000000</font> +<font color="#000000">0200</font>
+++ apcupsd.conf        <font color="#000000">2025</font>-<font color="#000000">01</font>-<font color="#000000">26</font> <font color="#000000">15</font>:<font color="#000000">52</font>:<font color="#000000">45.108469000</font> +<font color="#000000">0200</font>
@@ -<font color="#000000">31</font>,<font color="#000000">7</font> +<font color="#000000">31</font>,<font color="#000000">7</font> @@
 <i><font color="silver">#     940-1524C, 940-0024G, 940-0095A, 940-0095B,</font></i>
 <i><font color="silver">#     940-0095C, 940-0625A, M-04-02-2000</font></i>
 <i><font color="silver">#</font></i>
-UPSCABLE smart
+UPSCABLE ether

 <i><font color="silver"># To get apcupsd to work, in addition to defining the cable</font></i>
 <i><font color="silver"># above, you must also define a UPSTYPE, which corresponds to</font></i>
@@ -<font color="#000000">52</font>,<font color="#000000">7</font> +<font color="#000000">52</font>,<font color="#000000">6</font> @@
 <i><font color="silver">#                            Network Information Server. This is used if the</font></i>
 <i><font color="silver">#                            UPS powering your computer is connected to a</font></i>
 <i><font color="silver">#                            different computer for monitoring.</font></i>
-<i><font color="silver">#</font></i>
 <i><font color="silver"># snmp      hostname:port:vendor:community</font></i>
 <i><font color="silver">#                            SNMP network link to an SNMP-enabled UPS device.</font></i>
 <i><font color="silver">#                            Hostname is the ip address or hostname of the UPS</font></i>
@@ -<font color="#000000">88</font>,<font color="#000000">8</font> +<font color="#000000">87</font>,<font color="#000000">8</font> @@
 <i><font color="silver">#                            that apcupsd binds to that particular unit</font></i>
 <i><font color="silver">#                            (helpful if you have more than one USB UPS).</font></i>
 <i><font color="silver">#</font></i>
-UPSTYPE apcsmart
-DEVICE /dev/usv
+UPSTYPE net
+DEVICE f0.lan.buetow.org:<font color="#000000">3551</font>

 <i><font color="silver"># POLLTIME &lt;int&gt;</font></i>
 <i><font color="silver">#   Interval (in seconds) at which apcupsd polls the UPS for status. This</font></i>
@@ -<font color="#000000">147</font>,<font color="#000000">12</font> +<font color="#000000">146</font>,<font color="#000000">12</font> @@
 <i><font color="silver"># If during a power failure, the remaining battery percentage</font></i>
 <i><font color="silver"># (as reported by the UPS) is below or equal to BATTERYLEVEL,</font></i>
 <i><font color="silver"># apcupsd will initiate a system shutdown.</font></i>
-BATTERYLEVEL <font color="#000000">5</font>
+BATTERYLEVEL <font color="#000000">10</font>

 <i><font color="silver"># If during a power failure, the remaining runtime in minutes</font></i>
 <i><font color="silver"># (as calculated internally by the UPS) is below or equal to MINUTES,</font></i>
 <i><font color="silver"># apcupsd, will initiate a system shutdown.</font></i>
-MINUTES <font color="#000000">3</font>
+MINUTES <font color="#000000">6</font>

 <i><font color="silver"># If during a power failure, the UPS has run on batteries for TIMEOUT</font></i>
 <i><font color="silver"># many seconds or longer, apcupsd will initiate a system shutdown.</font></i>

</pre>
<br />
<span>I then ran the following commands on <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span> as well:</span><br />
<br />
<pre>paul@f1:/usr/local/etc/apcupsd % doas sysrc apcupsd_enable=YES
apcupsd_enable:  -&gt; YES
paul@f1:/usr/local/etc/apcupsd % doas service apcupsd start
Starting apcupsd.
</pre>
<br />
<span>And then I was able to connect to localhost via the <span class='inlinecode'>apcaccess</span> command:</span><br />
<br />
<pre>paul@f1:~ % doas apcaccess | grep Percent
LOADPCT  : <font color="#000000">5.0</font> Percent
BCHARGE  : <font color="#000000">95.0</font> Percent
MBATTCHG : <font color="#000000">5</font> Percent
</pre>
<br />
<h2 style='display: inline' id='power-outage-simulation'>Power outage simulation</h2><br />
<br />
<h3 style='display: inline' id='pulling-the-plug'>Pulling the plug</h3><br />
<br />
<span>I simulated a power outage by removing the power input from the APC. Immediately, the following message appeared on all the nodes:</span><br />
<br />
<pre>
Broadcast Message from root@f0.lan.buetow.org
        (no tty) at 15:03 EET...

Power failure. Running on UPS batteries.                                              
</pre>
<br />
<span>I ran the following command to confirm the available battery time:</span><br />
<br />
<pre>paul@f0:/usr/local/etc/apcupsd % apcaccess -p TIMELEFT
<font color="#000000">63.9</font> Minutes
</pre>
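<br />
<span>For keeping an eye on the remaining runtime during an outage, a small watch script around <span class='inlinecode'>apcaccess</span> does the job. A quick sketch (not part of my actual setup; the parsing sits in a helper function, and the sample value stands in for the live <span class='inlinecode'>apcaccess -p TIMELEFT</span> call):</span><br />
<br />
<pre>#!/bin/sh
# Warn when the UPS runtime drops below a threshold (in minutes).
# "apcaccess -p TIMELEFT" prints e.g. "63.9 Minutes"; minutes_left
# strips the unit and truncates so test(1) can compare integers.
minutes_left() {
        awk '{ printf "%d\n", $1 }'
}

threshold=10
# live usage: left=$(apcaccess -p TIMELEFT | minutes_left)
left=$(echo "63.9 Minutes" | minutes_left)
if [ "$left" -lt "$threshold" ]; then
        echo "UPS runtime low: $left minutes left"
fi
</pre>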
<br />
<span>And after around one hour (<span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span> a bit earlier, <span class='inlinecode'>f0</span> a bit later due to the different <span class='inlinecode'>BATTERYLEVEL</span> and <span class='inlinecode'>MINUTES</span> settings outlined earlier), the following broadcast was sent out:</span><br />
<br />
<pre>
Broadcast Message from root@f0.lan.buetow.org
        (no tty) at 15:08 EET...

        *** FINAL System shutdown message from root@f0.lan.buetow.org ***

System going down IMMEDIATELY

apcupsd initiated shutdown
</pre>
<br />
<span>And all the nodes shut down safely before the UPS ran out of battery!</span><br />
<br />
<h3 style='display: inline' id='restoring-power'>Restoring power</h3><br />
<br />
<span>After restoring power, I checked the logs in <span class='inlinecode'>/var/log/daemon.log</span> and found the following on all 3 nodes:</span><br />
<br />
<pre>
Jan 26 17:36:24 f2 apcupsd[2159]: Power failure.
Jan 26 17:36:30 f2 apcupsd[2159]: Running on UPS batteries.
Jan 26 17:36:30 f2 apcupsd[2159]: Battery charge below low limit.
Jan 26 17:36:30 f2 apcupsd[2159]: Initiating system shutdown!
Jan 26 17:36:30 f2 apcupsd[2159]: User logins prohibited
Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd exiting, signal 15
Jan 26 17:36:32 f2 apcupsd[2159]: apcupsd shutdown succeeded
</pre>
<br />
<span>All good :-)</span><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>I have the same UPS (just with a bit more capacity) for my main work setup, powering my 28" screen, music equipment, etc. It has already proven helpful during a couple of power outages here, so I am sure the smaller UPS for the f3s setup will be of great use, too.</span><br />
<br />
<span>Read the next post of this series:</span><br />
<br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<br />
<span>Other BSD related posts are:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts (You are currently reading this)</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Working with an SRE Interview</title>
        <link href="https://foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.html" />
        <id>https://foo.zone/gemfeed/2025-01-15-working-with-an-sre-interview.html</id>
        <updated>2025-01-15T00:16:04+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>I have been interviewed by Florian Buetow on `cracking-ai-engineering.com` about what it's like working with a Site Reliability Engineer from the point of view of a Software Engineer, Data Scientist, and AI Engineer.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='working-with-an-sre-interview'>Working with an SRE Interview</h1><br />
<br />
<span class='quote'>Published at 2025-01-15T00:16:04+02:00</span><br />
<br />
<span>I have been interviewed by Florian Buetow on <span class='inlinecode'>cracking-ai-engineering.com</span> about what it&#39;s like working with a Site Reliability Engineer from the point of view of a Software Engineer, Data Scientist, and AI Engineer.</span><br />
<br />
<a class='textlink' href='https://www.cracking-ai-engineering.com/writing/2025/01/12/working-with-an-sre-interview/'>See original interview here</a><br />
<a class='textlink' href='https://www.cracking-ai-engineering.com'>Cracking AI Engineering</a><br />
<br />
<span>Below, I am posting the interview here on my blog as well.</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#working-with-an-sre-interview'>Working with an SRE Interview</a></li>
<li>⇢ <a href='#preamble-'>Preamble </a></li>
<li>⇢ <a href='#introducing-paul'>Introducing Paul</a></li>
<li>⇢ <a href='#how-did-you-get-started'>How did you get started?</a></li>
<li>⇢ <a href='#roles-and-career-progression'>Roles and Career Progression</a></li>
<li>⇢ <a href='#anecdotes-and-best-practices'>Anecdotes and Best Practices</a></li>
<li>⇢ <a href='#working-with-different-teams'>Working with Different Teams</a></li>
<li>⇢ <a href='#using-ai-tools'>Using AI Tools</a></li>
<li>⇢ <a href='#sre-learning-resources'>SRE Learning Resources</a></li>
<li>⇢ <a href='#blogging'>Blogging</a></li>
<li>⇢ <a href='#wrap-up'>Wrap-up</a></li>
<li>⇢ <a href='#closing-comments'>Closing comments</a></li>
</ul><br />
<h2 style='display: inline' id='preamble-'>Preamble </h2><br />
<br />
<span>Florian from Cracking AI Engineering interviewed me about my work as a Principal SRE at Mimecast. We talked about what an Embedded SRE actually does, automation, observability, incident management, and how to work well with an SRE — whether you&#39;re a developer, data scientist, or manager.</span><br />
<br />
<h2 style='display: inline' id='introducing-paul'>Introducing Paul</h2><br />
<br />
<span>Hi Paul, please introduce yourself briefly to the audience. Who are you, what do you do for a living, and where do you work?</span><br />
<br />
<span class='quote'>My name is Paul Bütow, I work at Mimecast, and I’m a Principal Site Reliability Engineer there. I’ve been with Mimecast for almost ten years now. The company specializes in email security, including things like archiving, phishing detection, malware protection, and spam filtering.</span><br />
<br />
<span>You mentioned that you’re an ‘Embedded SRE.’ What does that mean exactly?</span><br />
<br />
<span class='quote'>It means that I’m directly part of the software engineering team, not in a separate Ops department. I ensure that nothing is deployed manually, and everything runs through automation. I also set up monitoring and observability. These are two distinct aspects: monitoring alerts us when something breaks, while observability helps us identify trends. I also create runbooks so we know what to do when specific incidents occur frequently.</span><br />
<br />
<span class='quote'>Infrastructure SREs on the other hand handle the foundational setup, like providing the Kubernetes cluster itself or ensuring the operating systems are installed. They don&#39;t work on the application directly but ensure the base infrastructure is there for others to use. This works well when a company has multiple teams that need shared infrastructure.</span><br />
<br />
<h2 style='display: inline' id='how-did-you-get-started'>How did you get started?</h2><br />
<br />
<span>How did your interest in Linux or FreeBSD start?</span><br />
<br />
<span class='quote'>It began during my school days. We had a PC with DOS at home, and I eventually bought Suse Linux 5.3. Shortly after, I discovered FreeBSD because I liked its handbook so much. I wanted to understand exactly how everything worked, so I also tried Linux from Scratch. That involves installing every package manually to gain a better understanding of operating systems.</span><br />
<br />
<a class='textlink' href='https://www.FreeBSD.org'>https://www.FreeBSD.org</a><br />
<a class='textlink' href='https://linuxfromscratch.org/'>https://linuxfromscratch.org/</a><br />
<br />
<span>And after school, you pursued computer science, correct?</span><br />
<br />
<span class='quote'>Exactly. I wasn’t sure at first whether I wanted to be a software developer or a system administrator. I applied for both and eventually accepted an offer as a Linux system administrator. This was before &#39;SRE&#39; became a buzzword, but much of what I did back then-automation, infrastructure as code, monitoring-is now considered part of the typical SRE role.</span><br />
<br />
<h2 style='display: inline' id='roles-and-career-progression'>Roles and Career Progression</h2><br />
<br />
<span>Tell us about how you joined Mimecast. When did you fully embrace the SRE role?</span><br />
<br />
<span class='quote'>I started as a Linux sysadmin at 1&amp;1. I managed an ad server farm with hundreds of systems and later handled load balancers. Together with an architect, we managed F5 load balancers distributing around 2,000 services, including for portals like web.de and GMX. I also led the operations team technically for a while before moving to London to join Mimecast.</span><br />
<br />
<span class='quote'>At Mimecast, the job title was explicitly &#39;Site Reliability Engineer.&#39; The biggest difference was that I was no longer in a separate Ops department but embedded directly within the storage and search backend team. I loved that because we could plan features together-from automation to measurability and observability. Mimecast also operates thousands of physical servers for email archiving, which was fascinating since I already had experience with large distributed systems at 1&amp;1. It was the right step for me because it allowed me to work close to the code while remaining hands-on with infrastructure.</span><br />
<br />
<span>What are the differences between SRE, DevOps, SysAdmin, and Architects?</span><br />
<br />
<span class='quote'>SREs are like the next step after SysAdmins. A SysAdmin might manually install servers, replace disks, or use simple scripts for automation, while SREs use infrastructure as code and focus on reliability through SLIs, SLOs, and automation. DevOps isn’t really a job-it’s more of a way of working, where developers are involved in operations tasks like setting up CI/CD pipelines or on-call shifts. Architects focus on designing systems and infrastructures, such as load balancers or distributed systems, working alongside SREs to ensure the systems meet the reliability and scalability requirements. The specific responsibilities of each role depend on the company, and there is often overlap. </span><br />
<br />
<span>What are the most important reliability lessons you’ve learned so far?</span><br />
<br />
<ul>
<li>Don’t leave SRE aspects as an afterthought. It’s much better to discuss automation, monitoring, SLIs, and SLOs early on. Traditional sysadmins often installed systems manually, but today, we do everything via infrastructure as code, using tools like Terraform or Puppet.</li>
<li>I also distinguish between monitoring and observability. Monitoring tells us, &#39;The server is down, alarm!&#39; Observability dives deeper, showing trends like increasing latency so we can act proactively.</li>
<li>SLI, SLO, and SLA are core elements. We focus on what users actually experience, for example how quickly an email is sent, and set our goals accordingly.</li>
<li>Runbooks are also crucial. When something goes wrong at night, you don’t want to start from scratch. A runbook outlines how to debug and resolve specific problems, saving time and reducing downtime.</li>
</ul><br />
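As a rough illustration of the SLI/SLO idea above (with made-up numbers, not taken from any real system), an availability SLI and the remaining error budget can be sketched like this:

```go
package main

import "fmt"

// availability computes an SLI as the ratio of good events to total events.
func availability(good, total float64) float64 {
	return good / total
}

// errorBudgetLeft returns how many more bad events the SLO target
// (e.g. 0.999) still allows, given the counts observed so far.
func errorBudgetLeft(target, good, total float64) float64 {
	allowed := (1 - target) * total // failures the SLO tolerates
	actual := total - good          // failures that already happened
	return allowed - actual
}

func main() {
	// 999 out of 1000 emails delivered in time: SLI of 0.999,
	// which just meets a 99.9% SLO with the budget (nearly) used up.
	fmt.Println(availability(999, 1000))
	fmt.Println(errorBudgetLeft(0.999, 999, 1000))
}
```

Alerting on a shrinking error budget, rather than on individual failures, is what ties the SLI back to what users actually experience.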
<h2 style='display: inline' id='anecdotes-and-best-practices'>Anecdotes and Best Practices</h2><br />
<br />
<span>Runbooks sound very practical. Can you explain how they’re used day-to-day?</span><br />
<br />
<span class='quote'>Runbooks are essentially guides for handling specific incidents. For instance, if a service won’t start, the runbook will specify where the logs are and which commands to use. Observability takes it a step further, helping us spot changes early-like rising error rates or latency-so we can address issues before they escalate.</span><br />
<br />
<span>When should you decide to put something into a runbook, and when is it unnecessary?</span><br />
<br />
<span class='quote'>If an issue happens frequently, it should be documented in a runbook so that anyone, even someone new, can follow the steps to fix it. The idea is that 90% of the common incidents should be covered. For example, if a service is down, the runbook would specify where to find logs, which commands to check, and what actions to take. On the other hand, rare or complex issues, where the resolution depends heavily on context or varies each time, don’t make sense to include in detail. For those, it’s better to focus on general troubleshooting steps. </span><br />
<br />
<span>How do you search for and find the correct runbooks?</span><br />
<br />
<span class='quote'>Runbooks should be linked directly in the alert you receive. For example, if you get an alert about a service not running, the alert will have a link to the runbook that tells you what to check, like logs or commands to run. Runbooks are best stored in an internal wiki, so if you don’t find the link in the alert, you know where to search. The important thing is that runbooks are easy to find and up to date because that’s what makes them useful during incidents. </span><br />
<br />
<span>Do you have an interesting war story you can share with us?</span><br />
<br />
<span class='quote'>Sure. At 1&amp;1, we had a proprietary ad server software that ran a SQL query during startup. The query got slower over time, eventually timing out and preventing the server from starting. Since we couldn’t access the source code, we searched the binary for the SQL and patched it. By pinpointing the issue, a developer was able to adjust the SQL. This collaboration between sysadmin and developer perspectives highlights the value of SRE work.</span><br />
<br />
<h2 style='display: inline' id='working-with-different-teams'>Working with Different Teams</h2><br />
<br />
<span>You’re embedded in a team-how does collaboration with developers work practically?</span><br />
<br />
<span class='quote'>We plan everything together from the start. If there’s a new feature, we discuss infrastructure, automated deployments, and monitoring right away. Developers are experts in the code, and I bring the infrastructure expertise. This avoids unpleasant surprises before going live.</span><br />
<br />
<span>How about working with data scientists or ML engineers? Are there differences?</span><br />
<br />
<span class='quote'>The principles are the same. ML models also need to be deployed and monitored. You deal with monitoring, resource allocation, and identifying performance drops. Whether it’s a microservice or an ML job, at the end of the day, it’s all running on servers or clusters that must remain stable.</span><br />
<br />
<span>What about working with managers or the FinOps team?</span><br />
<br />
<span class='quote'>We often discuss costs, especially in the cloud, where scaling up resources is easy. It’s crucial to know our metrics: do we have enough capacity? Do we need all instances? Or is the CPU only at 5% utilization? This data helps managers decide whether the budget is sufficient or if optimizations are needed.</span><br />
<br />
<span>Do you have practical tips for working with SREs?</span><br />
<br />
<span class='quote'>Yes, I have a few:</span><br />
<br />
<ul>
<li>Early involvement: Include SREs from the beginning in your project.</li>
<li>Runbooks &amp; documentation: Document recurring errors.</li>
<li>Try first: Try to understand the issue yourself before immediately asking the SRE.</li>
<li>Basic infra knowledge: Kubernetes and Terraform aren’t magic. Some basic understanding helps every developer.</li>
</ul><br />
<h2 style='display: inline' id='using-ai-tools'>Using AI Tools</h2><br />
<br />
<span>Let’s talk about AI. How do you use it in your daily work?</span><br />
<br />
<span class='quote'>For boilerplate code, like Terraform snippets, I often use ChatGPT. It saves time, although I always review and adjust the output. Log analysis is another exciting application. Instead of manually going through millions of lines, AI can summarize key outliers or errors.</span><br />
<br />
<span>Do you think AI could largely replace SREs or significantly change the role?</span><br />
<br />
<span class='quote'>I see AI as an additional tool. SRE requires a deep understanding of how distributed systems work internally. While AI can assist with routine tasks or quickly detect anomalies, human expertise is indispensable for complex issues.</span><br />
<br />
<h2 style='display: inline' id='sre-learning-resources'>SRE Learning Resources</h2><br />
<br />
<span>What resources would you recommend for learning about SRE?</span><br />
<br />
<span class='quote'>The Google SRE book is a classic, though a bit dry. I really like &#39;Seeking SRE,&#39; as it offers various perspectives on SRE, with many practical stories from different companies.</span><br />
<br />
<a class='textlink' href='https://sre.google/books/'>https://sre.google/books/</a><br />
<a class='textlink' href='https://www.oreilly.com/library/view/seeking-sre/9781491978856'>Seeking SRE</a><br />
<br />
<span>Do you have a podcast recommendation?</span><br />
<br />
<span class='quote'>The Google SRE Prodcast is quite interesting. It offers insights into how Google approaches SRE, along with perspectives from external guests.</span><br />
<br />
<a class='textlink' href='https://sre.google/prodcast/'>https://sre.google/prodcast/</a><br />
<br />
<h2 style='display: inline' id='blogging'>Blogging</h2><br />
<br />
<span>You also have a blog. What motivates you to write regularly?</span><br />
<br />
<span class='quote'>Writing helps me learn the most. It also serves as a personal reference. Sometimes I look up how I solved a problem a year ago. And of course, others tackling similar projects might find inspiration in my posts.</span><br />
<br />
<span>What do you blog about?</span><br />
<br />
<span class='quote'>Mostly technical topics I find exciting, like homelab projects, Kubernetes, or book summaries on IT and productivity. It’s a personal blog, so I write about what I enjoy.</span><br />
<br />
<h2 style='display: inline' id='wrap-up'>Wrap-up</h2><br />
<br />
<span>To wrap up, what are three things every team should keep in mind for stability?</span><br />
<br />
<span class='quote'>First, maintain runbooks and documentation to avoid chaos at night. Second, automate everything: manual installs in production are risky. Third, define SLIs, SLOs, and SLAs early so everyone knows what we’re monitoring and guaranteeing.</span><br />
<br />
<span>Is there a motto or mindset that particularly inspires you as an SRE?</span><br />
<br />
<span class='quote'>"Keep it simple and stupid"-KISS. Not everything has to be overly complex. And always stay curious. I’m still fascinated by how systems work under the hood.</span><br />
<br />
<span>Where can people find you online?</span><br />
<br />
<span class='quote'>You can find links to my socials on my website, paul.buetow.org.</span><br />
<span class='quote'>I regularly post articles and link to everything else I’m working on outside of work.</span><br />
<br />
<a class='textlink' href='https://paul.buetow.org'>https://paul.buetow.org</a><br />
<br />
<span>Thank you very much for your time and this insightful interview into the world of site reliability engineering.</span><br />
<br />
<span class='quote'>My pleasure, this was fun.</span><br />
<br />
<h2 style='display: inline' id='closing-comments'>Closing comments</h2><br />
<br />
<span>Thanks for reading! Hopefully there’s something useful in here for your own work. Reliable systems are a team effort, after all.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> or contact Florian via the Cracking AI Engineering :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Posts from October to December 2024</title>
        <link href="https://foo.zone/gemfeed/2025-01-01-posts-from-october-to-december-2024.html" />
        <id>https://foo.zone/gemfeed/2025-01-01-posts-from-october-to-december-2024.html</id>
        <updated>2024-12-31T18:09:58+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Happy new year!</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='posts-from-october-to-december-2024'>Posts from October to December 2024</h1><br />
<br />
<span class='quote'>Published at 2024-12-31T18:09:58+02:00</span><br />
<br />
<span>Happy new year!</span><br />
<br />
<span>These are my social media posts from the last three months. I keep them here to reflect on them and also to not lose them. Social media networks come and go and are not under my control, but my domain is here to stay. </span><br />
<br />
<span>These are from Mastodon and LinkedIn. Have a look at my about page for my social media profiles. This list is generated with Gos, my tool for sharing posts across social media platforms.</span><br />
<br />
<a class='textlink' href='../about/index.html'>My about page</a><br />
<a class='textlink' href='https://codeberg.org/snonux/gos'>https://codeberg.org/snonux/gos</a><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#posts-from-october-to-december-2024'>Posts from October to December 2024</a></li>
<li>⇢ <a href='#october-2024'>October 2024</a></li>
<li>⇢ ⇢ <a href='#first-on-call-experience-in-a-startup-doesn-t-'>First on-call experience in a startup. Doesn&#39;t ...</a></li>
<li>⇢ ⇢ <a href='#reviewing-your-own-pr-or-mr-before-asking-'>Reviewing your own PR or MR before asking ...</a></li>
<li>⇢ ⇢ <a href='#fun-with-defer-in-golang-i-did-t-know-that-'>Fun with defer in <span class='inlinecode'>#golang</span>, I didn&#39;t know that ...</a></li>
<li>⇢ ⇢ <a href='#i-have-been-in-incidents-understandably-'>I have been in incidents. Understandably, ...</a></li>
<li>⇢ ⇢ <a href='#little-tips-using-strings-in-golang-and-i-'>Little tips using strings in <span class='inlinecode'>#golang</span> and I ...</a></li>
<li>⇢ ⇢ <a href='#reading-this-post-about-rust-especially-the-'>Reading this post about <span class='inlinecode'>#rust</span> (especially the ...</a></li>
<li>⇢ ⇢ <a href='#the-opposite-of-chaosmonkey--'>The opposite of <span class='inlinecode'>#ChaosMonkey</span> ... ...</a></li>
<li>⇢ <a href='#november-2024'>November 2024</a></li>
<li>⇢ ⇢ <a href='#i-just-became-a-silver-patreon-for-osnews-what-'>I just became a Silver Patreon for OSnews. What ...</a></li>
<li>⇢ ⇢ <a href='#until-now-i-wasn-t-aware-that-go-is-under-a-'>Until now, I wasn&#39;t aware, that Go is under a ...</a></li>
<li>⇢ ⇢ <a href='#these-are-some-book-notes-from-staff-engineer-'>These are some book notes from "Staff Engineer" ...</a></li>
<li>⇢ ⇢ <a href='#looking-at-kubernetes-it-s-pretty-much-'>Looking at <span class='inlinecode'>#Kubernetes</span>, it&#39;s pretty much ...</a></li>
<li>⇢ ⇢ <a href='#there-has-been-an-outage-at-the-upstream-'>There has been an outage at the upstream ...</a></li>
<li>⇢ ⇢ <a href='#one-of-the-more-confusing-parts-in-go-nil-'>One of the more confusing parts in Go, nil ...</a></li>
<li>⇢ ⇢ <a href='#agreeably-writing-down-with-diagrams-helps-you-'>Agreeably, writing down with Diagrams helps you ...</a></li>
<li>⇢ ⇢ <a href='#i-like-the-idea-of-types-in-ruby-raku-is-'>I like the idea of types in Ruby. Raku is ...</a></li>
<li>⇢ ⇢ <a href='#so-haskell-is-better-suited-for-general-'>So, <span class='inlinecode'>#Haskell</span> is better suited for general ...</a></li>
<li>⇢ ⇢ <a href='#at-first-functional-options-add-a-bit-of-'>At first, functional options add a bit of ...</a></li>
<li>⇢ ⇢ <a href='#revamping-my-home-lab-a-little-bit-freebsd-'>Revamping my home lab a little bit. <span class='inlinecode'>#freebsd</span> ...</a></li>
<li>⇢ ⇢ <a href='#wondering-to-which-web-browser-i-should-'>Wondering to which <span class='inlinecode'>#web</span> <span class='inlinecode'>#browser</span> I should ...</a></li>
<li>⇢ ⇢ <a href='#eks-node-viewer-is-a-nifty-tool-showing-the-'>eks-node-viewer is a nifty tool, showing the ...</a></li>
<li>⇢ ⇢ <a href='#have-put-more-photos-on---on-my-static-photo-'>Have put more Photos on - On my static photo ...</a></li>
<li>⇢ ⇢ <a href='#in-go-passing-pointers-are-not-automatically-'>In Go, passing pointers are not automatically ...</a></li>
<li>⇢ ⇢ <a href='#myself-being-part-of-an-on-call-rotations-over-'>Myself being part of an on-call rotations over ...</a></li>
<li>⇢ ⇢ <a href='#feels-good-to-code-in-my-old-love-perl-again-'>Feels good to code in my old love <span class='inlinecode'>#Perl</span> again ...</a></li>
<li>⇢ ⇢ <a href='#this-is-an-interactive-summary-of-the-go-'>This is an interactive summary of the Go ...</a></li>
<li>⇢ <a href='#december-2024'>December 2024</a></li>
<li>⇢ ⇢ <a href='#thats-unexpected-you-cant-remove-a-nan-key-'>That&#39;s unexpected, you can&#39;t remove a NaN key ...</a></li>
<li>⇢ ⇢ <a href='#my-second-blog-post-about-revamping-my-home-lab-'>My second blog post about revamping my home lab ...</a></li>
<li>⇢ ⇢ <a href='#very-insightful-article-about-tech-hiring-in-'>Very insightful article about tech hiring in ...</a></li>
<li>⇢ ⇢ <a href='#for-bpf-ebpf-performance-debugging-have-'>for <span class='inlinecode'>#bpf</span> <span class='inlinecode'>#ebpf</span> performance debugging, have ...</a></li>
<li>⇢ ⇢ <a href='#89-things-heshe-knows-about-git-commits-is-a-'>89 things he/she knows about Git commits is a ...</a></li>
<li>⇢ ⇢ <a href='#i-found-that-working-on-multiple-side-projects-'>I found that working on multiple side projects ...</a></li>
<li>⇢ ⇢ <a href='#agreed-agreed-besides-ruby-i-would-also-'>Agreed? Agreed. Besides <span class='inlinecode'>#Ruby</span>, I would also ...</a></li>
<li>⇢ ⇢ <a href='#plan9-assembly-format-in-go-but-wait-it-s-not-'>Plan9 assembly format in Go, but wait, it&#39;s not ...</a></li>
<li>⇢ ⇢ <a href='#this-is-a-neat-blog-post-about-the-helix-text-'>This is a neat blog post about the Helix text ...</a></li>
<li>⇢ ⇢ <a href='#this-blog-post-is-basically-a-rant-against-'>This blog post is basically a rant against ...</a></li>
<li>⇢ ⇢ <a href='#quick-trick-to-get-helix-themes-selected-'>Quick trick to get Helix themes selected ...</a></li>
<li>⇢ ⇢ <a href='#example-where-complexity-attacks-you-from-'>Example where complexity attacks you from ...</a></li>
<li>⇢ ⇢ <a href='#llms-for-ops-summaries-of-logs-probabilities-'>LLMs for Ops? Summaries of logs, probabilities ...</a></li>
<li>⇢ ⇢ <a href='#excellent-article-about-your-dream-product-'>Excellent article about your dream Product ...</a></li>
<li>⇢ ⇢ <a href='#i-just-finished-reading-all-chapters-of-cpu-'>I just finished reading all chapters of CPU ...</a></li>
<li>⇢ ⇢ <a href='#indeed-useful-to-know-this-stuff-sre-'>Indeed, useful to know this stuff! <span class='inlinecode'>#sre</span> ...</a></li>
<li>⇢ ⇢ <a href='#it-s-the-small-things-which-make-unix-like-'>It&#39;s the small things, which make Unix like ...</a></li>
<li>⇢ ⇢ <a href='#my-new-year-s-resolution-is-not-to-start-any-'>My New Year&#39;s resolution is not to start any ...</a></li>
</ul><br />
<h2 style='display: inline' id='october-2024'>October 2024</h2><br />
<br />
<h3 style='display: inline' id='first-on-call-experience-in-a-startup-doesn-t-'>First on-call experience in a startup. Doesn&#39;t ...</h3><br />
<br />
<span>First on-call experience in a startup. Doesn&#39;t sound like a lot of fun! But the lessons were learned! <span class='inlinecode'>#sre</span></span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/lessons-from-my-first-on-call/'>ntietz.com/blog/lessons-from-my-first-on-call/</a><br />
<br />
<h3 style='display: inline' id='reviewing-your-own-pr-or-mr-before-asking-'>Reviewing your own PR or MR before asking ...</h3><br />
<br />
<span>Reviewing your own PR or MR before asking others to review it makes a lot of sense. I have seen so many silly mistakes that could have been avoided this way, saving time for the real reviewer.</span><br />
<br />
<a class='textlink' href='https://www.jvt.me/posts/2019/01/12/self-code-review/'>www.jvt.me/posts/2019/01/12/self-code-review/</a><br />
<br />
<h3 style='display: inline' id='fun-with-defer-in-golang-i-did-t-know-that-'>Fun with defer in <span class='inlinecode'>#golang</span>, I didn&#39;t know that ...</h3><br />
<br />
<span>Fun with defer in <span class='inlinecode'>#golang</span>: I didn&#39;t know that a deferred call can be either heap- or stack-allocated. And there are some rules for inlining, too.</span><br />
<br />
<a class='textlink' href='https://victoriametrics.com/blog/defer-in-go/'>victoriametrics.com/blog/defer-in-go/</a><br />
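One observable rule from this area is easy to demonstrate: a defer inside a loop is among the cases the compiler cannot open-code (such defers are typically heap-allocated), and deferred calls always run in LIFO order. A minimal sketch:

```go
package main

import "fmt"

// deferOrder records the order in which deferred calls run.
// A defer inside a loop is one of the cases the compiler cannot
// open-code, so each iteration's defer is typically heap-allocated.
func deferOrder() []int {
	var order []int
	func() {
		for _, i := range []int{1, 2, 3} {
			defer func(n int) { order = append(order, n) }(i)
		}
	}()
	return order
}

func main() {
	fmt.Println(deferOrder()) // [3 2 1]
}
```

The arguments are evaluated at defer time, but the calls themselves only run, last-in first-out, when the enclosing function returns.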
<br />
<h3 style='display: inline' id='i-have-been-in-incidents-understandably-'>I have been in incidents. Understandably, ...</h3><br />
<br />
<span>I have been in incidents. Understandably, everyone wants the issue to be resolved as quickly as possible, and others want to know how long the time to resolution (TTR) will be. IMHO, providing no estimates at all is no solution either. So maybe give a rough estimate, but clearly communicate that the estimate is rough and that X, Y, and Z can interfere, meaning there is a chance it will take longer to resolve the incident. Just my thought. What&#39;s yours?</span><br />
<br />
<a class='textlink' href='https://firehydrant.com/blog/hot-take-dont-provide-incident-resolution-estimates/'>firehydrant.com/blog/hot-take-dont-provide-incident-resolution-estimates/</a><br />
<br />
<h3 style='display: inline' id='little-tips-using-strings-in-golang-and-i-'>Little tips using strings in <span class='inlinecode'>#golang</span> and I ...</h3><br />
<br />
<span>Little tips for using strings in <span class='inlinecode'>#golang</span>. I personally think one should look more into the std lib (not just for strings, but also for slices, maps, ...); there are tons of useful helper functions.</span><br />
<br />
<a class='textlink' href='https://www.calhoun.io/6-tips-for-using-strings-in-go/'>www.calhoun.io/6-tips-for-using-strings-in-go/</a><br />
<br />
<h3 style='display: inline' id='reading-this-post-about-rust-especially-the-'>Reading this post about <span class='inlinecode'>#rust</span> (especially the ...</h3><br />
<br />
<span>Reading this post about <span class='inlinecode'>#rust</span> (especially the first part), I think I made a good choice in deciding to dive into <span class='inlinecode'>#golang</span> instead. There was a point where I wanted to learn a new programming language, and Rust was on my list of choices. I think the Go project does a much better job of deciding what goes into the language and how. What are your thoughts?</span><br />
<br />
<a class='textlink' href='https://josephg.com/blog/rewriting-rust/'>josephg.com/blog/rewriting-rust/</a><br />
<br />
<h3 style='display: inline' id='the-opposite-of-chaosmonkey--'>The opposite of <span class='inlinecode'>#ChaosMonkey</span> ... ...</h3><br />
<br />
<span>The opposite of <span class='inlinecode'>#ChaosMonkey</span> ... automatically repairing and healing services, helping to reduce manual toil. Runbooks and scripts are only the first step, followed by a full-blown service written in Go. Could be useful, but IMHO why not rather address the root causes of the manual toil? <span class='inlinecode'>#sre</span></span><br />
<br />
<a class='textlink' href='https://blog.cloudflare.com/nl-nl/improving-platform-resilience-at-cloudflare/'>blog.cloudflare.com/nl-nl/improving-platform-resilience-at-cloudflare/</a><br />
<br />
<h2 style='display: inline' id='november-2024'>November 2024</h2><br />
<br />
<h3 style='display: inline' id='i-just-became-a-silver-patreon-for-osnews-what-'>I just became a Silver Patreon for OSnews. What ...</h3><br />
<br />
<span>I just became a Silver Patreon for OSnews. What is OSnews? It is an independent news site about IT, at times with an alternative take. I have enjoyed it since my early student days. This one and other projects I financially support are listed here:</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-09-07-projects-i-support.gmi'>foo.zone/gemfeed/2024-09-07-projects-i-support.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-09-07-projects-i-support.html'>foo.zone/gemfeed/2024-09-07-projects-i-support.html</a><br />
<br />
<h3 style='display: inline' id='until-now-i-wasn-t-aware-that-go-is-under-a-'>Until now, I wasn&#39;t aware, that Go is under a ...</h3><br />
<br />
<span>Until now, I wasn&#39;t aware that Go is under a BSD-style license (3-clause, as it seems). Neat. I don&#39;t know why, but I had always been under the impression it would be MIT. <span class='inlinecode'>#bsd</span> <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://go.dev/LICENSE'>go.dev/LICENSE</a><br />
<br />
<h3 style='display: inline' id='these-are-some-book-notes-from-staff-engineer-'>These are some book notes from "Staff Engineer" ...</h3><br />
<br />
<span>These are some book notes from "Staff Engineer" – there is some really good insight into what is expected from a Staff Engineer and beyond in the industry. I wish I had read the book earlier.</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.gmi'>foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.html'>foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.html</a><br />
<br />
<h3 style='display: inline' id='looking-at-kubernetes-it-s-pretty-much-'>Looking at <span class='inlinecode'>#Kubernetes</span>, it&#39;s pretty much ...</h3><br />
<br />
<span>Looking at <span class='inlinecode'>#Kubernetes</span>, it&#39;s pretty much following the Unix way of doing things. It has many tools, but each tool has its own single purpose: DNS, scheduling, container runtime, various controllers, networking, observability, alerting, and more services in the control plane. Everything is managed by different services or plugins, mostly running in their dedicated pods. They don&#39;t communicate through pipes, but network sockets, though. <span class='inlinecode'>#k8s</span></span><br />
<br />
<h3 style='display: inline' id='there-has-been-an-outage-at-the-upstream-'>There has been an outage at the upstream ...</h3><br />
<br />
<span>There has been an outage at the upstream network provider for OpenBSD.Amsterdam (the hoster I am using). This was the first real-world test for my KISS HA setup, and it worked flawlessly! All my sites and services failed over automatically to my other <span class='inlinecode'>#OpenBSD</span> VM!</span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.gmi'>foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html'>foo.zone/gemfeed/2024-04-01-KISS-high-availability-with-OpenBSD.html</a><br />
<a class='textlink' href='https://openbsd.amsterdam/'>openbsd.amsterdam/</a><br />
<br />
<h3 style='display: inline' id='one-of-the-more-confusing-parts-in-go-nil-'>One of the more confusing parts in Go, nil ...</h3><br />
<br />
<span>One of the more confusing parts in Go, nil values vs nil errors: <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://unexpected-go.com/nil-errors-that-are-non-nil-errors.html'>unexpected-go.com/nil-errors-that-are-non-nil-errors.html</a><br />
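The gotcha the linked post describes can be reproduced in a few lines; `MyErr` and `mayFail` are made-up names for illustration:

```go
package main

import "fmt"

// MyErr is a custom error type used to illustrate the typed-nil trap.
type MyErr struct{ msg string }

func (e *MyErr) Error() string { return e.msg }

// mayFail returns a nil *MyErr, which is NOT a nil error once it is
// wrapped into the error interface: the interface then holds the pair
// (type *MyErr, value nil), and an interface is only nil when both are.
func mayFail() error {
	var e *MyErr // nil pointer
	return e
}

func main() {
	err := mayFail()
	fmt.Println(err == nil) // false, surprisingly
}
```

The usual fix is to return the literal `nil` from any path that did not actually produce an error.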
<br />
<h3 style='display: inline' id='agreeably-writing-down-with-diagrams-helps-you-'>Agreeably, writing down with Diagrams helps you ...</h3><br />
<br />
<span>Agreeably, writing down with diagrams helps you think things through more. And it keeps others on the same page. Only worthwhile for projects above a certain size, IMHO.</span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/reasons-to-write-design-docs/'>ntietz.com/blog/reasons-to-write-design-docs/</a><br />
<br />
<h3 style='display: inline' id='i-like-the-idea-of-types-in-ruby-raku-is-'>I like the idea of types in Ruby. Raku is ...</h3><br />
<br />
<span>I like the idea of types in Ruby. Raku supports that already, but in Ruby, you must specify the types in a separate .rbs file, which is, in my opinion, cumbersome and a reason not to use it extensively for now. I believe there are efforts to embed the type information in the standard .rb files, and that .rbs is just an experiment to see how types could work out without introducing changes into the core Ruby language itself right now. <span class='inlinecode'>#Ruby</span> <span class='inlinecode'>#RakuLang</span></span><br />
<br />
<a class='textlink' href='https://github.com/ruby/rbs'>github.com/ruby/rbs</a><br />
<br />
<h3 style='display: inline' id='so-haskell-is-better-suited-for-general-'>So, <span class='inlinecode'>#Haskell</span> is better suited for general ...</h3><br />
<br />
<span>So, <span class='inlinecode'>#Haskell</span> is better suited for general purpose than <span class='inlinecode'>#Rust</span>? I thought deploying something in Haskell means publishing an academic paper :-) Interesting rant about Rust, though:</span><br />
<br />
<a class='textlink' href='https://chrisdone.com/posts/rust/'>chrisdone.com/posts/rust/</a><br />
<br />
<h3 style='display: inline' id='at-first-functional-options-add-a-bit-of-'>At first, functional options add a bit of ...</h3><br />
<br />
<span>At first, functional options add a bit of boilerplate, but they turn out to be quite neat, especially when you have very long parameter lists that need to be made neat and tidy. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://www.calhoun.io/using-functional-options-instead-of-method-chaining-in-go/'>www.calhoun.io/using-functional-options-instead-of-method-chaining-in-go/</a><br />
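A minimal sketch of the pattern (the `Server` type and its options are hypothetical names for illustration):

```go
package main

import "fmt"

// Server is a hypothetical example type with a growing parameter list.
type Server struct {
	host string
	port int
}

// Option mutates a Server during construction.
type Option func(*Server)

func WithHost(h string) Option { return func(s *Server) { s.host = h } }
func WithPort(p int) Option    { return func(s *Server) { s.port = p } }

// NewServer applies defaults first, then each option in order,
// so callers only spell out what differs from the defaults.
func NewServer(opts ...Option) *Server {
	s := &Server{host: "localhost", port: 8080}
	for _, o := range opts {
		o(s)
	}
	return s
}

func main() {
	s := NewServer(WithPort(9090))
	fmt.Println(s.host, s.port) // localhost 9090
}
```

The variadic options keep NewServer's signature stable as parameters are added, which is exactly where the boilerplate pays off.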
<br />
<h3 style='display: inline' id='revamping-my-home-lab-a-little-bit-freebsd-'>Revamping my home lab a little bit. <span class='inlinecode'>#freebsd</span> ...</h3><br />
<br />
<span>Revamping my home lab a little bit. <span class='inlinecode'>#freebsd</span> <span class='inlinecode'>#bhyve</span> <span class='inlinecode'>#rocky</span> <span class='inlinecode'>#linux</span> <span class='inlinecode'>#vm</span> <span class='inlinecode'>#k3s</span> <span class='inlinecode'>#kubernetes</span> <span class='inlinecode'>#wireguard</span> <span class='inlinecode'>#zfs</span> <span class='inlinecode'>#nfs</span> <span class='inlinecode'>#ha</span> <span class='inlinecode'>#relayd</span> <span class='inlinecode'>#k8s</span> <span class='inlinecode'>#selfhosting</span> <span class='inlinecode'>#homelab</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.gmi'>foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html</a><br />
<br />
<h3 style='display: inline' id='wondering-to-which-web-browser-i-should-'>Wondering to which <span class='inlinecode'>#web</span> <span class='inlinecode'>#browser</span> I should ...</h3><br />
<br />
<span>Wondering to which <span class='inlinecode'>#web</span> <span class='inlinecode'>#browser</span> I should switch now personally ...</span><br />
<br />
<a class='textlink' href='https://www.osnews.com/story/141100/mozilla-foundation-lays-off-30-of-its-employees-ends-advocacy-for-open-web-privacy-and-more/'>www.osnews.com/story/141100/mozilla-fo..-..dvocacy-for-open-web-privacy-and-more/</a><br />
<br />
<h3 style='display: inline' id='eks-node-viewer-is-a-nifty-tool-showing-the-'>eks-node-viewer is a nifty tool, showing the ...</h3><br />
<br />
<span>eks-node-viewer is a nifty tool showing the compute nodes currently in use in the <span class='inlinecode'>#EKS</span> cluster. Especially useful when dynamically allocating nodes with <span class='inlinecode'>#karpenter</span> or Auto Scaling groups.</span><br />
<br />
<a class='textlink' href='https://github.com/awslabs/eks-node-viewer'>github.com/awslabs/eks-node-viewer</a><br />
<br />
<h3 style='display: inline' id='have-put-more-photos-on---on-my-static-photo-'>Have put more Photos on - On my static photo ...</h3><br />
<br />
<span>Have put more photos on my static photo sites, generated with a <span class='inlinecode'>#bash</span> script:</span><br />
<br />
<a class='textlink' href='https://irregular.ninja'>irregular.ninja</a><br />
<br />
<h3 style='display: inline' id='in-go-passing-pointers-are-not-automatically-'>In Go, passing pointers are not automatically ...</h3><br />
<br />
<span>In Go, passing pointers is not automatically faster than passing values. Pointers often force the memory to be allocated on the heap, adding GC overhead. With values, Go can determine whether to put the memory on the stack instead. But with large structs/objects (however you want to call them), or if you want to modify state, pointers are the semantics to use. <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://blog.boot.dev/golang/pointers-faster-than-values/'>blog.boot.dev/golang/pointers-faster-than-values/</a><br />
<br />
<h3 style='display: inline' id='myself-being-part-of-an-on-call-rotations-over-'>Having been part of on-call rotations over ...</h3><br />
<br />
<span>Having been part of on-call rotations over my whole professional life, I just learned this lesson: "Tell people who are new to on-call: Just have fun" :-) This is a neat blog post to read:</span><br />
<br />
<a class='textlink' href='https://ntietz.com/blog/what-i-tell-people-new-to-oncall/'>ntietz.com/blog/what-i-tell-people-new-to-oncall/</a><br />
<br />
<h3 style='display: inline' id='feels-good-to-code-in-my-old-love-perl-again-'>Feels good to code in my old love <span class='inlinecode'>#Perl</span> again ...</h3><br />
<br />
<span>Feels good to code in my old love <span class='inlinecode'>#Perl</span> again after a while. I am implementing a log parser for generating site stats of my personal homepage! :-) @Perl</span><br />
<br />
<h3 style='display: inline' id='this-is-an-interactive-summary-of-the-go-'>This is an interactive summary of the Go ...</h3><br />
<br />
<span>This is an interactive summary of the Go 1.23 release, with a lot of examples utilising iterators in the slices and maps packages. Love it! <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://antonz.org/go-1-23/'>antonz.org/go-1-23/</a><br />
<br />
<h2 style='display: inline' id='december-2024'>December 2024</h2><br />
<br />
<h3 style='display: inline' id='thats-unexpected-you-cant-remove-a-nan-key-'>That&#39;s unexpected, you can&#39;t remove a NaN key ...</h3><br />
<br />
<span>That&#39;s unexpected: you can&#39;t remove a NaN key from a map without clearing it! <span class='inlinecode'>#golang</span></span><br />
<br />
<a class='textlink' href='https://unexpected-go.com/you-cant-remove-a-nan-key-from-a-map-without-clearing-it.html'>unexpected-go.com/you-cant-remove-a-nan-key-from-a-map-without-clearing-it.html</a><br />
<br />
<h3 style='display: inline' id='my-second-blog-post-about-revamping-my-home-lab-'>My second blog post about revamping my home lab ...</h3><br />
<br />
<span>My second blog post about revamping my home lab a little bit just hit the net. <span class='inlinecode'>#FreeBSD</span> <span class='inlinecode'>#ZFS</span> <span class='inlinecode'>#n100</span> <span class='inlinecode'>#k8s</span> <span class='inlinecode'>#k3s</span> <span class='inlinecode'>#kubernetes</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi'>foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html</a><br />
<br />
<h3 style='display: inline' id='very-insightful-article-about-tech-hiring-in-'>Very insightful article about tech hiring in ...</h3><br />
<br />
<span>Very insightful article about tech hiring in the age of LLMs. As an interviewer, I have already experienced some of the scenarios first-hand...</span><br />
<br />
<a class='textlink' href='https://newsletter.pragmaticengineer.com/p/how-genai-changes-tech-hiring'>newsletter.pragmaticengineer.com/p/how-genai-changes-tech-hiring</a><br />
<br />
<h3 style='display: inline' id='for-bpf-ebpf-performance-debugging-have-'>For <span class='inlinecode'>#bpf</span> <span class='inlinecode'>#ebpf</span> performance debugging, have ...</h3><br />
<br />
<span>For <span class='inlinecode'>#bpf</span> <span class='inlinecode'>#ebpf</span> performance debugging, have a look at bpftop from Netflix. A neat tool showing you the estimated CPU time and other performance statistics for all the BPF programs currently loaded into the <span class='inlinecode'>#linux</span> kernel. Highly recommended!</span><br />
<br />
<a class='textlink' href='https://github.com/Netflix/bpftop'>github.com/Netflix/bpftop</a><br />
<br />
<h3 style='display: inline' id='89-things-heshe-knows-about-git-commits-is-a-'>89 things he/she knows about Git commits is a ...</h3><br />
<br />
<span>89 things he/she knows about Git commits is a neat list of <span class='inlinecode'>#Git</span> wisdom</span><br />
<br />
<a class='textlink' href='https://www.jvt.me/posts/2024/07/12/things-know-commits/'>www.jvt.me/posts/2024/07/12/things-know-commits/</a><br />
<br />
<h3 style='display: inline' id='i-found-that-working-on-multiple-side-projects-'>I found that working on multiple side projects ...</h3><br />
<br />
<span>I found that working on multiple side projects concurrently is better than concentrating on just one. This seems inefficient at first, but whenever you tend to lose motivation, you can temporarily switch to another one with full élan. However, remember to stop starting and start finishing. This doesn&#39;t mean you should be working on 10+ (and a growing list of) side projects concurrently! Select your projects and commit to finishing them before starting the next thing. For example, my current limit of concurrent side projects is around five.</span><br />
<br />
<h3 style='display: inline' id='agreed-agreed-besides-ruby-i-would-also-'>Agreed? Agreed. Besides <span class='inlinecode'>#Ruby</span>, I would also ...</h3><br />
<br />
<span>Agreed? Agreed. Besides <span class='inlinecode'>#Ruby</span>, I would also add <span class='inlinecode'>#RakuLang</span> and <span class='inlinecode'>#Perl</span> @Perl to the list of languages that are great for shell scripts - "Making Easy Things Easy and Hard Things Possible"</span><br />
<br />
<a class='textlink' href='https://lucasoshiro.github.io/posts-en/2024-06-17-ruby-shellscript/'>lucasoshiro.github.io/posts-en/2024-06-17-ruby-shellscript/</a><br />
<br />
<h3 style='display: inline' id='plan9-assembly-format-in-go-but-wait-it-s-not-'>Plan9 assembly format in Go, but wait, it&#39;s not ...</h3><br />
<br />
<span>Plan9 assembly format in Go, but wait, it&#39;s not the Plan 9 operating system! <span class='inlinecode'>#golang</span> <span class='inlinecode'>#rabbithole</span></span><br />
<br />
<a class='textlink' href='https://www.osnews.com/story/140941/go-plan9-memo-speeding-up-calculations-450/'>www.osnews.com/story/140941/go-plan9-memo-speeding-up-calculations-450/</a><br />
<br />
<h3 style='display: inline' id='this-is-a-neat-blog-post-about-the-helix-text-'>This is a neat blog post about the Helix text ...</h3><br />
<br />
<span>This is a neat blog post about the Helix text editor, to which I personally switched around a year ago (from NeoVim). I should blog about my experience as well. To summarize: I am using it together with the terminal multiplexer <span class='inlinecode'>#tmux</span>. It doesn&#39;t bother me that Helix is purely terminal-based and therefore everything has to be in the same font. <span class='inlinecode'>#HelixEditor</span></span><br />
<br />
<a class='textlink' href='https://jonathan-frere.com/posts/helix/'>jonathan-frere.com/posts/helix/</a><br />
<br />
<h3 style='display: inline' id='this-blog-post-is-basically-a-rant-against-'>This blog post is basically a rant against ...</h3><br />
<br />
<span>This blog post is basically a rant against DataDog... Personally, I don&#39;t have much experience with DataDog (actually, I have never used it), but one way to work with logs cost-effectively at my day job (with over 2,000 physical server machines) is dtail! <span class='inlinecode'>#dtail</span> <span class='inlinecode'>#logs</span> <span class='inlinecode'>#logmanagement</span></span><br />
<br />
<a class='textlink' href='https://crys.site/blog/2024/reinventint-the-weel/'>crys.site/blog/2024/reinventint-the-weel/</a><br />
<a class='textlink' href='https://dtail.dev'>dtail.dev</a><br />
<br />
<h3 style='display: inline' id='quick-trick-to-get-helix-themes-selected-'>Quick trick to get Helix themes selected ...</h3><br />
<br />
<span>Quick trick to get Helix themes selected randomly <span class='inlinecode'>#HelixEditor</span></span><br />
<br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-12-15-random-helix-themes.gmi'>foo.zone/gemfeed/2024-12-15-random-helix-themes.gmi (Gemini)</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2024-12-15-random-helix-themes.html'>foo.zone/gemfeed/2024-12-15-random-helix-themes.html</a><br />
<br />
<h3 style='display: inline' id='example-where-complexity-attacks-you-from-'>Example where complexity attacks you from ...</h3><br />
<br />
<span>Example where complexity attacks you from behind <span class='inlinecode'>#k8s</span> <span class='inlinecode'>#kubernetes</span> <span class='inlinecode'>#OpenAI</span></span><br />
<br />
<a class='textlink' href='https://surfingcomplexity.blog/2024/12/14/quick-takes-on-the-recent-openai-public-incident-write-up/'>surfingcomplexity.blog/2024/12/14/quic..-..ecent-openai-public-incident-write-up/</a><br />
<br />
<h3 style='display: inline' id='llms-for-ops-summaries-of-logs-probabilities-'>LLMs for Ops? Summaries of logs, probabilities ...</h3><br />
<br />
<span>LLMs for Ops? Summaries of logs, probabilities about correctness, auto-generating Ansible, some use cases are there. Wouldn&#39;t trust it fully, though.</span><br />
<br />
<a class='textlink' href='https://youtu.be/WodaffxVq-E?si=noY0egrfl5izCSQI'>youtu.be/WodaffxVq-E?si=noY0egrfl5izCSQI</a><br />
<br />
<h3 style='display: inline' id='excellent-article-about-your-dream-product-'>Excellent article about your dream Product ...</h3><br />
<br />
<span>Excellent article about your dream Product Manager: Why every software team needs a product manager to thrive via @wallabagapp</span><br />
<br />
<a class='textlink' href='https://testdouble.com/insights/why-product-managers-accelerate-improve-software-delivery'>testdouble.com/insights/why-product-ma..-..s-accelerate-improve-software-delivery</a><br />
<br />
<h3 style='display: inline' id='i-just-finished-reading-all-chapters-of-cpu-'>I just finished reading all chapters of CPU ...</h3><br />
<br />
<span>I just finished reading all chapters of CPU land: ... not claiming to remember every detail, but it is a great refresher on how CPUs and operating systems actually work under the hood when you execute a program, which we tend to forget in our higher-abstraction world. I liked the "story" and some of the jokes along the way! Size-wise, it is pretty digestible (we&#39;re not talking about books here, just 7 web articles/chapters)! <span class='inlinecode'>#cpu</span> <span class='inlinecode'>#linux</span> <span class='inlinecode'>#unix</span> <span class='inlinecode'>#kernel</span> <span class='inlinecode'>#macOS</span></span><br />
<br />
<a class='textlink' href='https://cpu.land/'>cpu.land/</a><br />
<br />
<h3 style='display: inline' id='indeed-useful-to-know-this-stuff-sre-'>Indeed, useful to know this stuff! <span class='inlinecode'>#sre</span> ...</h3><br />
<br />
<span>Indeed, useful to know this stuff! <span class='inlinecode'>#sre</span></span><br />
<br />
<a class='textlink' href='https://biriukov.dev/docs/resolver-dual-stack-application/0-sre-should-know-about-gnu-linux-resolvers-and-dual-stack-applications/'>biriukov.dev/docs/resolver-dual-stack-..-..resolvers-and-dual-stack-applications/</a><br />
<br />
<h3 style='display: inline' id='it-s-the-small-things-which-make-unix-like-'>It&#39;s the small things that make Unix-like ...</h3><br />
<br />
<span>It&#39;s the small things that make Unix-like systems, such as GNU/Linux, interesting. I didn&#39;t know about this <span class='inlinecode'>#GNU</span> <span class='inlinecode'>#Tar</span> behaviour yet:</span><br />
<br />
<a class='textlink' href='https://xeiaso.net/notes/2024/pop-quiz-tar/'>xeiaso.net/notes/2024/pop-quiz-tar/</a><br />
<br />
<h3 style='display: inline' id='my-new-year-s-resolution-is-not-to-start-any-'>My New Year&#39;s resolution is not to start any ...</h3><br />
<br />
<span>My New Year&#39;s resolution is not to start any new non-fiction books (or only very few) but to re-read and listen to my favorites, which I read to reflect on and see things from different perspectives. Every time you re-read a book, you gain new insights.</span><br />
<br />
<span>Other related posts:</span><br />
<br />
<a class='textlink' href='./2026-01-01-posts-from-july-to-december-2025.html'>2026-01-01 Posts from July to December 2025</a><br />
<a class='textlink' href='./2025-07-01-posts-from-january-to-june-2025.html'>2025-07-01 Posts from January to June 2025</a><br />
<a class='textlink' href='./2025-01-01-posts-from-october-to-december-2024.html'>2025-01-01 Posts from October to December 2024 (You are currently reading this)</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Random Helix Themes</title>
        <link href="https://foo.zone/gemfeed/2024-12-15-random-helix-themes.html" />
        <id>https://foo.zone/gemfeed/2024-12-15-random-helix-themes.html</id>
        <updated>2024-12-15T13:55:05+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>I thought it would be fun to have a random Helix theme every time I open a new shell. Helix is the text editor I use.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='random-helix-themes'>Random Helix Themes</h1><br />
<br />
<span class='quote'>Published at 2024-12-15T13:55:05+02:00; Last updated 2024-12-18</span><br />
<br />
<span>I thought it would be fun to have a random Helix theme every time I open a new shell. Helix is the text editor I use.</span><br />
<br />
<a class='textlink' href='https://helix-editor.com/'>https://helix-editor.com/</a><br />
<br />
<span>So I put this into my <span class='inlinecode'>zsh</span> dotfiles (in some <span class='inlinecode'>editor.zsh.source</span> in my <span class='inlinecode'>~</span> directory):</span><br />
<br />
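<span>A minimal sketch of what such a function could look like (the theme and config paths are assumptions, and it relies on GNU <span class='inlinecode'>shuf</span> and <span class='inlinecode'>sed -i</span>, so it is Linux-only):</span><br />
<br />
<pre># Pick a random Helix theme for every new shell (sketch, paths assumed)
editor::helix::random_theme() {
    local themes_dir=/usr/share/helix/runtime/themes
    local config=~/.config/helix/config.toml
    # Select one random theme name from all installed theme files
    local theme=$(basename $(ls $themes_dir/*.toml | shuf -n 1) .toml)
    # Rewrite the theme line of the Helix config in place
    sed -i "s/^theme = .*/theme = \"$theme\"/" $config
}
editor::helix::random_theme
</pre>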
<br />
<span>So every time I open a new terminal or shell, <span class='inlinecode'>editor::helix::random_theme</span> gets called, which randomly selects a theme from all installed ones and updates the helix config accordingly.</span><br />
<br />
<br />
<h2 style='display: inline' id='a-better-version'>A better version</h2><br />
<br />
<span class='quote'>Update 2024-12-18: This is an improved version, which works cross-platform (e.g., also on macOS) and supports multiple theme directories:</span><br />
<br />
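<span>A sketch of one way to do this: collect theme names from several (assumed) directories with zsh glob qualifiers, pick one via zsh&#39;s own <span class='inlinecode'>RANDOM</span> instead of <span class='inlinecode'>shuf</span>, and avoid the BSD/GNU <span class='inlinecode'>sed -i</span> incompatibility:</span><br />
<br />
<pre># Cross-platform sketch; the theme directories are assumptions
editor::helix::random_theme() {
    local -a themes
    local dir
    for dir in /usr/share/helix/runtime/themes \
               /opt/homebrew/opt/helix/libexec/runtime/themes \
               ~/.config/helix/themes; do
        # (N) skips missing dirs, :t:r strips the path and .toml extension
        themes+=($dir/*.toml(N:t:r))
    done
    (( $#themes )) || return
    local theme=$themes[$((RANDOM % $#themes + 1))]
    local config=~/.config/helix/config.toml
    # Write to a temp file instead of sed -i (BSD and GNU sed differ here)
    sed "s/^theme = .*/theme = \"$theme\"/" $config &gt; $config.tmp &amp;&amp;
        mv $config.tmp $config
}
</pre>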
<br />
<span>I hope you had some fun. E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</title>
        <link href="https://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html" />
        <id>https://foo.zone/gemfeed/2024-12-03-f3s-kubernetes-with-freebsd-part-2.html</id>
<updated>2026-01-11T10:30:00+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-2-hardware-and-base-installation'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</h1><br />
<br />
<span class='quote'>Published at 2024-12-02T23:48:21+02:00, last updated 2026-01-11T10:30:00+02:00</span><br />
<br />
<span>This is the second blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</span><br />
<br />
<span>We set the stage last time; this time, we will set up the hardware for this project. </span><br />
<br />
<span>These are all the posts so far:</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<span class='quote'>ChatGPT-generated logo.</span><br />
<br />
<span>Let&#39;s continue...</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-2-hardware-and-base-installation'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a></li>
<li><a href='#deciding-on-the-hardware'>Deciding on the hardware</a></li>
<li>⇢ <a href='#not-arm-but-intel-n100-'>Not ARM but Intel N100 </a></li>
<li>⇢ <a href='#beelink-unboxing'>Beelink unboxing</a></li>
<li>⇢ <a href='#network-switch'>Network switch</a></li>
<li><a href='#installing-freebsd'>Installing FreeBSD</a></li>
<li>⇢ <a href='#base-install'>Base install</a></li>
<li>⇢ <a href='#latest-patch-level-and-customizing-etchosts'>Latest patch level and customizing <span class='inlinecode'>/etc/hosts</span></a></li>
<li>⇢ <a href='#after-install'>After install</a></li>
<li>⇢ ⇢ <a href='#helix-editor'>Helix editor</a></li>
<li>⇢ ⇢ <a href='#doas'><span class='inlinecode'>doas</span></a></li>
<li>⇢ ⇢ <a href='#periodic-zfs-snapshotting'>Periodic ZFS snapshotting</a></li>
<li>⇢ ⇢ <a href='#uptime-tracking'>Uptime tracking</a></li>
<li><a href='#hardware-check'>Hardware check</a></li>
<li>⇢ <a href='#ethernet'>Ethernet</a></li>
<li>⇢ <a href='#ram'>RAM</a></li>
<li>⇢ <a href='#cpus'>CPUs</a></li>
<li>⇢ <a href='#cpu-throttling'>CPU throttling</a></li>
<li><a href='#wake-on-lan-setup'>Wake-on-LAN Setup</a></li>
<li>⇢ <a href='#setting-up-wol-on-the-laptop'>Setting up WoL on the laptop</a></li>
<li>⇢ <a href='#testing-wol-and-shutdown'>Testing WoL and Shutdown</a></li>
<li>⇢ <a href='#bios-configuration'>BIOS Configuration</a></li>
<li><a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h1 style='display: inline' id='deciding-on-the-hardware'>Deciding on the hardware</h1><br />
<br />
<span>Note that the OpenBSD VMs included in the f3s setup (which, as you know from the first part, will be used later in this series for internet ingress) are already in place. These are virtual machines that I rent at OpenBSD Amsterdam and Hetzner.</span><br />
<br />
<a class='textlink' href='https://openbsd.amsterdam'>https://openbsd.amsterdam</a><br />
<a class='textlink' href='https://hetzner.cloud'>https://hetzner.cloud</a><br />
<br />
<span>This leaves the FreeBSD boxes to be covered, which will later run k3s in Linux VMs via the bhyve hypervisor.</span><br />
<br />
<span>I&#39;ve been considering whether to use Raspberry Pis or look for alternatives. It turns out that complete N100-based mini-computers aren&#39;t much more expensive than Raspberry Pi 5s, and they don&#39;t require assembly. Furthermore, I like that they are AMD64 and not ARM-based, which increases compatibility with some applications (e.g., I might want to virtualize Windows (via bhyve) on one of those, though that&#39;s out of scope for this blog series).</span><br />
<br />
<h2 style='display: inline' id='not-arm-but-intel-n100-'>Not ARM but Intel N100 </h2><br />
<br />
<span>I needed something compact, efficient, and capable enough to handle the demands of a small-scale Kubernetes cluster and preferably something I don&#39;t have to assemble a lot. After researching, I decided on the Beelink S12 Pro with Intel N100 CPUs.</span><br />
<br />
<a class='textlink' href='https://www.bee-link.com/products/beelink-mini-s12-pro-n100'>Beelink Mini S12 Pro N100 official page</a><br />
<br />
<span>The Intel N100 CPUs are built on the "Alder Lake-N" architecture. These chips are designed to balance performance and energy efficiency well. With four cores, they&#39;re more than capable of running multiple containers, even with moderate workloads. Plus, they consume only around 8W of power (ok, that&#39;s more than the Pis...), keeping the electricity bill low enough and the setup quiet - perfect for 24/7 operation.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-2/f3s-collage1.jpg'><img alt='Beelink preparation' title='Beelink preparation' src='./f3s-kubernetes-with-freebsd-part-2/f3s-collage1.jpg' /></a><br />
<br />
<span>The Beelink comes with the following specs:</span><br />
<br />
<ul>
<li>12th Gen Intel N100 processor, with four cores and four threads, and a maximum frequency of up to 3.4 GHz.</li>
<li>16 GB of DDR4 RAM, with an official maximum of 16 GB (though people have reportedly installed 32 GB).</li>
<li>500 GB M.2 SSD, with the option to install a second 2.5" SSD (which I want to make use of later in this blog series).</li>
<li>Gigabit Ethernet</li>
<li>Four USB 3.2 Gen2 ports (maybe I want to mount something externally at some point)</li>
<li>Dimensions and weight: 115 × 102 × 39 mm, 280 g</li>
<li>Silent cooling system.</li>
<li>HDMI output (needed only for the initial installation and maybe for troubleshooting later)</li>
<li>Auto power on via WoL (may make use of it)</li>
<li>Wi-Fi (not going to use it)</li>
</ul><br />
<span>I bought three of them for the cluster I intend to build.</span><br />
<br />
<h2 style='display: inline' id='beelink-unboxing'>Beelink unboxing</h2><br />
<br />
<span>Unboxing was uneventful. Every Beelink PC came with: </span><br />
<br />
<ul>
<li>An AC power adapter</li>
<li>An HDMI cable</li>
<li>A VESA mount with screws (not using it as of now)</li>
<li>Some manuals</li>
<li>The pre-assembled Beelink PC itself.</li>
<li>A "Hello" postcard (??)</li>
</ul><br />
<span>Overall, I love the small form factor.</span><br />
<br />
<h2 style='display: inline' id='network-switch'>Network switch</h2><br />
<br />
<span>I went with a TP-Link mini 5-port switch, as I had a spare one available. That switch is plugged into my wall Ethernet port, which connects directly to my fiber internet router with 100 Mbit/s download and 50 Mbit/s upload speed.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-2/switch.jpg'><img alt='Switch' title='Switch' src='./f3s-kubernetes-with-freebsd-part-2/switch.jpg' /></a><br />
<br />
<h1 style='display: inline' id='installing-freebsd'>Installing FreeBSD</h1><br />
<br />
<h2 style='display: inline' id='base-install'>Base install</h2><br />
<br />
<span>First, I downloaded the boot-only ISO of the latest FreeBSD release and dumped it on a USB stick via my Fedora laptop:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[paul@earth]~/Downloads% sudo dd \
  <b><u><font color="#000000">if</font></u></b>=FreeBSD-<font color="#000000">14.1</font>-RELEASE-amd<font color="#000000">64</font>-bootonly.iso \
  of=/dev/sda conv=sync
</pre>
<br />
<span>Next, I plugged the Beelinks (one after another) into my monitor via HDMI (the resolution of the FreeBSD text console seems strangely stretched, as I am using the LG Dual Up monitor), connected Ethernet, an external USB keyboard, and the FreeBSD USB stick, and booted the devices up. With F7, I entered the boot menu and selected the USB stick for the FreeBSD installation.</span><br />
<br />
<span>The installation was uneventful. I selected:</span><br />
<br />
<ul>
<li>Guided ZFS on root (pool <span class='inlinecode'>zroot</span>)</li>
<li>Unencrypted ZFS (I will encrypt separate datasets later; I want it to be able to boot without manual interaction)</li>
<li>Static IP configuration (to ensure that the boxes always have the same IPs, even after switching the router/DHCP server)</li>
<li>I decided to enable the SSH daemon, NTP server, and NTP time synchronization at boot, and I also enabled <span class='inlinecode'>powerd</span> for automatic CPU frequency scaling.</li>
<li>In addition to <span class='inlinecode'>root</span>, I added a personal user, <span class='inlinecode'>paul</span>, whom I placed in the <span class='inlinecode'>wheel</span> group.</li>
</ul><br />
<span>After doing all that three times (once for each Beelink PC), I had three ready-to-use FreeBSD boxes! Their hostnames are <span class='inlinecode'>f0</span>, <span class='inlinecode'>f1</span> and <span class='inlinecode'>f2</span>!</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-2/f3s-collage2.jpg'><img alt='Beelink installation' title='Beelink installation' src='./f3s-kubernetes-with-freebsd-part-2/f3s-collage2.jpg' /></a><br />
<br />
<h2 style='display: inline' id='latest-patch-level-and-customizing-etchosts'>Latest patch level and customizing <span class='inlinecode'>/etc/hosts</span></h2><br />
<br />
<span>After the first boot, I upgraded to the latest FreeBSD patch level as follows:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@f0:~ <i><font color="silver"># freebsd-update fetch</font></i>
root@f0:~ <i><font color="silver"># freebsd-update install</font></i>
root@f0:~ <i><font color="silver"># reboot</font></i>
</pre>
<br />
<span>I also added the following entries for the three FreeBSD boxes to the <span class='inlinecode'>/etc/hosts</span> file:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@f0:~ <i><font color="silver"># cat &lt;&lt;END &gt;&gt;/etc/hosts</font></i>
<font color="#000000">192.168</font>.<font color="#000000">1.130</font> f0 f0.lan f0.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.131</font> f1 f1.lan f1.lan.buetow.org
<font color="#000000">192.168</font>.<font color="#000000">1.132</font> f2 f2.lan f2.lan.buetow.org
END
</pre>
<br />
<span>You might wonder: why bother with the hosts file? Why not use DNS properly? The reason is simplicity. I don&#39;t manage 100 hosts, only a few here and there. Having an OpenWRT router in my home, I could also configure everything there, but maybe I&#39;ll do that later. For now, keep it simple and straightforward.</span><br />
<br />
<h2 style='display: inline' id='after-install'>After install</h2><br />
<br />
<span>After that, I installed the following additional packages:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@f0:~ <i><font color="silver"># pkg install helix doas zfs-periodic uptimed</font></i>
</pre>
<br />
<h3 style='display: inline' id='helix-editor'>Helix editor</h3><br />
<br />
<span>Helix? It&#39;s my favourite text editor. I have nothing against <span class='inlinecode'>vi</span> but like <span class='inlinecode'>hx</span> (Helix) more!</span><br />
<br />
<a class='textlink' href='https://helix-editor.com/'>https://helix-editor.com/</a><br />
<br />
<h3 style='display: inline' id='doas'><span class='inlinecode'>doas</span></h3><br />
<br />
<span><span class='inlinecode'>doas</span>? It&#39;s a pretty neat (and KISS) replacement for <span class='inlinecode'>sudo</span>. It has far fewer features than <span class='inlinecode'>sudo</span>, which is supposed to make it more secure. Its origin is the OpenBSD project. For <span class='inlinecode'>doas</span>, I accepted the default configuration (where users in the <span class='inlinecode'>wheel</span> group are allowed to run commands as <span class='inlinecode'>root</span>):</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@f0:~ <i><font color="silver"># cp /usr/local/etc/doas.conf.sample /usr/local/etc/doas.conf</font></i>
</pre>
<br />
<a class='textlink' href='https://man.openbsd.org/doas'>https://man.openbsd.org/doas</a><br />
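<br />
<span>The rule this boils down to is a single line; a sketch of the relevant part of <span class='inlinecode'>doas.conf</span> (exact sample file contents may differ between versions):</span><br />
<br />
<pre># /usr/local/etc/doas.conf
# Allow members of the wheel group to run commands as root
permit :wheel
</pre>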
<br />
<h3 style='display: inline' id='periodic-zfs-snapshotting'>Periodic ZFS snapshotting</h3><br />
<br />
<span><span class='inlinecode'>zfs-periodic</span> is a nifty tool for automatically creating ZFS snapshots. I decided to go with the following configuration here:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@f0:~ <i><font color="silver"># cat &lt;&lt;END &gt;&gt;/etc/periodic.conf</font></i>
daily_zfs_snapshot_enable=<font color="#808080">"YES"</font>
daily_zfs_snapshot_pools=<font color="#808080">"zroot"</font>
daily_zfs_snapshot_keep=<font color="#808080">"7"</font>
weekly_zfs_snapshot_enable=<font color="#808080">"YES"</font>
weekly_zfs_snapshot_pools=<font color="#808080">"zroot"</font>
weekly_zfs_snapshot_keep=<font color="#808080">"5"</font>
monthly_zfs_snapshot_enable=<font color="#808080">"YES"</font>
monthly_zfs_snapshot_pools=<font color="#808080">"zroot"</font>
monthly_zfs_snapshot_keep=<font color="#808080">"6"</font>
END
</pre>
<br />
<a class='textlink' href='https://github.com/ross/zfs-periodic'>https://github.com/ross/zfs-periodic</a><br />
<br />
<span>Note: We have not added <span class='inlinecode'>zdata</span> to the list of snapshot pools. This pool does not exist yet; it will be created later in this blog series, and <span class='inlinecode'>zrepl</span>, which we will use for replication, will manage its snapshots.</span><br />
<br />
<h3 style='display: inline' id='uptime-tracking'>Uptime tracking</h3><br />
<br />
<span><span class='inlinecode'>uptimed</span>? I like to track my uptimes. This is how I configured the daemon:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@f0:~ <i><font color="silver"># cp /usr/local/etc/uptimed.conf-dist \</font></i>
  /usr/local/etc/uptimed.conf
root@f0:~ <i><font color="silver"># hx /usr/local/etc/uptimed.conf</font></i>
</pre>
<br />
<span>In the Helix editor session, I changed <span class='inlinecode'>LOG_MAXIMUM_ENTRIES</span> to <span class='inlinecode'>0</span> to keep all uptime entries forever instead of cutting off at the default of 50. After that, I enabled and started <span class='inlinecode'>uptimed</span>:</span><br />
<br />
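The same edit can also be scripted instead of done in an editor. A sketch using a portable `sed` invocation (writing to a copy first, since the `-i` flag differs between GNU and BSD sed; the file name here is a stand-in for the installed config path):

```shell
#!/bin/sh
# Set LOG_MAXIMUM_ENTRIES to 0 non-interactively so uptimed keeps all
# records. "uptimed.conf" stands in for the installed config file.
conf=uptimed.conf

# Create a sample config for demonstration (the shipped default is 50).
printf 'LOG_MAXIMUM_ENTRIES=50\n' > "$conf"

# Rewrite the setting; write to a temp copy, then move it into place.
sed 's/^LOG_MAXIMUM_ENTRIES=.*/LOG_MAXIMUM_ENTRIES=0/' "$conf" > "$conf.new" &&
    mv "$conf.new" "$conf"

grep LOG_MAXIMUM_ENTRIES "$conf"   # LOG_MAXIMUM_ENTRIES=0
```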
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>root@f0:~ <i><font color="silver"># service uptimed enable</font></i>
root@f0:~ <i><font color="silver"># service uptimed start</font></i>
</pre>
<br />
<span>To check the current uptime stats, I can now run <span class='inlinecode'>uprecords</span>:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre> root@f0:~ <i><font color="silver"># uprecords</font></i>
     <i><font color="silver">#               Uptime | System                                     Boot up</font></i>
----------------------------+---------------------------------------------------
-&gt;   <font color="#000000">1</font>     <font color="#000000">0</font> days, <font color="#000000">00</font>:<font color="#000000">07</font>:<font color="#000000">34</font> | FreeBSD <font color="#000000">14.1</font>-RELEASE      Mon Dec  <font color="#000000">2</font> <font color="#000000">12</font>:<font color="#000000">21</font>:<font color="#000000">44</font> <font color="#000000">2024</font>
----------------------------+---------------------------------------------------
NewRec     <font color="#000000">0</font> days, <font color="#000000">00</font>:<font color="#000000">07</font>:<font color="#000000">33</font> | since                     Mon Dec  <font color="#000000">2</font> <font color="#000000">12</font>:<font color="#000000">21</font>:<font color="#000000">44</font> <font color="#000000">2024</font>
    up     <font color="#000000">0</font> days, <font color="#000000">00</font>:<font color="#000000">07</font>:<font color="#000000">34</font> | since                     Mon Dec  <font color="#000000">2</font> <font color="#000000">12</font>:<font color="#000000">21</font>:<font color="#000000">44</font> <font color="#000000">2024</font>
  down     <font color="#000000">0</font> days, <font color="#000000">00</font>:<font color="#000000">00</font>:<font color="#000000">00</font> | since                     Mon Dec  <font color="#000000">2</font> <font color="#000000">12</font>:<font color="#000000">21</font>:<font color="#000000">44</font> <font color="#000000">2024</font>
   %up              <font color="#000000">100.000</font> | since                     Mon Dec  <font color="#000000">2</font> <font color="#000000">12</font>:<font color="#000000">21</font>:<font color="#000000">44</font> <font color="#000000">2024</font>
</pre>
<br />
<span>This is how I track the uptimes for all of my hosts:</span><br />
<br />
<a class='textlink' href='./2023-05-01-unveiling-guprecords:-uptime-records-with-raku.html'>Unveiling <span class='inlinecode'>guprecords.raku</span>: Global Uptime Records with Raku</a><br />
<a class='textlink' href='https://github.com/rpodgorny/uptimed'>https://github.com/rpodgorny/uptimed</a><br />
<br />
<h1 style='display: inline' id='hardware-check'>Hardware check</h1><br />
<br />
<h2 style='display: inline' id='ethernet'>Ethernet</h2><br />
<br />
<span>Works. Nothing eventful, really. It&#39;s a cheap Realtek chip, but it will do what it is supposed to do.</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % ifconfig re0
re0: flags=<font color="#000000">1008843</font>&lt;UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP&gt; metric <font color="#000000">0</font> mtu <font color="#000000">1500</font>
        options=8209b&lt;RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE&gt;
        ether e8:ff:1e:d7:1c:ac
        inet <font color="#000000">192.168</font>.<font color="#000000">1.130</font> netmask <font color="#000000">0xffffff00</font> broadcast <font color="#000000">192.168</font>.<font color="#000000">1.255</font>
        inet6 fe80::eaff:1eff:fed7:1cac%re0 prefixlen <font color="#000000">64</font> scopeid <font color="#000000">0x1</font>
        inet6 fd22:c702:acb7:<font color="#000000">0</font>:eaff:1eff:fed7:1cac prefixlen <font color="#000000">64</font> detached autoconf
        inet6 2a01:5a8:<font color="#000000">304</font>:1d5c:eaff:1eff:fed7:1cac prefixlen <font color="#000000">64</font> autoconf pltime <font color="#000000">10800</font> vltime <font color="#000000">14400</font>
        media: Ethernet autoselect (1000baseT &lt;full-duplex&gt;)
        status: active
        nd6 options=<font color="#000000">23</font>&lt;PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL&gt;
</pre>
<br />
<h2 style='display: inline' id='ram'>RAM</h2><br />
<br />
<span>All there:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % sysctl hw.physmem
hw.physmem: <font color="#000000">16902905856</font>

</pre>
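That number is slightly under the full 16 GiB (17179869184 bytes) of a nominal 16 GB machine, as a little memory is reserved by firmware before FreeBSD sees it. A quick integer-math sanity check:

```shell
#!/bin/sh
# Convert the reported hw.physmem byte count to whole GiB.
bytes=16902905856
gib=$((bytes / 1024 / 1024 / 1024))
echo "$gib GiB usable"   # 15 GiB (integer part of ~15.7 GiB)
```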
<br />
<h2 style='display: inline' id='cpus'>CPUs</h2><br />
<br />
<span>They work:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % sysctl dev.cpu | grep freq:
dev.cpu.<font color="#000000">3</font>.freq: <font color="#000000">705</font>
dev.cpu.<font color="#000000">2</font>.freq: <font color="#000000">705</font>
dev.cpu.<font color="#000000">1</font>.freq: <font color="#000000">604</font>
dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">604</font>
</pre>
<br />
<h2 style='display: inline' id='cpu-throttling'>CPU throttling</h2><br />
<br />
<span>With <span class='inlinecode'>powerd</span> running, the CPU frequency is throttled down when the box isn&#39;t busy. To stress it a bit, I ran <span class='inlinecode'>ubench</span> to see the frequencies being unthrottled again:</span><br />
<br />
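For reference, `powerd` is enabled through rc.conf. A minimal fragment; the policy flags here are an assumed sensible default, not copied from my actual setup:

```
# /etc/rc.conf: enable powerd for CPU frequency scaling
powerd_enable="YES"
# Optional tuning (assumed values): hiadaptive on AC, adaptive on battery
powerd_flags="-a hiadaptive -b adaptive"
```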
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>paul@f0:~ % doas pkg install ubench
paul@f0:~ % rehash <i><font color="silver"># For tcsh to find the newly installed command</font></i>
paul@f0:~ % ubench &amp;
paul@f0:~ % sysctl dev.cpu | grep freq:
dev.cpu.<font color="#000000">3</font>.freq: <font color="#000000">2922</font>
dev.cpu.<font color="#000000">2</font>.freq: <font color="#000000">2922</font>
dev.cpu.<font color="#000000">1</font>.freq: <font color="#000000">2923</font>
dev.cpu.<font color="#000000">0</font>.freq: <font color="#000000">2922</font>
</pre>
<br />
<span>Idle, all three Beelinks plus the switch consumed 26.2W. But with <span class='inlinecode'>ubench</span> stressing all the CPUs, it went up to 38.8W.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-2/watt.jpg'><img alt='Idle consumption.' title='Idle consumption.' src='./f3s-kubernetes-with-freebsd-part-2/watt.jpg' /></a><br />
<br />
<h1 style='display: inline' id='wake-on-lan-setup'>Wake-on-LAN Setup</h1><br />
<br />
<span class='quote'>Updated Sun 11 Jan 10:30:00 EET 2026</span><br />
<br />
<span>As mentioned in the hardware specs above, the Beelink S12 Pro supports Wake-on-LAN (WoL), which allows me to remotely power on the machines over the network. This is particularly useful since I don&#39;t need all three machines running 24/7, and I can save power by shutting them down when not needed and waking them up on demand.</span><br />
<br />
<span>The good news is that FreeBSD already has WoL support enabled by default on the Realtek network interface, as evidenced by the <span class='inlinecode'>WOL_MAGIC</span> option shown in the <span class='inlinecode'>ifconfig re0</span> output above.</span><br />
<br />
<h2 style='display: inline' id='setting-up-wol-on-the-laptop'>Setting up WoL on the laptop</h2><br />
<br />
<span>To wake the Beelinks from my Fedora laptop (<span class='inlinecode'>earth</span>), I installed the <span class='inlinecode'>wol</span> package:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[paul@earth]~% sudo dnf install -y wol
</pre>
<br />
<span>Next, I created a simple script (<span class='inlinecode'>~/bin/wol-f3s</span>) to wake and shutdown the machines:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><i><font color="silver">#!/bin/bash</font></i>
<i><font color="silver"># Wake-on-LAN and shutdown script for f3s cluster (f0, f1, f2)</font></i>

<i><font color="silver"># MAC addresses</font></i>
F0_MAC=<font color="#808080">"e8:ff:1e:d7:1c:ac"</font>  <i><font color="silver"># f0 (192.168.1.130)</font></i>
F1_MAC=<font color="#808080">"e8:ff:1e:d7:1e:44"</font>  <i><font color="silver"># f1 (192.168.1.131)</font></i>
F2_MAC=<font color="#808080">"e8:ff:1e:d7:1c:a0"</font>  <i><font color="silver"># f2 (192.168.1.132)</font></i>

<i><font color="silver"># IP addresses</font></i>
F0_IP=<font color="#808080">"192.168.1.130"</font>
F1_IP=<font color="#808080">"192.168.1.131"</font>
F2_IP=<font color="#808080">"192.168.1.132"</font>

<i><font color="silver"># SSH user</font></i>
SSH_USER=<font color="#808080">"paul"</font>

<i><font color="silver"># Broadcast address for your LAN</font></i>
BROADCAST=<font color="#808080">"192.168.1.255"</font>

wake() {
    <b><u><font color="#000000">local</font></u></b> name=$1
    <b><u><font color="#000000">local</font></u></b> mac=$2
    echo <font color="#808080">"Sending WoL packet to $name ($mac)..."</font>
    wol -i <font color="#808080">"$BROADCAST"</font> <font color="#808080">"$mac"</font>
}

shutdown_host() {
    <b><u><font color="#000000">local</font></u></b> name=$1
    <b><u><font color="#000000">local</font></u></b> ip=$2
    echo <font color="#808080">"Shutting down $name ($ip)..."</font>
    ssh -o ConnectTimeout=<font color="#000000">5</font> <font color="#808080">"$SSH_USER@$ip"</font> <font color="#808080">"doas poweroff"</font> <font color="#000000">2</font>&gt;/dev/null &amp;&amp; \
        echo <font color="#808080">"  ✓ Shutdown command sent to $name"</font> || \
        echo <font color="#808080">"  ✗ Failed to reach $name (already down?)"</font>
}

ACTION=<font color="#808080">"${1:-all}"</font>

<b><u><font color="#000000">case</font></u></b> <font color="#808080">"$ACTION"</font> <b><u><font color="#000000">in</font></u></b>
    f0) wake <font color="#808080">"f0"</font> <font color="#808080">"$F0_MAC"</font> ;;
    f1) wake <font color="#808080">"f1"</font> <font color="#808080">"$F1_MAC"</font> ;;
    f2) wake <font color="#808080">"f2"</font> <font color="#808080">"$F2_MAC"</font> ;;
    all|<font color="#808080">""</font>)
        wake <font color="#808080">"f0"</font> <font color="#808080">"$F0_MAC"</font>
        wake <font color="#808080">"f1"</font> <font color="#808080">"$F1_MAC"</font>
        wake <font color="#808080">"f2"</font> <font color="#808080">"$F2_MAC"</font>
        ;;
    shutdown|poweroff|down)
        shutdown_host <font color="#808080">"f0"</font> <font color="#808080">"$F0_IP"</font>
        shutdown_host <font color="#808080">"f1"</font> <font color="#808080">"$F1_IP"</font>
        shutdown_host <font color="#808080">"f2"</font> <font color="#808080">"$F2_IP"</font>
        echo <font color="#808080">""</font>
        echo <font color="#808080">"✓ Shutdown commands sent to all machines."</font>
        <b><u><font color="#000000">exit</font></u></b> <font color="#000000">0</font>
        ;;
    *)
        echo <font color="#808080">"Usage: $0 [f0|f1|f2|all|shutdown]"</font>
        <b><u><font color="#000000">exit</font></u></b> <font color="#000000">1</font>
        ;;
<b><u><font color="#000000">esac</font></u></b>

echo <font color="#808080">""</font>
echo <font color="#808080">"✓ WoL packets sent. Machines should boot in a few seconds."</font>
</pre>
<br />
<span>After making the script executable with <span class='inlinecode'>chmod +x ~/bin/wol-f3s</span>, I can now control the machines with simple commands:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[paul@earth]~% wol-f3s          <i><font color="silver"># Wake all three</font></i>
[paul@earth]~% wol-f3s f0       <i><font color="silver"># Wake only f0</font></i>
[paul@earth]~% wol-f3s shutdown <i><font color="silver"># Shutdown all three via SSH</font></i>
</pre>
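What `wol` actually sends is a "magic packet": 6 bytes of `0xFF` followed by the target MAC address repeated 16 times, 102 bytes in total, broadcast as UDP. A sketch that only builds such a packet into a file, to show the layout (it does not send anything):

```shell
#!/bin/sh
# Build a WoL magic packet by hand: 6 x 0xFF, then the MAC 16 times.
mac="e8:ff:1e:d7:1c:ac"   # f0's MAC address from the script above

emit_mac() {
    # Emit the MAC as raw bytes, converting each hex pair to an octal escape.
    for byte in $(printf '%s' "$mac" | tr ':' ' '); do
        printf "\\$(printf '%03o' "0x$byte")"
    done
}

{
    printf '\377\377\377\377\377\377'    # synchronization header: 6 x 0xFF
    i=0
    while [ "$i" -lt 16 ]; do            # then the MAC, repeated 16 times
        emit_mac
        i=$((i + 1))
    done
} > magic.bin

wc -c < magic.bin   # 102
```

The NIC firmware watches for this byte pattern while the machine is powered down and triggers power-on when it sees its own MAC.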
<br />
<h2 style='display: inline' id='testing-wol-and-shutdown'>Testing WoL and Shutdown</h2><br />
<br />
<span>To test the setup, I shut down all three machines using the script&#39;s shutdown function:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[paul@earth]~% wol-f3s shutdown
Shutting down f0 (<font color="#000000">192.168</font>.<font color="#000000">1.130</font>)...
  ✓ Shutdown <b><u><font color="#000000">command</font></u></b> sent to f0
Shutting down f1 (<font color="#000000">192.168</font>.<font color="#000000">1.131</font>)...
  ✓ Shutdown <b><u><font color="#000000">command</font></u></b> sent to f1
Shutting down f2 (<font color="#000000">192.168</font>.<font color="#000000">1.132</font>)...
  ✓ Shutdown <b><u><font color="#000000">command</font></u></b> sent to f2

✓ Shutdown commands sent to all machines.
</pre>
<br />
<span>After waiting for them to fully power down (about 1 minute), I sent the WoL magic packets:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre>[paul@earth]~% wol-f3s
Sending WoL packet to f0 (e8:ff:1e:d7:1c:ac)...
Waking up e8:ff:1e:d7:1c:ac...
Sending WoL packet to f1 (e8:ff:1e:d7:1e:<font color="#000000">44</font>)...
Waking up e8:ff:1e:d7:1e:<font color="#000000">44</font>...
Sending WoL packet to f2 (e8:ff:1e:d7:1c:a0)...
Waking up e8:ff:1e:d7:1c:a0...

✓ WoL packets sent. Machines should boot <b><u><font color="#000000">in</font></u></b> a few seconds.
</pre>
<br />
<span>Within 30-50 seconds, all three machines successfully booted up and became accessible via SSH!</span><br />
<br />
<span>This also works fine over WiFi, by the way — as long as the laptop and the Beelinks are on the same local network, the router bridges everything. And <span class='inlinecode'>wol-f3s shutdown</span> does the reverse (SSH + <span class='inlinecode'>doas poweroff</span>), so I can spin the whole cluster up and down pretty quickly.</span><br />
<br />
<h2 style='display: inline' id='bios-configuration'>BIOS Configuration</h2><br />
<br />
<span>For WoL to work reliably, make sure to check the BIOS settings on each Beelink:</span><br />
<br />
<ul>
<li>Enable "Wake on LAN" (usually under Power Management)</li>
<li>Disable "ERP Support" or "ErP Ready" (this can prevent WoL from working)</li>
<li>Enable "Power on by PCI-E" or "Wake on PCI-E"</li>
</ul><br />
<span>The exact menu names vary, but these settings are typically found in the Power Management or Advanced sections of the BIOS.</span><br />
<br />
<h1 style='display: inline' id='conclusion'>Conclusion</h1><br />
<br />
<span>Honestly, the Beelink S12 Pro with the N100 is kind of perfect for this — tiny, cheap, sips power, and runs both Linux and FreeBSD without drama. I&#39;m pretty happy with it.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-2/3beelinks.jpg'><img alt='Beelinks stacked' title='Beelinks stacked' src='./f3s-kubernetes-with-freebsd-part-2/3beelinks.jpg' /></a><br />
<br />
<span>To ease cable management, I need to get shorter Ethernet cables. I will place the tower on my shelf, where most of the cables will be hidden (together with a UPS, which will also be added to the setup).</span><br />
<br />
<span>Read the next post of this series:</span><br />
<br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation (You are currently reading this)</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</title>
        <link href="https://foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html" />
        <id>https://foo.zone/gemfeed/2024-11-17-f3s-kubernetes-with-freebsd-part-1.html</id>
        <updated>2024-11-16T23:20:14+02:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the first blog post about my f3s series for my self-hosting demands in my home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='f3s-kubernetes-with-freebsd---part-1-setting-the-stage'>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</h1><br />
<br />
<span class='quote'>Published at 2024-11-16T23:20:14+02:00</span><br />
<br />
<span>This is the first blog post about my f3s series for my self-hosting demands in my home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I will use on FreeBSD-based physical machines.</span><br />
<br />
<span>I will post a new entry every month or so (there are too many other side projects for more frequent updates—I bet you can understand).</span><br />
<br />
<span>These are all the posts so far:</span><br />
<br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png'><img alt='f3s logo' title='f3s logo' src='./f3s-kubernetes-with-freebsd-part-1/f3slogo.png' /></a><br />
<br />
<span class='quote'>ChatGPT-generated logo.</span><br />
<br />
<span>Let&#39;s begin...</span><br />
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#f3s-kubernetes-with-freebsd---part-1-setting-the-stage'>f3s: Kubernetes with FreeBSD - Part 1: Setting the stage</a></li>
<li>⇢ <a href='#why-this-setup'>Why this setup?</a></li>
<li>⇢ <a href='#the-infrastructure'>The infrastructure</a></li>
<li>⇢ ⇢ <a href='#physical-freebsd-nodes-and-linux-vms'>Physical FreeBSD nodes and Linux VMs</a></li>
<li>⇢ ⇢ <a href='#kubernetes-with-k3s-'>Kubernetes with k3s </a></li>
<li>⇢ ⇢ <a href='#ha-volumes-for-k3s-with-hastzfs-and-nfs'>HA volumes for k3s with HAST/ZFS and NFS</a></li>
<li>⇢ ⇢ <a href='#openbsdrelayd-to-the-rescue-for-external-connectivity'>OpenBSD/<span class='inlinecode'>relayd</span> to the rescue for external connectivity</a></li>
<li>⇢ <a href='#data-integrity'>Data integrity</a></li>
<li>⇢ ⇢ <a href='#periodic-backups'>Periodic backups</a></li>
<li>⇢ ⇢ <a href='#power-protection'>Power protection</a></li>
<li>⇢ <a href='#monitoring-keeping-an-eye-on-everything'>Monitoring: Keeping an eye on everything</a></li>
<li>⇢ ⇢ <a href='#prometheus-and-grafana'>Prometheus and Grafana</a></li>
<li>⇢ ⇢ <a href='#gogios-my-custom-alerting-system'>Gogios: My custom alerting system</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
</ul><br />
<h2 style='display: inline' id='why-this-setup'>Why this setup?</h2><br />
<br />
<span>My previous setup was great for learning Terraform and AWS, but it is too expensive. Costs are under control there, but only because I am shutting down all containers after use (so they are offline ninety percent of the time and still cost around $20 monthly). With the new setup, I could run all containers 24/7 at home, which would still be cheaper in terms of electricity consumption. I have a 400 MBit/s uplink (I could have more if I wanted, but it is more than plenty for my use case already).</span><br />
<br />
<a class='textlink' href='./2024-02-04-from-babylon5.buetow.org-to-.cloud.html'>From <span class='inlinecode'>babylon5.buetow.org</span> to <span class='inlinecode'>.cloud</span></a><br />
<br />
<span>Migrating off all my containers from AWS ECS means I need a reliable and scalable environment to host my workloads. I wanted something:</span><br />
<br />
<ul>
<li>To self-host all my open-source apps (Docker containers).</li>
<li>Fully under my control (goodbye cloud vendor lock-in).</li>
<li>Secure and redundant.</li>
<li>Cost-efficient (after the initial hardware investment).</li>
<li>Something I can poke around with and also pick up new skills.</li>
</ul><br />
<h2 style='display: inline' id='the-infrastructure'>The infrastructure</h2><br />
<br />
<span>This is still in progress, and I still need to acquire the hardware. But in this first part of the blog series, I will outline what I intend to do.</span><br />
<br />
<a href='./f3s-kubernetes-with-freebsd-part-1/diagram.png'><img alt='Diagram' title='Diagram' src='./f3s-kubernetes-with-freebsd-part-1/diagram.png' /></a><br />
<br />
<h3 style='display: inline' id='physical-freebsd-nodes-and-linux-vms'>Physical FreeBSD nodes and Linux VMs</h3><br />
<br />
<span>The setup starts with three physical FreeBSD nodes deployed into my home LAN. On these, I&#39;m going to run Rocky Linux virtual machines with bhyve. Why Linux VMs on FreeBSD and not Linux directly? I want to leverage the great ZFS integration in FreeBSD (among other features), and I have been using FreeBSD for a while in my home lab. With bhyve, there is a very performant hypervisor available that lets the Linux VMs run at close to native speed. (Another use case of mine might be running a Windows bhyve VM on one of the nodes, but that is out of scope for this blog series.)</span><br />
<br />
<a class='textlink' href='https://www.freebsd.org/'>https://www.freebsd.org/</a><br />
<a class='textlink' href='https://wiki.freebsd.org/bhyve'>https://wiki.freebsd.org/bhyve</a><br />
<br />
<span>I selected Rocky Linux because it comes with long-term support (I don&#39;t want to upgrade the VMs every 6 months). Rocky Linux 9 will reach its end of life in 2032, which is plenty of time! Of course, there will be minor upgrades, but nothing will significantly break my setup.</span><br />
<br />
<a class='textlink' href='https://rockylinux.org/'>https://rockylinux.org/</a><br />
<a class='textlink' href='https://wiki.rockylinux.org/rocky/version/'>https://wiki.rockylinux.org/rocky/version/</a><br />
<br />
<span>Furthermore, I am already using "RHEL-family" related distros at work and Fedora on my main personal laptop. Rocky Linux belongs to the same type of Linux distribution family, so I already feel at home here. I also used Rocky 9 before I switched to AWS ECS. Now, I am switching back in one sense or another ;-)</span><br />
<br />
<h3 style='display: inline' id='kubernetes-with-k3s-'>Kubernetes with k3s </h3><br />
<br />
<span>These Linux VMs form a three-node k3s Kubernetes cluster, where my containers will reside moving forward. The 3-node k3s cluster will be highly available (in <span class='inlinecode'>etcd</span> mode), and all apps will probably be deployed with Helm. Prometheus will also be running in k3s, collecting time-series metrics and handling monitoring. Additionally, a private Docker registry will be deployed into the k3s cluster, where I will store some of my self-created Docker images. k3s is the perfect distribution of Kubernetes for homelabbers due to its simplicity and the inclusion of the most useful features out of the box!</span><br />
<br />
<a class='textlink' href='https://k3s.io/'>https://k3s.io/</a><br />
<br />
<h3 style='display: inline' id='ha-volumes-for-k3s-with-hastzfs-and-nfs'>HA volumes for k3s with HAST/ZFS and NFS</h3><br />
<br />
<span>Persistent storage for the k3s cluster will be handled by highly available (HA) NFS shares backed by ZFS on the FreeBSD hosts. </span><br />
<br />
<span>On two of the three physical FreeBSD nodes, I will add a second SSD drive to each and dedicate it to a <span class='inlinecode'>zhast</span> ZFS pool. With HAST (FreeBSD&#39;s solution for highly available storage), this <span class='inlinecode'>pool</span> will be replicated at the byte level to a standby node.</span><br />
<br />
<span>A virtual IP (VIP) will point to the master node. When the master node goes down, the VIP will failover to the standby node, where the ZFS pool will be mounted. An NFS server will listen to both nodes. k3s will use the VIP to access the NFS shares.</span><br />
<br />
<a class='textlink' href='https://wiki.freebsd.org/HighlyAvailableStorage'>FreeBSD Wiki: Highly Available Storage</a><br />
<br />
<span>You can think of DRBD as the Linux equivalent of FreeBSD&#39;s HAST.</span><br />
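For a flavour of what the HAST configuration will look like, a minimal `/etc/hast.conf` sketch. Hostnames, the disk device, and the addresses are placeholders; the real configuration comes later in the series:

```
# /etc/hast.conf, identical on both storage nodes (sketch with
# placeholder hostnames, device, and addresses)
resource zhast {
        on f1 {
                local /dev/ada1
                remote 192.168.1.132
        }
        on f2 {
                local /dev/ada1
                remote 192.168.1.131
        }
}
```

After initializing the resource with `hastctl create zhast` and starting `hastd`, the ZFS pool would be created on the replicated `/dev/hast/zhast` device on whichever node is primary.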
<br />
<h3 style='display: inline' id='openbsdrelayd-to-the-rescue-for-external-connectivity'>OpenBSD/<span class='inlinecode'>relayd</span> to the rescue for external connectivity</h3><br />
<br />
<span>All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I&#39;ve got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let&#39;s Encrypt certificates. </span><br />
<br />
<span>All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).</span><br />
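On the Linux side, each tunnel is an ordinary point-to-point WireGuard link managed with `wg-quick`. A sketch of one of the six configs; keys, tunnel addresses, and the endpoint below are placeholders:

```
# /etc/wireguard/wg0.conf on one k3s VM: one of the six tunnels
# (keys, addresses, and endpoint are placeholders)
[Interface]
PrivateKey = <k3s-node-private-key>
Address = 172.16.0.2/32

[Peer]
# One of the two OpenBSD VMs
PublicKey = <openbsd-vm-public-key>
Endpoint = openbsd-vm.example.org:51820
AllowedIPs = 172.16.0.1/32
PersistentKeepalive = 25
```

The keepalive keeps the tunnel open through NAT so the OpenBSD relays can always reach the k3s node ports.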
<br />
<a class='textlink' href='https://en.wikipedia.org/wiki/WireGuard'>https://en.wikipedia.org/wiki/WireGuard</a><br />
<br />
<span>So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). The DNS will resolve to the master OpenBSD VM (see my KISS highly-available with OpenBSD blog post), and from there, the <span class='inlinecode'>relayd</span> process (with a Let&#39;s Encrypt certificate—see my Let&#39;s Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.</span><br />
<br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<br />
<span>The OpenBSD setup described here already exists and is ready to use. The only thing that does not yet exist is the configuration of <span class='inlinecode'>relayd</span> to forward requests to k3s through the WireGuard tunnel(s).</span><br />
<br />
<h2 style='display: inline' id='data-integrity'>Data integrity</h2><br />
<br />
<h3 style='display: inline' id='periodic-backups'>Periodic backups</h3><br />
<br />
<span>Let&#39;s face it, backups are non-negotiable. </span><br />
<br />
<span>On the HAST master node, incremental and encrypted ZFS snapshots are created daily and automatically backed up to AWS S3 Glacier Deep Archive via CRON. I have a bunch of scripts already available, which I currently use for a similar purpose on my FreeBSD Home NAS server (an old ThinkPad T440 with an external USB drive enclosure, which I will eventually retire when the HAST setup is ready). I will copy them and slightly modify them to fit the purpose.</span><br />
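<br />
<span>The core of those scripts boils down to a raw incremental send piped straight to S3. A minimal sketch, with placeholder dataset and bucket names (the real scripts also handle the initial full send and error cases):</span><br />
<br />
```shell
# Sketch: daily incremental backup of an encrypted dataset to Deep Archive.
# With -w (raw send), the data stays encrypted; no keys leave the box.
backup_zfs() {
    local dataset=$1 bucket=$2
    local today prev
    today=$(date +%F)
    # The newest existing snapshot becomes the incremental base.
    prev=$(zfs list -H -t snapshot -o name -s creation "$dataset" | tail -n 1)
    zfs snapshot "${dataset}@${today}"
    zfs send -w -i "$prev" "${dataset}@${today}" |
        aws s3 cp - "s3://${bucket}/${dataset##*/}-${today}.zfs" \
            --storage-class DEEP_ARCHIVE
}
```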
<br />
<span>There&#39;s also <span class='inlinecode'>zfstools</span> in the ports, which helps set up an automatic snapshot regime:</span><br />
<br />
<a class='textlink' href='https://www.freshports.org/sysutils/zfstools'>https://www.freshports.org/sysutils/zfstools</a><br />
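<br />
<span>zfstools is driven from cron; its documentation suggests entries along these lines (the intervals and retention counts are a matter of taste):</span><br />
<br />
```crontab
# /etc/crontab: snapshot intervals and how many snapshots to keep
15,30,45 * * * * root /usr/local/sbin/zfs-auto-snapshot frequent  4
0        * * * * root /usr/local/sbin/zfs-auto-snapshot hourly   24
7        0 * * * root /usr/local/sbin/zfs-auto-snapshot daily     7
14       0 * * 7 root /usr/local/sbin/zfs-auto-snapshot weekly    4
28       0 1 * * root /usr/local/sbin/zfs-auto-snapshot monthly  12
```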
<br />
<span>The backup scripts also perform some zpool scrubbing now and then. A scrub once in a while keeps the trouble away.</span><br />
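<br />
<span>On FreeBSD, periodic(8) can take care of the scrubbing schedule without any custom scripting, via /etc/periodic.conf:</span><br />
<br />
```conf
# Scrub pools from the daily periodic run...
daily_scrub_zfs_enable="YES"
# ...but only if the last scrub is older than this many days
daily_scrub_zfs_default_threshold="35"
```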
<br />
<h3 style='display: inline' id='power-protection'>Power protection</h3><br />
<br />
<span>Power outages are a regular occurrence in my area, so a UPS will keep the infrastructure running during short outages and protect the hardware. I&#39;m still deciding which model to get, and I do need one, as my previous NAS was simply an older laptop whose built-in battery already bridged power outages. There are plenty of options to choose from, though. My main criterion is that the UPS should be silent, as the whole setup will be installed in an upper shelf unit in my daughter&#39;s room. ;-)</span><br />
<br />
<h2 style='display: inline' id='monitoring-keeping-an-eye-on-everything'>Monitoring: Keeping an eye on everything</h2><br />
<br />
<span>I want to know when stuff breaks (ideally before it breaks), so monitoring is a big part of the plan.</span><br />
<br />
<h3 style='display: inline' id='prometheus-and-grafana'>Prometheus and Grafana</h3><br />
<br />
<span>Inside the k3s cluster, Prometheus will be deployed to handle metrics collection. It will be configured to scrape data from my Kubernetes workloads, nodes, and any services I monitor. Prometheus also integrates with Alertmanager to generate alerts based on predefined thresholds or conditions.</span><br />
<br />
<a class='textlink' href='https://prometheus.io'>https://prometheus.io</a><br />
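<br />
<span>As an illustration of such a predefined condition, a minimal Prometheus alerting rule could look like this (the job label and thresholds are placeholders for whatever the exporters end up being named):</span><br />
<br />
```yaml
groups:
  - name: node-alerts
    rules:
      - alert: NodeDown
        # Fires when a scrape target has been unreachable for 5 minutes
        expr: up{job="node-exporter"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Node {{ $labels.instance }} is down"
```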
<br />
<span>For visualization, Grafana will be deployed alongside Prometheus. I mostly just want dashboards for CPU, memory, and pod health — the usual stuff. Makes it way easier to figure out what&#39;s going wrong when something inevitably does.</span><br />
<br />
<a class='textlink' href='https://grafana.com'>https://grafana.com</a><br />
<br />
<h3 style='display: inline' id='gogios-my-custom-alerting-system'>Gogios: My custom alerting system</h3><br />
<br />
<span>Alerts generated by Prometheus are forwarded to Alertmanager, which I will configure to work with Gogios, a lightweight monitoring and alerting system I wrote myself. Gogios runs on one of my OpenBSD VMs. At regular intervals, Gogios scrapes the alerts generated in the k3s cluster and notifies me via Email.</span><br />
<br />
<a class='textlink' href='./2023-06-01-kiss-server-monitoring-with-gogios.html'>KISS server monitoring with Gogios</a><br />
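<br />
<span>Since Alertmanager exposes the currently firing alerts over a small HTTP API, the scraping side of this can stay simple. A hypothetical sketch of such a poll (host name and port are placeholders):</span><br />
<br />
```shell
# Pull active alerts from Alertmanager and print one line per alert.
# Requires curl and jq; the endpoint is reached through WireGuard.
poll_alerts() {
    local am=$1  # e.g. k3s-node1.wg:9093
    curl -sf "http://${am}/api/v2/alerts?active=true" |
        jq -r '.[] | "\(.labels.alertname): \(.annotations.summary)"'
}
```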
<br />
<span>Ironically, I implemented Gogios to avoid using more complex alerting systems like Prometheus, but here we are: the two integrate well now.</span><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>This setup may be just the beginning. Some ideas I&#39;m thinking about for the future:</span><br />
<br />
<ul>
<li>Adding more FreeBSD nodes (in different physical locations, maybe at my wider family&#39;s places? WireGuard would make it possible!) for better redundancy. (HA storage then might be trickier)</li>
<li>Deploying more Docker apps (data-intensive ones, like a picture gallery, my entire audiobook catalogue, or even a music server) to k3s.</li>
</ul><br />
<span>For now, though, I&#39;m focused on completing the migration from AWS ECS and getting all my Docker containers running smoothly in k3s.</span><br />
<br />
<span>Anyway, stay tuned — in part 2 I&#39;ll probably get into the hardware and OS setup.</span><br />
<br />
<span>Read the next post of this series:</span><br />
<br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<br />
<span>Other *BSD-related posts:</span><br />
<br />
<a class='textlink' href='./2026-04-02-f3s-kubernetes-with-freebsd-part-9.html'>2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD</a><br />
<a class='textlink' href='./2025-12-14-f3s-kubernetes-with-freebsd-part-8b.html'>2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo</a><br />
<a class='textlink' href='./2025-12-07-f3s-kubernetes-with-freebsd-part-8.html'>2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability</a><br />
<a class='textlink' href='./2025-10-02-f3s-kubernetes-with-freebsd-part-7.html'>2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments</a><br />
<a class='textlink' href='./2025-07-14-f3s-kubernetes-with-freebsd-part-6.html'>2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage</a><br />
<a class='textlink' href='./2025-05-11-f3s-kubernetes-with-freebsd-part-5.html'>2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network</a><br />
<a class='textlink' href='./2025-04-05-f3s-kubernetes-with-freebsd-part-4.html'>2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs</a><br />
<a class='textlink' href='./2025-02-01-f3s-kubernetes-with-freebsd-part-3.html'>2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts</a><br />
<a class='textlink' href='./2024-12-03-f3s-kubernetes-with-freebsd-part-2.html'>2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation</a><br />
<a class='textlink' href='./2024-11-17-f3s-kubernetes-with-freebsd-part-1.html'>2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage (You are currently reading this)</a><br />
<a class='textlink' href='./2024-04-01-KISS-high-availability-with-OpenBSD.html'>2024-04-01 KISS high-availability with OpenBSD</a><br />
<a class='textlink' href='./2024-01-13-one-reason-why-i-love-openbsd.html'>2024-01-13 One reason why I love OpenBSD</a><br />
<a class='textlink' href='./2022-10-30-installing-dtail-on-openbsd.html'>2022-10-30 Installing DTail on OpenBSD</a><br />
<a class='textlink' href='./2022-07-30-lets-encrypt-with-openbsd-and-rex.html'>2022-07-30 Let&#39;s Encrypt with OpenBSD and Rex</a><br />
<a class='textlink' href='./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.html'>2016-04-09 Jails and ZFS with Puppet on FreeBSD</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>'Staff Engineer' book notes</title>
        <link href="https://foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.html" />
        <id>https://foo.zone/gemfeed/2024-10-24-staff-engineer-book-notes.html</id>
        <updated>2024-10-24T20:57:44+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>These are my personal takeaways after reading 'Staff Engineer' by Will Larson. Note that the book contains much more knowledge and wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find them helpful too.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='staff-engineer-book-notes'>"Staff Engineer" book notes</h1><br />
<br />
<span class='quote'>Published at 2024-10-24T20:57:44+03:00</span><br />
<br />
<span>These are my personal takeaways after reading "Staff Engineer" by Will Larson. Note that the book contains much more knowledge and wisdom and that these notes only contain points I personally found worth writing down. This is mainly for my own use, but you might find them helpful too.</span><br />
<br />
<pre>
         ,..........   ..........,
     ,..,&#39;          &#39;.&#39;          &#39;,..,
    ,&#39; ,&#39;            :            &#39;, &#39;,
   ,&#39; ,&#39;             :             &#39;, &#39;,
  ,&#39; ,&#39;              :              &#39;, &#39;,
 ,&#39; ,&#39;............., : ,.............&#39;, &#39;,
,&#39;  &#39;............   &#39;.&#39;   ............&#39;  &#39;,
 &#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;;&#39;&#39;&#39;;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;&#39;
                    &#39;&#39;&#39;
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#staff-engineer-book-notes'>"Staff Engineer" book notes</a></li>
<li>⇢ <a href='#the-four-archetypes-of-a-staff-engineer'>The Four Archetypes of a Staff Engineer</a></li>
<li>⇢ <a href='#influence-and-impact-over-authority'>Influence and Impact over Authority</a></li>
<li>⇢ <a href='#breadth-and-depth-of-knowledge'>Breadth and Depth of Knowledge</a></li>
<li>⇢ <a href='#mentorship-and-sponsorship'>Mentorship and Sponsorship</a></li>
<li>⇢ <a href='#managing-up-and-across'>Managing Up and Across</a></li>
<li>⇢ <a href='#strategic-thinking'>Strategic Thinking</a></li>
<li>⇢ <a href='#emotional-intelligence'>Emotional Intelligence</a></li>
<li>⇢ <a href='#navigating-ambiguity'>Navigating Ambiguity</a></li>
<li>⇢ <a href='#visible-and-invisible-work'>Visible and Invisible Work</a></li>
<li>⇢ <a href='#scaling-yourself'>Scaling Yourself</a></li>
<li>⇢ <a href='#career-progression-and-title-inflation'>Career Progression and Title Inflation</a></li>
<li>⇢ <a href='#not-a-faster-senior-engineer'>Not a faster Senior Engineer</a></li>
<li>⇢ <a href='#the-balance'>The Balance</a></li>
<li>⇢ <a href='#more-things'>More things</a></li>
</ul><br />
<h2 style='display: inline' id='the-four-archetypes-of-a-staff-engineer'>The Four Archetypes of a Staff Engineer</h2><br />
<br />
<span>Larson defines four archetypes. You&#39;ll probably recognize yourself in one (or a mix):</span><br />
<br />
<ul>
<li>Tech Lead: You own the technical direction of a team. Architecture, quality, keeping everyone aligned.</li>
<li>Solver: You get thrown at the hard cross-team problems. Basically a firefighter for gnarly stuff.</li>
<li>Architect: Long-term technical vision. Standards, system design, things that need to last.</li>
<li>Right Hand: Trusted technical advisor to leadership. Strategy, org politics, the stuff nobody else wants to touch.</li>
</ul><br />
<h2 style='display: inline' id='influence-and-impact-over-authority'>Influence and Impact over Authority</h2><br />
<br />
<span>You won&#39;t have direct authority over most people or teams you work with. Influence is the actual tool here. You have to persuade, align, sometimes just nudge people in the right direction. No one reports to you, but you still need to drive outcomes.</span><br />
<br />
<h2 style='display: inline' id='breadth-and-depth-of-knowledge'>Breadth and Depth of Knowledge</h2><br />
<br />
<span>You need to know a bit about a lot of things (infra, security, product, etc.) but still be able to go deep in a few areas. The tricky part is keeping that breadth current without spreading yourself too thin.</span><br />
<br />
<h2 style='display: inline' id='mentorship-and-sponsorship'>Mentorship and Sponsorship</h2><br />
<br />
<span>Mentoring is obvious -- help people grow technically and career-wise. But sponsorship is the one that surprised me: actively advocating for people, creating opportunities for them, pushing them forward. It&#39;s not just answering questions, it&#39;s putting your reputation behind someone.</span><br />
<br />
<h2 style='display: inline' id='managing-up-and-across'>Managing Up and Across</h2><br />
<br />
<span>You have to manage up (set expectations with leadership, advocate for technical needs) and across (work with peer teams, build alignment). Basically a lot of communication and relationship building. Easy to underestimate this one.</span><br />
<br />
<h2 style='display: inline' id='strategic-thinking'>Strategic Thinking</h2><br />
<br />
<span>Senior engineers focus on execution. Staff engineers need to think about what happens months or years from now. That means sometimes pushing back on short-term pressures in favor of longer-term architectural decisions. Not always a popular move.</span><br />
<br />
<h2 style='display: inline' id='emotional-intelligence'>Emotional Intelligence</h2><br />
<br />
<span>The higher you go, the more soft skills matter. Building relationships, resolving conflicts, reading the room. I think this catches a lot of engineers off guard -- you can&#39;t just be the smartest person technically anymore.</span><br />
<br />
<h2 style='display: inline' id='navigating-ambiguity'>Navigating Ambiguity</h2><br />
<br />
<span>A lot of the problems you deal with are poorly defined. Nobody knows exactly what the problem is, let alone the solution. You have to be comfortable operating in that fog and still making progress.</span><br />
<br />
<h2 style='display: inline' id='visible-and-invisible-work'>Visible and Invisible Work</h2><br />
<br />
<span>A huge chunk of Staff Engineer work is invisible. Aligning teams, influencing decisions, resolving conflicts -- none of that shows up as commits. Larson says you need to get comfortable with that, which I think is genuinely hard for engineers who are used to shipping things.</span><br />
<br />
<h2 style='display: inline' id='scaling-yourself'>Scaling Yourself</h2><br />
<br />
<span>You can&#39;t do everything yourself anymore. Write things down, build repeatable processes, mentor others, automate what you can. The goal is to make teams more effective even when you&#39;re not in the room.</span><br />
<br />
<h2 style='display: inline' id='career-progression-and-title-inflation'>Career Progression and Title Inflation</h2><br />
<br />
<span>"Staff Engineer" means wildly different things at different companies. Titles don&#39;t always match actual responsibility or skill. Focus on the work and impact, not the title.</span><br />
<br />
<span>Some of the above is less about technical chops and more about the strategic and interpersonal side of things. Anyway, here are some more concrete takeaways:</span><br />
<br />
<h2 style='display: inline' id='not-a-faster-senior-engineer'>Not a faster Senior Engineer</h2><br />
<br />
<ul>
<li>A Staff engineer is more than just a faster Senior.</li>
<li>A Staff engineer is not just a Senior engineer who is a bit better.</li>
</ul><br />
<span>It&#39;s important to know what work or which role most energizes you. A Staff engineer is not merely a more senior engineer; a Staff engineer also fits one of the archetypes described above.</span><br />
<br />
<span>As a staff engineer, you are always expected to go beyond your comfort zone and learn new things.</span><br />
<br />
<span>Your job will sometimes feel like an SEM&#39;s and sometimes strangely similar to your previous senior roles.</span><br />
<br />
<span>A Staff engineer is, like a Manager, a leader. However, being a Manager is a specific job, whereas leadership can be part of any job, especially a Staff engineer&#39;s.</span><br />
<br />
<h2 style='display: inline' id='the-balance'>The Balance</h2><br />
<br />
<span>The more senior you become, the more responsibilities you will have to cope with, and in less time. Balance your speed of progress with your personal life: don&#39;t work late hours, and don&#39;t skip taking care of yourself.</span><br />
<br />
<span>Do fewer things, but do them better. Everything done well will accelerate the organization; everything else will drag it down. Quality over quantity.</span><br />
<br />
<span>Don&#39;t work on ten things and progress slowly on each; focus on one thing and finish it.</span><br />
<br />
<span>Spend only some of your time firefighting, and keep time for deep thinking. But don&#39;t spend all of your time deep thinking either; otherwise, you lose touch with reality.</span><br />
<br />
<span>Sabbatical: take at least six months; a shorter one won&#39;t be as restorative.</span><br />
<br />
<h2 style='display: inline' id='more-things'>More things</h2><br />
<br />
<ul>
<li>Provide simple but widely used tools. Complex, powerful tools will attract a few power users, but everyone else won&#39;t use them at all.</li>
<li>In meetings, when someone is inactive, try to pull them in. Pull in at most one person at a time, and don&#39;t open the discussion to multiple people at once.</li>
<li>Get used to writing things down and repeating yourself. It will scale you much further.</li>
<li>Title inflation: skills correspond to the work performed, but titles often don&#39;t.</li>
</ul><br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other book notes of mine are:</span><br />
<br />
<a class='textlink' href='./2025-11-02-the-courage-to-be-disliked-book-notes.html'>2025-11-02 &#39;The Courage To Be Disliked&#39; book notes</a><br />
<a class='textlink' href='./2025-06-07-a-monks-guide-to-happiness-book-notes.html'>2025-06-07 &#39;A Monk&#39;s Guide to Happiness&#39; book notes</a><br />
<a class='textlink' href='./2025-04-19-when-book-notes.html'>2025-04-19 &#39;When: The Scientific Secrets of Perfect Timing&#39; book notes</a><br />
<a class='textlink' href='./2024-10-24-staff-engineer-book-notes.html'>2024-10-24 &#39;Staff Engineer&#39; book notes (You are currently reading this)</a><br />
<a class='textlink' href='./2024-07-07-the-stoic-challenge-book-notes.html'>2024-07-07 &#39;The Stoic Challenge&#39; book notes</a><br />
<a class='textlink' href='./2024-05-01-slow-productivity-book-notes.html'>2024-05-01 &#39;Slow Productivity&#39; book notes</a><br />
<a class='textlink' href='./2023-11-11-mind-management-book-notes.html'>2023-11-11 &#39;Mind Management&#39; book notes</a><br />
<a class='textlink' href='./2023-07-17-career-guide-and-soft-skills-book-notes.html'>2023-07-17 &#39;Software Developers Career Guide and Soft Skills&#39; book notes</a><br />
<a class='textlink' href='./2023-05-06-the-obstacle-is-the-way-book-notes.html'>2023-05-06 &#39;The Obstacle is the Way&#39; book notes</a><br />
<a class='textlink' href='./2023-04-01-never-split-the-difference-book-notes.html'>2023-04-01 &#39;Never split the difference&#39; book notes</a><br />
<a class='textlink' href='./2023-03-16-the-pragmatic-programmer-book-notes.html'>2023-03-16 &#39;The Pragmatic Programmer&#39; book notes</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Gemtexter 3.0.0 - Let's Gemtext again⁴</title>
        <link href="https://foo.zone/gemfeed/2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.html" />
        <id>https://foo.zone/gemfeed/2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.html</id>
        <updated>2024-10-01T21:46:26+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>I proudly announce that I've released Gemtexter version `3.0.0`. What is Gemtexter? It's my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='gemtexter-300---let-s-gemtext-again'>Gemtexter 3.0.0 - Let&#39;s Gemtext again⁴</h1><br />
<br />
<span class='quote'>Published at 2024-10-01T21:46:26+03:00</span><br />
<br />
<span>I proudly announce that I&#39;ve released Gemtexter version <span class='inlinecode'>3.0.0</span>. What is Gemtexter? It&#39;s my minimalist static site generator for Gemini Gemtext, HTML and Markdown, written in GNU Bash.</span><br />
<br />
<a class='textlink' href='https://codeberg.org/snonux/gemtexter'>https://codeberg.org/snonux/gemtexter</a><br />
<br />
<pre>
-=[ typewriters ]=-  1/98
                                      .-------.
       .-------.                     _|~~ ~~  |_
      _|~~ ~~  |_       .-------.  =(_|_______|_)
    =(_|_______|_)=    _|~~ ~~  |_   |:::::::::|    .-------.
      |:::::::::|    =(_|_______|_)  |:::::::[]|   _|~~ ~~  |_
      |:::::::[]|      |:::::::::|   |o=======.| =(_|_______|_)
      |o=======.|      |:::::::[]|   `"""""""""`   |:::::::::|
 jgs  `"""""""""`      |o=======.|                 |:::::::[]|
  mod. by Paul Buetow  `"""""""""`                 |o=======.|
                                                   `"""""""""`
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#gemtexter-300---let-s-gemtext-again'>Gemtexter 3.0.0 - Let&#39;s Gemtext again⁴</a></li>
<li>⇢ <a href='#why-bash'>Why Bash?</a></li>
<li>⇢ <a href='#html-exact-variant-is-the-only-variant'>HTML exact variant is the only variant</a></li>
<li>⇢ <a href='#table-of-contents-auto-generation'>Table of Contents auto-generation</a></li>
<li>⇢ <a href='#configurable-themes'>Configurable themes</a></li>
<li>⇢ <a href='#no-use-of-webfonts-by-default'>No use of webfonts by default</a></li>
<li>⇢ <a href='#more'>More</a></li>
</ul><br />
<h2 style='display: inline' id='why-bash'>Why Bash?</h2><br />
<br />
<span>Honestly, this project is too complex for a Bash script. Writing it in Bash was an experiment to see how maintainable a "larger" Bash script can be. It&#39;s still pretty maintainable, and it lets me try out new Bash tricks now and then!</span><br />
<br />
<span>Let&#39;s list what&#39;s new!</span><br />
<br />
<h2 style='display: inline' id='html-exact-variant-is-the-only-variant'>HTML exact variant is the only variant</h2><br />
<br />
<span>The last version of Gemtexter introduced the HTML exact variant, which wasn&#39;t enabled by default. This version of Gemtexter removes the previous (inexact) variant and makes the exact variant the default. This is a breaking change, which is why there is a major version bump of Gemtexter. Here is a reminder of what the exact variant was:</span><br />
<br />
<span class='quote'>Gemtexter is there to convert your Gemini Capsule into other formats, such as HTML and Markdown. An HTML exact variant can now be enabled in the <span class='inlinecode'>gemtexter.conf</span> by adding the line <span class='inlinecode'>declare -rx HTML_VARIANT=exact</span>. The HTML/CSS output changed to reflect a more exact Gemtext appearance and to respect the same spacing as you would see in the Geminispace. </span><br />
<br />
<h2 style='display: inline' id='table-of-contents-auto-generation'>Table of Contents auto-generation</h2><br />
<br />
<span>Just add...</span><br />
<br />
<pre>
 &lt;&lt; template::inline::toc
</pre>
<br />
<span>...into a Gemtexter template file, and Gemtexter will automatically generate a table of contents for the page based on its headings (see this page&#39;s ToC, for example). In the HTML and Markdown output, the ToC entries link to the relevant sections. Gemtext does not support in-page anchor links, so there the ToC is simply displayed as a bullet list.</span><br />
<br />
<h2 style='display: inline' id='configurable-themes'>Configurable themes</h2><br />
<br />
<span>It was always possible to customize the style of a Gemtexter&#39;s resulting HTML page, but all the config options were scattered across multiple files. Now, the CSS style, web fonts, etc., are all configurable via themes.</span><br />
<br />
<span>Simply configure <span class='inlinecode'>HTML_THEME_DIR</span> in the <span class='inlinecode'>gemtexter.conf</span> file to the corresponding directory. For example:</span><br />
<br />
<!-- Generator: GNU source-highlight 3.1.9
by Lorenzo Bettini
http://www.lorenzobettini.it
http://www.gnu.org/software/src-highlite -->
<pre><b><u><font color="#000000">declare</font></u></b> -xr HTML_THEME_DIR=./extras/html/themes/simple
</pre>
<br />
<span>To customize the theme or create your own, simply copy the theme directory and modify it as needed. This makes it also much easier to switch between layouts.</span><br />
<br />
<h2 style='display: inline' id='no-use-of-webfonts-by-default'>No use of webfonts by default</h2><br />
<br />
<span>The default theme is now "back to the basics" and does not utilize any web fonts. The previous themes are still part of the release and can be easily configured. These are currently the <span class='inlinecode'>future</span> and <span class='inlinecode'>business</span> themes. You can check them out from the themes directory.</span><br />
<br />
<h2 style='display: inline' id='more'>More</h2><br />
<br />
<span>Additionally, there were a couple of bug fixes, refactorings, and overall improvements to the documentation.</span><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<span>Other related posts are:</span><br />
<br />
<a class='textlink' href='./2024-10-02-gemtexter-3.0.0-lets-gemtext-again-4.html'>2024-10-02 Gemtexter 3.0.0 - Let&#39;s Gemtext again⁴ (You are currently reading this)</a><br />
<a class='textlink' href='./2023-07-21-gemtexter-2.1.0-lets-gemtext-again-3.html'>2023-07-21 Gemtexter 2.1.0 - Let&#39;s Gemtext again³</a><br />
<a class='textlink' href='./2023-03-25-gemtexter-2.0.0-lets-gemtext-again-2.html'>2023-03-25 Gemtexter 2.0.0 - Let&#39;s Gemtext again²</a><br />
<a class='textlink' href='./2022-08-27-gemtexter-1.1.0-lets-gemtext-again.html'>2022-08-27 Gemtexter 1.1.0 - Let&#39;s Gemtext again</a><br />
<a class='textlink' href='./2021-06-05-gemtexter-one-bash-script-to-rule-it-all.html'>2021-06-05 Gemtexter - One Bash script to rule it all</a><br />
<a class='textlink' href='./2021-04-24-welcome-to-the-geminispace.html'>2021-04-24 Welcome to the Geminispace</a><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers</title>
        <link href="https://foo.zone/gemfeed/2024-09-07-site-reliability-engineering-part-4.html" />
        <id>https://foo.zone/gemfeed/2024-09-07-site-reliability-engineering-part-4.html</id>
        <updated>2024-09-07T16:27:58+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>Welcome to Part 4 of my Site Reliability Engineering (SRE) series. I'm currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.</summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='site-reliability-engineering---part-4-onboarding-for-on-call-engineers'>Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers</h1><br />
<br />
<span class='quote'>Published at 2024-09-07T16:27:58+03:00</span><br />
<br />
<span>Welcome to Part 4 of my Site Reliability Engineering (SRE) series. I&#39;m currently working as a Site Reliability Engineer, and I’m here to share what SRE is all about in this blog series.</span><br />
<br />
<a class='textlink' href='./2023-08-18-site-reliability-engineering-part-1.html'>2023-08-18 Site Reliability Engineering - Part 1: SRE and Organizational Culture</a><br />
<a class='textlink' href='./2023-11-19-site-reliability-engineering-part-2.html'>2023-11-19 Site Reliability Engineering - Part 2: Operational Balance</a><br />
<a class='textlink' href='./2024-01-09-site-reliability-engineering-part-3.html'>2024-01-09 Site Reliability Engineering - Part 3: On-Call Culture</a><br />
<a class='textlink' href='./2024-09-07-site-reliability-engineering-part-4.html'>2024-09-07 Site Reliability Engineering - Part 4: Onboarding for On-Call Engineers (You are currently reading this)</a><br />
<a class='textlink' href='./2026-03-01-site-reliability-engineering-part-5.html'>2026-03-01 Site Reliability Engineering - Part 5: System Design, Incidents, and Learning</a><br />
<br />
<pre>
       __..._   _...__
  _..-"      `Y`      "-._
  \ Once upon |           /
  \\  a time..|          //
  \\\         |         ///
   \\\ _..---.|.---.._ ///
jgs \\`_..---.Y.---.._`//	
</pre>
<br />
<span>This time, I want to share some tips on how to onboard software engineers, QA engineers, and Site Reliability Engineers (SREs) to the primary on-call rotation. Traditionally, onboarding might take half a year (depending on the complexity of the infrastructure), but with a bit of strategy and structured sessions, we&#39;ve managed to reduce it to just six weeks per person. Let&#39;s dive in!</span><br />
<br />
<h2 style='display: inline' id='setting-the-scene-tier-1-on-call-rotation'>Setting the Scene: Tier-1 On-Call Rotation</h2><br />
<br />
<span>First things first, let&#39;s talk about Tier-1. This is where the magic begins. Tier-1 covers over 80% of the common on-call cases and is the perfect breeding ground for new on-call engineers to get their feet wet. It&#39;s designed to be a manageable training ground.</span><br />
<br />
<h3 style='display: inline' id='why-tier-1'>Why Tier-1?</h3><br />
<br />
<ul>
<li>Easy to Understand: Every on-call engineer should be familiar with Tier-1 tasks. </li>
<li>Training Ground: This is where engineers start their on-call career. It&#39;s purposefully kept simple so that it&#39;s not overwhelming right off the bat.</li>
<li>Runbook/recipe driven: Every alert is attached to a comprehensive runbook, making it easy for every engineer to follow.</li>
</ul><br />
<h2 style='display: inline' id='onboarding-process-from-6-months-to-6-weeks'>Onboarding Process: From 6 Months to 6 Weeks</h2><br />
<br />
<span>So how did we cut down the onboarding time so drastically? Here’s the breakdown of our process:</span><br />
<br />
<span>Knowledge Transfer (KT) Sessions: We kicked things off with more than 10 KT sessions, complete with video recordings. These sessions are comprehensive and cover everything from the basics to some more advanced topics. The recorded sessions mean that new engineers can revisit them anytime they need a refresher.</span><br />
<br />
<span>Shadowing Sessions: Each new engineer undergoes two on-call week shadowing sessions. This hands-on experience is invaluable. They get to see real-time incident handling and resolution, gaining practical knowledge that&#39;s hard to get from just reading docs.</span><br />
<br />
<span>Comprehensive Runbooks: We created 64 runbooks (by the time of writing, probably more than 100) that are composable like Lego bricks. Each runbook covers a specific scenario and guides the engineer step-by-step to resolution. Pairing these with monitoring alerts linked directly to Confluence docs, and from there to the respective runbooks, ensures every alert can be navigated with ease (well, there are always exceptions to the rule...).</span><br />
<br />
<span>Self-Sufficiency &amp; Confidence Building: With all these resources at their fingertips, our on-call engineers become self-sufficient for most of the common issues they&#39;ll face (new starters can now handle around 80% of the most common issues within six weeks of joining the company). This boosts their confidence and ensures they can handle Tier-1 incidents independently.</span><br />
<br />
<span>Documentation and Feedback Loop: Continuous improvement is key. We regularly update our documentation based on feedback from the engineers. This makes our process even more robust and user-friendly.</span><br />
<br />
<h2 style='display: inline' id='it-s-all-about-the-tiers'>It&#39;s All About the Tiers</h2><br />
<br />
<span>Let’s briefly touch on the Tier levels:</span><br />
<br />
<ul>
<li>Tier 1: Easy and foundational tasks, perfect for getting new engineers started. This covers around 80% of all on-call cases we face and is the tier we trained for.</li>
<li>Tier 2: Slightly more complex, requiring more background knowledge. We trained on some of the topics but not all.</li>
<li>Tier 3: Requires a good understanding of the platform/architecture. Likely needs KT sessions with domain experts.</li>
<li>Tier DE (Domain Expert): The heavy hitters. Domain experts are required for these tasks. </li>
</ul><br />
<h3 style='display: inline' id='growing-into-higher-tiers'>Growing into Higher Tiers</h3><br />
<br />
<span>From Tier-1, engineers naturally grow into Tier-2 and beyond. The structured training and gradual increase in complexity ensure a smooth transition as they gain experience and confidence. The key is that engineers stay curious and engaged during their on-call shifts, so that they keep learning.</span><br />
<br />
<h2 style='display: inline' id='keeping-runbooks-up-to-date'>Keeping Runbooks Up to Date</h2><br />
<br />
<span>It is important that runbooks are not a "project to be finished"; they have to be maintained and updated over time. Sections may change, new runbooks need to be added, and old ones can be deleted. So the acceptance criteria of an on-call shift include not just reacting to alerts and incidents, but also reviewing and updating the current runbooks.</span><br />
<br />
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>By structuring the onboarding process with KT sessions, shadowing, comprehensive runbooks, and a feedback loop, we&#39;ve been able to fast-track the process from six months to just six weeks. This not only prepares our engineers for the on-call rotation quicker but also ensures they&#39;re confident and capable when handling incidents.</span><br />
<br />
<span>If you&#39;re looking to optimize your on-call onboarding process, these strategies could be your ticket to a more efficient and effective transition. Happy on-calling!</span><br />
<br />
<span>Continue with the fifth part of this series:</span><br />
<br />
<a class='textlink' href='./2026-03-01-site-reliability-engineering-part-5.html'>2026-03-01 Site Reliability Engineering - Part 5: System Design, Incidents, and Learning</a><br />
<br />
<span>E-Mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
    <entry>
        <title>Projects I financially support</title>
        <link href="https://foo.zone/gemfeed/2024-09-07-projects-i-support.html" />
        <id>https://foo.zone/gemfeed/2024-09-07-projects-i-support.html</id>
        <updated>2024-09-07T16:04:19+03:00</updated>
        <author>
            <name>Paul Buetow aka snonux</name>
            <email>paul@dev.buetow.org</email>
        </author>
        <summary>This is the list of projects and initiatives I support/sponsor. </summary>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1 style='display: inline' id='projects-i-financially-support'>Projects I financially support</h1><br />
<br />
<span class='quote'>Published at 2024-09-07T16:04:19+03:00</span><br />
<br />
<span>This is the list of projects and initiatives I support/sponsor. </span><br />
<br />
<pre>
||====================================================================||
||//$\\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\//$\\||
||(100)==================| FEDERAL SPONSOR NOTE |================(100)||
||\\$//        ~         &#39;------========--------&#39;                \\$//||
||&lt;&lt; /        /$\              // ____ \\                         \ &gt;&gt;||
||&gt;&gt;|  12    //L\\            // ///..) \\         L38036133B   12 |&lt;&lt;||
||&lt;&lt;|        \\ //           || &lt;||  &gt;\  ||                        |&gt;&gt;||
||&gt;&gt;|         \$/            ||  $$ --/  ||        One Hundred     |&lt;&lt;||
||&lt;&lt;|      L38036133B        *\\  |\_/  //* series                 |&gt;&gt;||
||&gt;&gt;|  12                     *\\/___\_//*   1989                  |&lt;&lt;||
||&lt;&lt;\      Open Source   ______/Franklin\________     Supporting   /&gt;&gt;||
||//$\                 ~| SPONSORING AND FUNDING |~               /$\\||
||(100)===================  AWESOME OPEN SOURCE =================(100)||
||\\$//\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\\$//||
||====================================================================||
 
</pre>
<br />
<h2 style='display: inline' id='table-of-contents'>Table of Contents</h2><br />
<br />
<ul>
<li><a href='#projects-i-financially-support'>Projects I financially support</a></li>
<li>⇢ <a href='#motivation'>Motivation</a></li>
<li>⇢ <a href='#osnews'>OSnews</a></li>
<li>⇢ <a href='#cup-o--go-podcast'>Cup o&#39; Go Podcast</a></li>
<li>⇢ <a href='#codeberg'>Codeberg</a></li>
<li>⇢ <a href='#grapheneos'>GrapheneOS</a></li>
<li>⇢ <a href='#ankidroid'>AnkiDroid</a></li>
<li>⇢ <a href='#openbsd-through-openbsdamsterdam'>OpenBSD through OpenBSD.Amsterdam</a></li>
<li>⇢ <a href='#protonmail'>ProtonMail</a></li>
<li>⇢ <a href='#librofm'><span class='inlinecode'>Libro.fm</span></a></li>
</ul><br />
<h2 style='display: inline' id='motivation'>Motivation</h2><br />
<br />
<span>Sponsoring free and open-source projects, even for personal use, is important to ensure the sustainability, security, and continuous improvement of the software. It supports developers who often maintain these projects without compensation, helping them provide updates, new features, and security patches. By contributing, you recognize their efforts, foster a culture of innovation, and benefit from perks like early access or support, all while ensuring the long-term viability of the tools you rely on.</span><br />
<br />
<span>Although I am not putting a lot of money into my sponsoring efforts, it still helps the open-source maintainers: the more small sponsors there are, the higher the total sum.</span><br />
<br />
<h2 style='display: inline' id='osnews'>OSnews</h2><br />
<br />
<span>I am a silver Patreon member of OSnews. I have been following this site since my student years. It&#39;s always been a great source of independent and slightly alternative IT news.</span><br />
<br />
<a class='textlink' href='https://osnews.com'>https://osnews.com</a><br />
<br />
<h2 style='display: inline' id='cup-o--go-podcast'>Cup o&#39; Go Podcast</h2><br />
<br />
<span>I am a Patreon supporter of the Cup o&#39; Go Podcast. The podcast helps me stay updated with the Go community in around 15 minutes per week. I am not a full-time software developer, but my long-term ambition is to get better at Go every week by working on personal projects and tools for work.</span><br />
<br />
<a class='textlink' href='https://cupogo.dev'>https://cupogo.dev</a><br />
<br />
<h2 style='display: inline' id='codeberg'>Codeberg</h2><br />
<br />
<span>Codeberg e.V. is a nonprofit organization that provides online resources for software development and collaboration. I am a user and a supporting member, paying an annual membership of €24. I didn&#39;t have to pay that membership fee, as Codeberg offers all the services I use for free.</span><br />
<br />
<a class='textlink' href='https://codeberg.org'>https://codeberg.org</a><br />
<a class='textlink' href='https://codeberg.org/snonux'>https://codeberg.org/snonux - My Codeberg page</a><br />
<br />
<h2 style='display: inline' id='grapheneos'>GrapheneOS</h2><br />
<br />
<span>GrapheneOS is an open-source project that improves Android&#39;s privacy and security with sandboxing, exploit mitigations, and a permission model. It does not include Google apps or services but offers a sandboxed Google Play compatibility layer and its own apps and services. </span><br />
<br />
<span>I&#39;ve made a one-off €100 donation because I really like this project, and I run GrapheneOS on my personal phone as my main daily driver.</span><br />
<br />
<a class='textlink' href='https://grapheneos.org/'>https://grapheneos.org/</a><br />
<a class='textlink' href='https://foo.zone/gemfeed/2023-01-23-why-grapheneos-rox.html'>Why GrapheneOS Rox</a><br />
<br />
<h2 style='display: inline' id='ankidroid'>AnkiDroid</h2><br />
<br />
<span>AnkiDroid is an app that lets you learn flashcards efficiently with spaced repetition. It is compatible with Anki software and supports various flashcard content, syncing, statistics, and more.</span><br />
<br />
<span>I&#39;ve been learning vocabulary with this free app, and it is, in my opinion, the best flashcard app I know. I&#39;ve made a $20 one-off donation to this project.</span><br />
<br />
<a class='textlink' href='https://opencollective.com/ankidroid'>https://opencollective.com/ankidroid</a><br />
<br />
<h2 style='display: inline' id='openbsd-through-openbsdamsterdam'>OpenBSD through OpenBSD.Amsterdam</h2><br />
<br />
<span>The OpenBSD project produces a free, multi-platform 4.4BSD-based UNIX-like operating system. Its efforts emphasize portability, standardization, correctness, proactive security, and integrated cryptography. As an example of OpenBSD&#39;s impact, the popular OpenSSH software comes from OpenBSD. OpenBSD is freely available from its download sites.</span><br />
<br />
<span>I indirectly support the OpenBSD project through a VM I rent at OpenBSD Amsterdam. They run dedicated servers with vmm(4)/vmd(8) to host the VMs, and they donate €10 to the OpenBSD Foundation for every new VM and €15 for every VM renewal.</span><br />
<br />
<a class='textlink' href='https://www.OpenBSD.org'>https://www.OpenBSD.org</a><br />
<a class='textlink' href='https://OpenBSD.Amsterdam'>https://OpenBSD.Amsterdam</a><br />
<br />
<h2 style='display: inline' id='protonmail'>ProtonMail</h2><br />
<br />
<span>I am not directly funding this project, but I am a very happy paying customer. I am listing ProtonMail here because it is a non-profit organization, and because I want to emphasize the importance of considering alternatives to big tech if you don&#39;t want to run your own mail infrastructure.</span><br />
<br />
<a class='textlink' href='https://proton.me/'>https://proton.me/</a><br />
<br />
<h2 style='display: inline' id='librofm'><span class='inlinecode'>Libro.fm</span></h2><br />
<br />
<span>This is the alternative to Audible if you are into audiobooks (like I am). With every book purchase and every month of membership, I also support a local bookstore of my choice. Their catalog is not as large as Audible&#39;s, but it&#39;s still pretty decent.</span><br />
<br />
<span>Libro.fm began as a conversation among friends at Third Place Books, a local bookstore in Seattle, Washington, about the growing popularity of audiobooks and the lack of a way for readers to purchase them from independent bookstores. Flash forward, and Libro.fm was founded in 2014.</span><br />
<br />
<a class='textlink' href='https://libro.fm'>https://libro.fm</a><br />
<br />
<span>E-mail your comments to <span class='inlinecode'>paul@nospam.buetow.org</span> :-)</span><br />
<br />
<a class='textlink' href='../'>Back to the main site</a><br />
            </div>
        </content>
    </entry>
</feed>
