Keith Maxwell’s Blog

How to install Times New Roman in 2026

Posted on Monday 9 February 2026.

This post feels like a throwback to the late 1990s. I'm publishing it because the only straightforward instructions that I can find are for Debian / Ubuntu.

I wanted to check the appearance of this blog while working on the CSS. I was looking at layout shifts and the PageSpeed Insights score, and I wanted to see the pages with a standard, default font. Google Chrome on this version of Fedora Linux depends on the Liberation fonts. I understand that Times New Roman is both very common and the default serif font for most browsers, so I wanted to check the appearance of this blog with Times New Roman.

In the late 1990s, Times New Roman, along with Arial, Courier New, Webdings (!) and other fonts, was published under a license on https://microsoft.com. The license permits redistributing the fonts in their original form, so the original .exe files are now mirrored on SourceForge. Debian publishes a package, msttcorefonts, to install the fonts from these .exe files as .ttf files to the local file system. The rest of this post demonstrates obtaining the .ttf files this way using an Incus container; the same approach works for Arial and the other fonts.

Commands to create an Incus container, prompt to accept the EULA, install the Microsoft fonts, copy them to the host and then clean up the container:

incus launch images:debian/13 c1 \
&& incus exec c1 -- sh -c "apt update && apt install --yes msttcorefonts" \
&& incus file pull --recursive c1/usr/share/fonts/truetype/msttcorefonts . \
&& incus stop c1 \
&& incus delete c1

Command to install Times New Roman for the current user:

mkdir --parents ~/.local/share/fonts \
&& cp msttcorefonts/Times_New_Roman.ttf ~/.local/share/fonts
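
Fontconfig usually picks up files added to ~/.local/share/fonts automatically; if it doesn't, refreshing the cache is harmless.

Command to refresh the fontconfig cache for the current user:

fc-cache ~/.local/share/fonts
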
Check that fontconfig can find Times New Roman

Command to list the fonts available to fontconfig filtered for Times New Roman:

fc-list | grep -e Times.New.Roman

Expected output:

/home/maxwell-k/.local/share/fonts/Times_New_Roman.ttf: Times New Roman:style=Regular,Normal,obyčejné,Standard,Κανονικά,Normaali,Normál,Normale,Standaard,Normalny,Обычный,Normálne,Navadno,thường,Arrunta

Is uv the best tool to answer 'what dependencies does X bring in'?

Posted on Monday 2 February 2026.

Sometimes; other times I prefer johnnydep or pipdeptree. This post discusses the advantages of each from my perspective.

I recently started using pymupdf. First impressions are that it is a very capable AGPL-3.0 Python library for analysing and extracting information from PDFs. Before committing to pymupdf I wanted to understand 'what dependencies does pymupdf bring in'?

Command to display the pymupdf dependencies:

uv tool run johnnydep --verbose=0 pymupdf

Output:

name     summary
-------  ------------------------------------------------------------------------------------------------------------------------
pymupdf  A high performance Python library for data extraction, analysis, conversion & manipulation of PDF (and other) documents.

Brilliant. No other dependencies.

This is an example of a common question I ask when I work with Python: 'what dependencies does X bring in'? Often X is an open source package from PyPI; sometimes it's a proprietary package from elsewhere.

The answer for pymupdf is very simple: none. Another package that I looked at recently, gkeepapi, gives a less simple answer. That's a better illustration for the rest of this discussion.

What dependencies does gkeepapi bring in?

Command to display the gkeepapi dependencies:

uv tool run johnnydep --verbose=0 gkeepapi

Output:

name                                  summary
------------------------------------  -------------------------------------------------------------------------------------------------------
gkeepapi                              An unofficial Google Keep API client
├── future>=0.16.0                    Clean single-source support for Python 3 and 2
└── gpsoauth>=1.1.0                   A python client library for Google Play Services OAuth.
    ├── pycryptodomex>=3.0            Cryptographic library for Python
    ├── requests>=2.0.0               Python HTTP for Humans.
    │   ├── certifi>=2017.4.17        Python package for providing Mozilla's CA Bundle.
    │   ├── charset_normalizer<4,>=2  The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.
    │   ├── idna<4,>=2.5              Internationalized Domain Names in Applications (IDNA)
    │   └── urllib3<3,>=1.21.1        HTTP library with thread-safe connection pooling, file post, and more.
    └── urllib3>=1.26.0               HTTP library with thread-safe connection pooling, file post, and more.

A critical reader might point out that the examples above use two tools: (1) uv and (2) johnnydep. Why did I choose to use two tools?

I've used johnnydep for a long time and I love the simplicity of its interface.

The default tool that I reach for when working with Python packages is uv. Initially I adopted uv because of its speed; it also has a very active and encouraging maintainer team.

For investigating dependencies, uv has a tree subcommand that requires either a project configured with a pyproject.toml file or a script with inline dependency metadata.

Let's start with the latter as example.py:

#!/usr/bin/env -S uv run --script
"""Simple example."""

# /// script
# requires-python = ">=3.13"
# dependencies = ["gkeepapi"]
# ///

import gkeepapi

if __name__ == "__main__":
    print(gkeepapi.__version__)

Command to demonstrate uv tree:

uv tree --script example.py

Output:

Resolved 9 packages in 7ms
gkeepapi v0.17.1
├── future v1.0.0
└── gpsoauth v2.0.0
    ├── pycryptodomex v3.23.0
    ├── requests v2.32.5
    │   ├── certifi v2026.1.4
    │   ├── charset-normalizer v3.4.4
    │   ├── idna v3.11
    │   └── urllib3 v2.6.3
    └── urllib3 v2.6.3
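
Command to demonstrate uv tree with a pyproject.toml based project instead; a hedged sketch that creates a throwaway project (the name example is arbitrary):

uv init example \
&& cd example \
&& uv add gkeepapi \
&& uv tree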

The third option that I will sometimes reach for is pipdeptree.

Pipdeptree works with installed Python packages, for example from a virtual environment. That is often a huge benefit when working with proprietary software: installing packages from a company's infrastructure is typically a solved problem, and analysing what is already installed avoids integrating the analysis tool with source control or with company infrastructure like an internal package repository.

Pipdeptree can output visualisations via GraphViz and I have found that graphical output invaluable. I have incorporated it into both written material and presentations to stakeholders. Visualising dependency relationships as a graph can really help with communication.
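
Command to write the gkeepapi subgraph as an SVG; a hedged sketch that assumes pipdeptree and the graphviz Python package are installed in the active virtual environment and that the dot binary is available:

pipdeptree --packages gkeepapi --graph-output svg > gkeepapi.svg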

Uv's tree subcommand and the name pipdeptree both suggest working with trees. A property of a tree is that it is acyclic, in other words it does not contain any loops. Unfortunately not every Python dependency graph is acyclic.

Professionally, I've worked with sets of twenty or thirty proprietary packages that include cycles in their dependency graphs. One package depends on another that in turn depends on the first. I recommend avoiding cycles. They can surprise developers, for example by requiring coordination when releasing new versions. If cycles are unavoidable then ensuring they are well understood, with tools like pipdeptree and GraphViz, helps.

Pipdeptree also shows any dependency ranges specified in package metadata and a number of warnings. Both can be very helpful when debugging packaging or installation issues.

Commands to demonstrate pipdeptree:

uv tool run virtualenv --quiet .venv \
&& uv pip install --quiet gkeepapi pipdeptree \
&& .venv/bin/pipdeptree

Output:

gkeepapi==0.17.1
├── gpsoauth [required: >=1.1.0, installed: 2.0.0]
│   ├── pycryptodomex [required: >=3.0, installed: 3.23.0]
│   ├── requests [required: >=2.0.0, installed: 2.32.5]
│   │   ├── charset-normalizer [required: >=2,<4, installed: 3.4.4]
│   │   ├── idna [required: >=2.5,<4, installed: 3.11]
│   │   ├── urllib3 [required: >=1.21.1,<3, installed: 2.6.3]
│   │   └── certifi [required: >=2017.4.17, installed: 2026.1.4]
│   └── urllib3 [required: >=1.26.0, installed: 2.6.3]
└── future [required: >=0.16.0, installed: 1.0.0]
pipdeptree==2.30.0
├── packaging [required: >=25, installed: 26.0]
└── pip [required: >=25.2, installed: 25.3]

I appreciate that I've introduced another tool above, virtualenv. This is to avoid a warning from pipdeptree. I'll go into more detail on that warning in a follow-up post.

To recap, when I'm thinking 'what dependencies does X bring in' I reach for:

  1. johnnydep if X is straightforward or if X is on PyPI or
  2. uv tree if the dependency on X is already or easily codified in inline script metadata or pyproject.toml or
  3. pipdeptree if X is proprietary, if I want to visualise the dependency graph or if I want detailed information on version ranges.

How to avoid setting a hostname using Cloud-init base configuration

Posted on Monday 1 December 2025.

In resolving an error running an Incus container on GitHub Actions, I recently learnt about Cloud-init base configuration. This post describes the error, a solution and behaviour with user-data that I found unintuitive.

To make integration tests running on GitHub Actions more portable I often use Incus. Recently launching an images:fedora/43/cloud container began to fail with an error "Failed to set the hostname…". The Cloud-init logs didn't help identify a root cause.

Excerpt from /var/log/cloud-init.log:

2025-11-30 13:14:38,815 - subp.py[DEBUG]: Running command ['hostnamectl', 'set-hostname', 'c1'] with allowed return codes [0] (shell=False, capture=True)
2025-11-30 13:14:38,820 - log_util.py[WARNING]: Failed to set the hostname to c1 (c1)
2025-11-30 13:14:38,820 - log_util.py[DEBUG]: Failed to set the hostname to c1 (c1)
Traceback (most recent call last):
  File "/usr/lib/python3.14/site-packages/cloudinit/config/cc_set_hostname.py", line 86, in handle
    cloud.distro.set_hostname(hostname, fqdn)
    ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.14/site-packages/cloudinit/distros/__init__.py", line 392, in set_hostname
    self._write_hostname(writeable_hostname, self.hostname_conf_fn)
    ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.14/site-packages/cloudinit/distros/rhel.py", line 119, in _write_hostname
    subp.subp(["hostnamectl", "set-hostname", str(hostname)])
    ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.14/site-packages/cloudinit/subp.py", line 291, in subp
    raise ProcessExecutionError(
        stdout=out, stderr=err, exit_code=rc, cmd=args
    )
cloudinit.subp.ProcessExecutionError: Unexpected error while running command.
Command: ['hostnamectl', 'set-hostname', 'c1']
Exit code: 1
Reason: -
Stdout:
Stderr: Failed to connect to system scope bus via local transport: No such file or directory
2025-11-30 13:14:38,822 - main.py[DEBUG]: Failed setting hostname in local stage. Will retry in network stage. Error: Failed to set the hostname to c1 (c1): Unexpected error while run
Command: ['hostnamectl', 'set-hostname', 'c1']
Exit code: 1
Reason: -
Stdout:
Stderr: Failed to connect to system scope bus via local transport: No such file or directory.

The integration tests in question did not depend upon the hostname so I disabled the calls to hostnamectl. There are two related Cloud-init modules that can call hostnamectl: Set Hostname and Update Hostname. Both accept a configuration option:

preserve_hostname: (boolean) If true, the hostname will not be changed. Default: false.

With preserve_hostname: true in the base configuration in /etc/cloud/cloud.cfg.d/*.cfg, Cloud-init does not run hostnamectl.

Contents of 99-preserve-hostname.cfg:

preserve_hostname: true

Command to launch a container with a custom base configuration:

incus create images:fedora/43/cloud c1 \
&& incus file push 99-preserve-hostname.cfg c1/etc/cloud/cloud.cfg.d/ \
&& incus start c1

Command to view log excerpts:

incus exec c1 -- grep -e preserve_hostname -e hostnamectl /var/log/cloud-init.log

Output of command to view log excerpts:

2025-11-30 18:53:11,841 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:53:12,454 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:53:12,501 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:53:12,502 - cc_update_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not updating the hostname in module update_hostname

This solution worked! A number of other potential solutions didn't. Disabling AppArmor, as suggested by a forum post, didn't help.

Reading the Cloud-init documentation about specifying configuration, user-data appears to be the appropriate place for an end user like me to specify preserve_hostname. Unfortunately after setting preserve_hostname in user-data, Cloud-init still calls hostnamectl.

Command to launch a container with preserve_hostname set in user-data:

incus launch images:fedora/43/cloud c1 <<EOF
config:
  cloud-init.user-data: |
    #cloud-config
    preserve_hostname: true
EOF

Output of command to view log excerpts (above):

2025-11-30 18:59:51,377 - subp.py[DEBUG]: Running command ['hostnamectl', 'set-hostname', 'c1'] with allowed return codes [0] (shell=False, capture=True)
2025-11-30 18:59:51,447 - performance.py[DEBUG]: Running ['hostnamectl', 'set-hostname', 'c1'] took 0.070 seconds
2025-11-30 18:59:51,712 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:59:51,713 - cc_update_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not updating the hostname in module update_hostname

The above log excerpts show that early in the Cloud-init run hostnamectl is called. They also show that later Cloud-init recognises the preserve_hostname configuration option and does not set the hostname. I found this unintuitive. Perhaps that is just an admission of the limits of my understanding of Cloud-init.

This investigation was a reminder that Cloud-init is complex. I can also think of many adjectives with more positive connotations for Cloud-init: powerful, flexible, widely adopted…


Splitting up a .apk file

Posted on Friday 14 November 2025.

This post starts with an explanation of the .apk file format from Alpine Linux. After that I demonstrate how the explanation matches an example file and I calculate checksums to match the package repository index. This .apk format is not the file format used by Android. Alpine Package Keeper is the name of the package manager for Alpine Linux, typically abbreviated apk.

Why? Because after reading (1) the apk spec and (2) a blog post titled 'APK, the strangest format', I was left with questions. For example:

Does a .apk file have two gzip streams or three?

A .apk file contains three deflate compressed gzip streams. Each gzip stream contains data in tar format. In order:

Stream  Contents                     End of file marker  Demonstration file name
1       Signature for stream 2       No                  1.tar.gz
2       Metadata including .PKGINFO  No                  control.tar.gz
3       Files to be installed        Yes                 data.tar.gz

To prepare that summary table I looked into the process for creating a .apk with abuild, Alpine Linux's build tool. The abuild repository includes abuild-sign.

To create a .apk file:

  1. abuild creates data.tar.gz; this gzip stream is stream 3
  2. 〃 creates a tar file containing metadata
  3. 〃 calls abuild-tar --cut to remove the end of file marker
  4. 〃 calls gzip on the result; this gzip stream is stream 2
  5. 〃 calls abuild-sign on stream 2
  6. abuild-sign creates a signature for stream 2 using a private key
  7. 〃 adds that signature to another tar file
  8. 〃 removes the end of file marker
  9. 〃 compresses the result with gzip; this gzip stream is stream 1
  10. 〃 prepends stream 1 to stream 2
  11. abuild prepends the result, streams 1 and 2, to stream 3

The result is a .apk file made up of the three streams in order!

The most relevant part of abuild is from line 1894 onwards showing how stream 2 is created, abuild-sign is called and then streams 1 and 2 are prepended to stream 3:

apk_tar -T - < .metafiles | abuild-tar --cut \
            | $gzip -n -9 > control.tar.gz
abuild-sign -q control.tar.gz || exit 1

msg "Create $apk"
mkdir -p "$REPODEST/$repo/$(arch2dir "$subpkgarch")"
cat control.tar.gz data.tar.gz > "$REPODEST/$repo/$(arch2dir "$subpkgarch")/$apk"

The most relevant part of abuild-sign is from line 42 showing how stream 1 is created and prepended to stream 2:

apk_tar --owner=0 --group=0 --numeric-owner "$sig" | abuild-tar --cut | $gzip -n -9 > "$tmptargz"
tmpsigned=$(mktemp)
cat "$tmptargz" "$i" > "$tmpsigned"

The other relevant source code that I looked into was apk_tar in functions.sh and abuild-tar.c.

Treating a .apk as a .tar.gz

The tar format was originally developed for archiving files to magnetic tape storage. The end of an archive is marked with zeroes as an end of file marker. These markers were necessary because the tapes did not use a file system or other metadata. The end of a tar file on a disk is implied from other metadata. The apk spec terms tar archives without end of file markers 'tar segments'.

Wikipedia explains that a gzip stream can only compress a single file and that, if several streams are concatenated and then decompressed, the output is a single file.

In constructing .apk files the end of file markers are removed from streams 1 and 2. Stream 3 has an end of file marker. If the three streams in a .apk file are decompressed together the result is a tar file with a single end of file marker. Files can therefore be extracted from a .apk file as if it were a single .tar.gz file.
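
Command to list every member of the example .apk file downloaded later in this post as if it were a single .tar.gz file; the warning option, also used in the shell session below, silences notices about unknown extended headers:

tar --warning=no-unknown-keyword -tzf apk-tools-static-2.14.9-r3.apk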

Examining an example file

The gzip format is specified in RFC 1952. 'Section 2.3.1. Member header and trailer' shows that each stream should start with three bytes:

  1. 31 for ID1
  2. 139 for ID2
  3. 8 for the deflate Compression Method (CM)

Searching for these three bytes inside an example .apk file will help confirm the explanation above. This example uses the apk-tools-static package from the 3.22 release of Alpine Linux; latest-stable at the time of writing.

fetch_url.py
"""Fetch information about a package from APKINDEX."""

import gzip
import tarfile
from binascii import a2b_base64, hexlify
from io import BytesIO
from sys import argv
from urllib.request import urlopen

REPOSITORY = "https://dl-cdn.alpinelinux.org/alpine/v3.22/main"
ARCHITECTURE = "x86_64"

_FIELD = "C:"
_SHA1 = "Q1"
_APKINDEX_URL = f"{REPOSITORY}/{ARCHITECTURE}/APKINDEX.tar.gz"


def _main() -> int:
    if len(argv) == 2:
        package = argv[1]
    else:
        package = "apk-tools-static"
    block = _get_block(_apkindex(), package)
    line = _get_line(block, _FIELD)
    print(_get_url(block, package))
    print(line)
    base64 = line.removeprefix(_FIELD + _SHA1)
    print(hexlify(a2b_base64(base64)).decode())
    return 0


def _get_line(block: str, prefix: str) -> str:
    return [i for i in block.splitlines() if i.startswith(prefix)][0]


def _get_field(block, prefix: str) -> str:
    return _get_line(block, prefix).removeprefix(prefix)


def _get_url(block: str, package: str) -> str:
    version = _get_field(block, "V:")
    return f"{REPOSITORY}/{ARCHITECTURE}/{package}-{version}.apk"


def _get_block(apkindex: str, package: str) -> str:
    blocks = apkindex.strip().split("\n\n")
    return next(filter(lambda i: _get_field(i, "P:") == package, blocks))


def _apkindex() -> str:
    with urlopen(_APKINDEX_URL) as response:
        compressed_data = response.read()

    compressed_stream = BytesIO(compressed_data)

    with gzip.open(compressed_stream, "rb") as gz, tarfile.open(fileobj=gz) as tar:
        fileobj = tar.extractfile("APKINDEX")
        if fileobj is None:
            return ""
        with fileobj as file:
            content = file.read()
    return content.decode()


if __name__ == "__main__":
    raise SystemExit(_main())

Command to display a URL for an example file, with a checksum from the APKINDEX displayed twice, the second time as hexadecimal:

python fetch_url.py

Output:

https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/apk-tools-static-2.14.9-r3.apk
C:Q1a98grx1S3fI18wuhEHZPelGxtPo=
6bdf20af1d52ddf235f30ba110764f7a51b1b4fa

Command to download the example .apk file:

wcurl \
  https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/apk-tools-static-2.14.9-r3.apk
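
Command to count occurrences of the three gzip header bytes in the downloaded file; a rough check only, because the same byte sequence could in principle also occur by chance inside compressed data:

python3 -c 'from pathlib import Path; print(Path("apk-tools-static-2.14.9-r3.apk").read_bytes().count(bytes([31, 139, 8])))'

If the explanation above holds, and there are no chance matches, the output is 3.
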
hash.py
"""Calculate a checksum line to match APKINDEX from a .apk file."""

from base64 import b64encode
from hashlib import sha1
from pathlib import Path
from sys import argv

HEADER = bytes([31, 139, 8])
PREFIX = "C:Q1"


def _main() -> int:
    if len(argv) != 2:
        print("No filename provided.")
        return 1

    file = Path(argv[1])
    with file.open("rb") as file:
        data = file.read()

    control_start = data.find(HEADER, len(HEADER))
    data_start = data.rfind(HEADER)

    checksum = sha1()
    checksum.update(data[control_start:data_start])
    print(PREFIX + b64encode(checksum.digest()).decode())
    return 0


# ruff: noqa: S324 Alpine Linux uses SHA1 in APKINDEX


if __name__ == "__main__":
    raise SystemExit(_main())

Command to generate a checksum line from the downloaded file:

python hash.py apk-tools-static-2.14.9-r3.apk

Output:

C:Q1a98grx1S3fI18wuhEHZPelGxtPo=

split.py
"""Split up a .apk file."""

from pathlib import Path
from sys import argv

HEADER = bytes([31, 139, 8])


def _main() -> int:
    if len(argv) != 2:
        print("No filename provided.")
        return 1

    file = Path(argv[1])
    with file.open("rb") as file:
        data = file.read()

    control_start = data.find(HEADER, len(HEADER))
    data_start = data.rfind(HEADER)

    Path("1.tar.gz").write_bytes(data[:control_start])
    Path("control.tar.gz").write_bytes(data[control_start:data_start])
    Path("data.tar.gz").write_bytes(data[data_start:])

    return 0


if __name__ == "__main__":
    raise SystemExit(_main())

Command to split up an apk file into three:

python split.py apk-tools-static-2.14.9-r3.apk

Shell session showing the contents of the three files:

% tar tf 1.tar.gz
.SIGN.RSA.alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub

% tar tf control.tar.gz
.PKGINFO

% tar --warning=no-unknown-keyword -tf data.tar.gz
sbin/
sbin/apk.static
sbin/apk.static.SIGN.RSA.alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub
sbin/apk.static.SIGN.RSA.sha256.alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub

% sha1sum control.tar.gz
6bdf20af1d52ddf235f30ba110764f7a51b1b4fa  control.tar.gz

The output above is the hexadecimal encoding of the same checksum that is encoded with base64 and prefixed with C:Q1 in the APKINDEX. It matches the output from fetch_url.py above.

Shell session to demonstrate the datahash field from .PKGINFO in control.tar.gz:

% tar xf control.tar.gz .PKGINFO

% tail -n 1 .PKGINFO
datahash = 0845a99f49833a760e8f1745417fc1bba0c4740a40ec10288537e3acd9f045a9

% sha256sum data.tar.gz
0845a99f49833a760e8f1745417fc1bba0c4740a40ec10288537e3acd9f045a9  data.tar.gz

Why are you writing about the .apk file format?

Because for the first time in ages I'm enthusiastic about Alpine Linux.

The first Linux distribution I ever used was Slackware. Then for a long time Gentoo. Back in 2017–2020 I was full of enthusiasm for Alpine Linux. Alpine Linux was originally based on Gentoo. I was a fan; even a contributor. After that I drifted away; I only use Alpine Linux occasionally and my contributions stopped.

I recently read a blog post by Filippo Valsorda, who maintains the Go cryptography standard library. Filippo writes about running a Linux based Network Attached Storage device from RAM; a topic I hope to revisit. He described Alpine Linux as:

a simple, well-packaged, lightweight, GNU-less Linux distribution

I also recently read about Chimera Linux, which has a FreeBSD userland and build recipes written in Python. It tries to innovate and overall:

[Chimera Linux] wants to be simple and grokkable, but also practical and unassuming.

Why am I talking about Chimera Linux? Because it also uses apk-tools from Alpine Linux. Version 3 of apk-tools is in the final stages of testing before a stable release as of July 2025. I am in the middle of setting up my own hardware running Alpine Linux for the first time in at least five years and I hope to post on the topic again soon.


How I use the Zig build system to install ncdu

Posted on Thursday 16 October 2025.

I have an old Linux system that I intend to back up and then update. Before performing a manual backup I like to understand disk usage. I have used a few different tools for this task:

Project    Language  Notes
coreutils  C         Widely available!
diskonaut  Go        No release in 5 years, no commits in 3.
dua-cli    Rust
ncdu       C / Zig   Wikipedia page

This year ncdu is 18 years old. A few years ago I read about a rewrite. I was a fan of the original version. Version 2 of ncdu is implemented in Zig. The only Zig software that I regularly use today is Ghostty, and I use Ghostty on both Mac and Linux.

I'm interested in Zig the language and Zig the build system. For example, I found the case study of Uber using the Zig toolchain for cross compilation interesting. The Zig website states:

Not only can you write Zig code instead of C or C++ code, but you can use Zig as a replacement for autotools, cmake, make, scons, ninja, etc.

I'm interested in learning more about the Zig build system, much more than the alternatives mentioned, and this post is a chance to try it out. Below I use a Fedora Linux 42 container to build the latest release of ncdu from source. I chose Fedora Linux 42, released on 15 April 2025, because it is the first version to package the Zig ncdu implementation, so system-level dependencies should be straightforward.

Launch a container and install system-level dependencies

Command to launch a container:

incus launch images:fedora/42/cloud c1

Command to install the required system-level dependencies:

incus exec c1 -- dnf install \
  libzstd-devel \
  ncurses-devel \
  minisign \
  git-core \
  wcurl \
  make \
  gcc

Download the latest ncdu and Zig releases

Command to download Zig 0.15.1 and a signature:

incus exec c1 -- wcurl \
  https://ziglang.org/download/0.15.1/zig-x86_64-linux-0.15.1.tar.xz \
  https://ziglang.org/download/0.15.1/zig-x86_64-linux-0.15.1.tar.xz.minisig

Command to verify the download with minisign against the public key from the Zig download page:

incus exec c1 -- minisign \
    -V \
    -m zig-x86_64-linux-0.15.1.tar.xz \
    -P RWSGOq2NVecA2UPNdBUZykf1CCb147pkmdtYxgb3Ti+JO/wCYvhbAb/U

Expected output:

Signature and comment signature verified
Trusted comment: timestamp:1755707121   file:zig-x86_64-linux-0.15.1.tar.xz     hashed

Command to extract the Zig compiler:

incus exec c1 -- tar xf zig-x86_64-linux-0.15.1.tar.xz

Command to clone the source code for the latest release of ncdu:

incus exec c1 -- git clone --config advice.detachedHead=false \
  https://code.blicky.net/yorhel/ncdu.git --branch v2.9.1

It is safe to ignore the warning below; it is displayed because v2.9.1 is an annotated git tag:

warning: refs/tags/v2.9.1 79a0f4f623adfef4488593c3bbfda21e74f34f5c is not a commit!

Build and install

Command to run the build:

incus exec c1 -- sh -c 'PATH="/root/zig-x86_64-linux-0.15.1/:$PATH" make -C ncdu'

Command to install the resulting binary on the host system:

incus file pull c1/root/ncdu/zig-out/bin/ncdu ~/.local/bin

Clean up

Command to stop and delete the container:

incus stop c1 && incus delete c1

Why install GCC above?

I install GCC to avoid an issue relating to the linker script installed as /usr/lib64/libncursesw.so. Although the issue is closed, I cannot confirm it is resolved, because I ran into other issues building ncdu with a Zig nightly version. Unfortunately system-level dependencies weren't as straightforward as I expected.

Detailed error message without GCC installed

Output from make:

make: Entering directory '/root/ncdu'
zig build --release=fast -Dstrip
install
└─ install ncdu
   └─ compile exe ncdu ReleaseFast native 1 errors
error: ld.lld: unable to find library -ltinfo
error: the following command failed with 1 compilation errors:
/root/zig-x86_64-linux-0.15.1/zig build-exe -D_DEFAULT_SOURCE -D_XOPEN_SOURCE=600 -lncursesw -ltinfo -lzstd -fstrip -OReleaseFast -Mroot=/root/ncdu/src/main.zig -lc --cache-dir .zig-cache --global-cache-dir /root/.cache/zig --name ncdu --zig-lib-dir /root/zig-x86_64-linux-0.15.1/lib/ --listen=-

Build Summary: 0/3 steps succeeded; 1 failed
install transitive failure
└─ install ncdu transitive failure
   └─ compile exe ncdu ReleaseFast native 1 errors

error: the following build command failed with exit code 1:
.zig-cache/o/7ade27cbf6b5118e3c7fe0ce076f4a3f/build /root/zig-x86_64-linux-0.15.1/zig /root/zig-x86_64-linux-0.15.1/lib /root/ncdu .zig-cache /root/.cache/zig --seed 0x1af56ad3 -Z2b12229399a4fdd0 --release=fast -Dstrip
make: *** [Makefile:20: release] Error 1
make: Leaving directory '/root/ncdu'

Contents of /usr/lib64/libncursesw.so as described in the issue report:

INPUT(libncursesw.so.6 -ltinfo)

Commentary

At first I ignored the Makefile, but I ran into an error (failed to parse shared library: UnexpectedEndOfFile) because I was trying to produce a debug build. Now I am happy with the approach above.

Installing minisign is perhaps extra complexity. The project does not plan to provide "SHA" hashes, though these are published in https://ziglang.org/download/index.json. The instructions above to verify a download are taken from a closed issue.

It was unfortunate to run into the issue around -ltinfo. I am slightly more positive after this experience with the build system. Zig is an attractive systems programming language and toolchain.


How I use Dnsmasq to resolve test domains

Posted on Wednesday 15 October 2025.

I run an Incus container with Dnsmasq and a specific 'A' record, for example pointing c1.example.keithmaxwell.uk to 127.0.0.1 and forwarding other queries to the Google DNS servers.

Why? So that I can:

  • Test self-hosted services on my local developer workstation,

  • Use Incus and Linux containers for faster feedback on a developer workstation before I begin deploying to production hardware and

  • Become more familiar with OpenWRT. OpenWRT supports a wide range of networking hardware and I anticipate running OpenWRT on the router for my production hardware.

In practice developer workstation means laptop and production hardware means consumer electronics like Raspberry Pis!

An entry in /etc/hosts would serve exactly the same purpose here, with a lot less to go wrong. In the production environment I intend to use OpenWRT, so arguably I should use OpenWRT in this test environment too. For me Dnsmasq is a splendid piece of software with a lot of other uses; for example it can be used as an ad blocker. Learning about one way to deploy Dnsmasq, using OpenWRT, has potential beyond that of a line in /etc/hosts.
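
For comparison, the /etc/hosts alternative is a single line:

127.0.0.1 c1.example.keithmaxwell.uk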

The rest of this post assumes that Incus is already installed and configured, with systemd-resolved integrated.

Launch and configure the OpenWRT container

The first step is to launch and configure the container:

  • to use the Google servers for DNS,
  • to open port 53 in the OpenWRT firewall and
  • to serve a DNS record for c1.example.keithmaxwell.uk.

Below the OpenWRT container will be called o1.

Command to launch o1:

incus launch images:openwrt/24.10 o1 \
&& until incus exec o1 logread 2>/dev/null \
    | grep --quiet -- '- init complete -' ; do printf . && sleep 1; done \
&& incus exec o1 -- uci set network.wan.peerdns=0 \
&& incus exec o1 -- uci set network.wan.dns="8.8.8.8 8.8.4.4" \
&& incus exec o1 -- uci set network.wan6.peerdns=0 \
&& incus exec o1 -- uci set network.wan6.dns="2001:4860:4860::8888 2001:4860:4860::8844" \
&& incus exec o1 -- uci commit network \
&& incus exec o1 -- service network reload \
&& incus exec o1 -- uci add firewall rule \
&& incus exec o1 -- uci set 'firewall.@rule[-1].name=Allow-dnsmasq' \
&& incus exec o1 -- uci set 'firewall.@rule[-1].src=wan' \
&& incus exec o1 -- uci set 'firewall.@rule[-1].dest_port=53' \
&& incus exec o1 -- uci set 'firewall.@rule[-1].proto=udp' \
&& incus exec o1 -- uci set 'firewall.@rule[-1].target=ACCEPT' \
&& incus exec o1 -- uci commit firewall \
&& incus exec o1 -- service firewall reload \
&& incus exec o1 -- uci add_list \
    'dhcp.@dnsmasq[0].address=/c1.example.keithmaxwell.uk/127.0.0.1' \
&& incus exec o1 -- uci commit dhcp \
&& incus exec o1 -- service dnsmasq reload

Use the OpenWRT container for DNS on the host

Commands to point systemd-resolved on the host to o1:

printf 'DNS=%s\n' "$(dig o1.incus +short)" \
| sudo tee -a /etc/systemd/resolved.conf \
&& sudo systemctl restart systemd-resolved

Check a few DNS queries

Commands to query DNS:

dig c1.example.keithmaxwell.uk +short \
&& dig dns.google.com +short

Expected output:

127.0.0.1
8.8.8.8
8.8.4.4

Clean up

Commands to manually remove o1 from the host's DNS configuration:

sudo $EDITOR /etc/systemd/resolved.conf \
&& sudo systemctl restart systemd-resolved

Commands to clean up:

incus stop o1 \
&& incus delete o1

Troubleshooting

Commands to turn on logging of DNS queries in the container:

incus exec o1 -- uci set dhcp.@dnsmasq\[0\].logqueries=1 \
&& incus exec o1 -- uci commit dhcp \
&& incus exec o1 -- service dnsmasq restart

Command to follow the above logs from the container:

incus exec o1 -- logread -e dnsmasq -f

Please note that there are multiple layers of caching in the Domain Name System. Only queries that are passed to o1 will appear in these logs.

Command to run a DNS query skipping the local systemd-resolved cache:

resolvectl query --cache=no c1.example.keithmaxwell.uk

Explaining the configuration files

The dnsmasq service uses the DNS servers configured for the WAN via /tmp/resolv.conf.d/resolv.conf.auto.
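
Command to display that file from the host; the output should include the Google servers configured earlier:

incus exec o1 -- cat /tmp/resolv.conf.d/resolv.conf.auto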

Command to start a shell inside o1:

incus exec o1 -- sh

Command to display the dnsmasq command line from the above shell, replacing the NUL separators in /proc/<pid>/cmdline with spaces:

tr '\0' ' ' < "/proc/$(pidof dnsmasq)/cmdline" && echo

How does Fedora build disk images?

Posted on Sunday 20 July 2025.

Fedora has lots of tools for building disk images in ISO format; for example imagefactory, livecd-tools, lorax, kiwi-cli and image-builder are all currently packaged. I plan to build an image to follow the YubiKey guide and I want to use a popular and maintained tool; ideally I'll use the tool Fedora uses for release artifacts. There is some confusion over which is used for the official Fedora Workstation Live ISO images ("ISOs") today.

TL;DR: ISOs are built in the Koji build system with a long-running project from openSUSE called KIWI (Documentation, GitHub). Look at a specific build to confirm: under logs and then the relevant architecture, root.log shows a call to kiwi-ng which logs to image-root.«architecture».log.

That's a very narrow answer; there is more to the topic. How did Fedora build ISOs in the past? Are there changes planned in the future?

Before release 24, in June 2016, Fedora used livecd-tools to build the ISOs. Historically kickstart files were used to specify these release images. Fedora 24 was the first release to use livemedia-creator which is part of Lorax.

In November 2016, livecd-tools started to support Python 3 and switched from yum to dnf. Today livecd-tools has unique features like persistent overlays. There remains some overlap between livecd-tools and Lorax.

Around April 2024 (release 40) Fedora began to build additional ISOs with Image Builder. Image Builder is a Red Hat project with support for OSTree. Initially these builds were performed by a separate service, until a change was made for Fedora 43 to run Image Builder inside Koji. Image Builder includes composer-cli and osbuild-composer; for an introduction see this 2021 article in Fedora Magazine. Pungi is the software used to produce all of the artifacts, including the ISOs, for each Fedora release. Fedora stores configuration files for Pungi in pungi-fedora. According to fedora.conf in that repository, today the only thing built with Image Builder is a raw image for aarch64.

In April 2025 (Fedora 42) a PR changed the build system for the ISOs to KIWI. The fedora-kiwi-descriptions repository contains the configuration and a table showing the different editions, types and profiles. KIWI doesn't support OSTree.

From related Fedora Discussion threads (1, 2, 3) I gather that in the future Fedora may use Image Builder.


How I double check writing a disk image

Posted on Sunday 6 July 2025.

While I know USB flash drives are unreliable, I still use them as installation media. Depending on the circumstances I use different software to write a disk image to a physical drive. Even if the software includes a check on the written data, I remove the drive from the system and later double check.

I use a separate two-step process to double check that data read from the drive matches the disk image:

  1. Count the number of bytes in the image
  2. Read that number of bytes from the drive and generate a checksum

The two-step process is necessary because the image file and physical drive are practically never the same size. It is straightforward to use stat, head and sha256sum from GNU coreutils to implement this process.

This example uses ~/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso, left behind after creating a bootable Fedora Workstation 43 USB.

Command to display the size of the ISO in bytes:

stat --format=%s ~/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso

Output:

2742190080

Command to read 2,742,190,080 bytes from the drive and then generate checksums for that data and the image file:

sudo head --bytes=2742190080 /dev/sda \
| sha256sum - ~/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso

Output:

2a4a16c009244eb5ab2198700eb04103793b62407e8596f30a3e0cc8ac294d77  -
2a4a16c009244eb5ab2198700eb04103793b62407e8596f30a3e0cc8ac294d77  /home/maxwell-k/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso

This matches the values in the corresponding checksum file:

# Fedora-Workstation-Live-43-1.6.x86_64.iso: 2742190080 bytes
SHA256 (Fedora-Workstation-Live-43-1.6.x86_64.iso) = 2a4a16c009244eb5ab2198700eb04103793b62407e8596f30a3e0cc8ac294d77
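
The two steps can also be combined into a single command; a hedged sketch, substituting the image path and device as appropriate:

image=~/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso \
&& sudo head --bytes="$(stat --format=%s "$image")" /dev/sda \
| sha256sum - "$image"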

This page has been updated since initial publication to use more recent Fedora Linux images.


How I run Renovate locally

Posted on Wednesday 26 March 2025.

Why? So that feedback is available quickly; so that I can efficiently iterate on Renovate configuration.

… So that I can more easily configure automated dependency updates. Renovate creates pull requests to update dependencies and supports configuration to automatically merge certain updates.

… So that I can efficiently pin and update dependencies in a controlled manner.

… So that I avoid:

  1. Unexpected breakage from incompatible dependencies and
  2. Manual work to keep dependencies up to date and
  3. Becoming "stuck" on old, outdated software versions.

I think that Renovate is a great tool to help keep software dependencies up to date. I use Renovate both locally and via the "Mend Renovate Community Cloud". The rest of this post sets out the steps I use to run Renovate locally.

Why now? I'm publishing this post today because, two days ago, setuptools released a new major version (78) that dropped support for uppercase or dash characters in setup.cfg. This led to discussion and a subsequent release reinstating the earlier behaviour. I am a fan of setuptools, which I have used extensively, and I fully support its maintainers. This was a helpful reminder of the value in pinning dependencies and automating updates. Renovate makes it straightforward to ensure an up-to-date, pinned build backend is specified in pyproject.toml.
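
For illustration only, a pinned build backend in pyproject.toml looks something like the snippet below; the version number is a placeholder rather than a recommendation:

[build-system]
# placeholder version; Renovate keeps a pin like this current
requires = ["setuptools==78.1.1"]
build-backend = "setuptools.build_meta"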

What? Ensure Renovate can run locally with a suitable version of Node.js and suitable credentials.

Prerequisites:

All of this software apart from Renovate itself can be installed from the system package repositories on Fedora 40.

Command to install pre-requisites on Fedora 40:

sudo dnf install \
    gh \
    jq \
    nodejs-npm \
    python3-keyring

A compatible version of Node.js

Install a version of Node.js that matches the engines key in package.json. Today that is:

"node": "^20.15.1 || ^22.11.0",

Command to show the current node version:

npm version --json | jq --raw-output .node

Example output:

20.18.2

If a suitable version is not available from the system package manager then I recommend fnm.

A GitHub authentication token

Depending upon the repository configuration, if Renovate is run without a GitHub token it will either display a warning or fail. An example warning message is below:

WARN: GitHub token is required for some dependencies (repository=local)

For me, the easiest way to securely retrieve and store an access token for GitHub is to use the GitHub command line interface (CLI), gh. The CLI stores a token for its own use in the system keyring. First ensure the CLI is installed.

Command to check status of the token used by gh:

gh auth status --show-token

Command to retrieve the token used by gh:

keyring get gh:github.com ""

A suitable shell command

Command to run Renovate with debugging output:

GITHUB_COM_TOKEN=$(keyring get gh:github.com "") \
LOG_LEVEL=debug \
npm exec --yes renovate -- --platform=local

Exploring rate limiting with NGINX

Posted on Thursday 6 February 2025.

Why? To better understand rate limiting in NGINX; working through this 2017 blog post: https://blog.nginx.org/blog/rate-limiting-nginx.

What? Set up an Ubuntu 20.04 Long Term Support (Focal Fossa) container running NGINX. Load test using the artillery command-line interface.

Prerequisites:

  1. Incus installed and configured, with a default profile that includes networking and storage.
  2. Local networking configured to integrate Incus and resolved.

A system container

In brief the following steps will use Incus and https://cloud-init.io/ to:

  1. Start a container from an Ubuntu 20.04 Focal Fossa image
  2. Update the local package metadata and upgrade all packages
  3. Install NGINX
  4. Serve an HTML page containing "Hello world"
  5. Rate limit requests to one per second

Contents of config.yaml:

config:
  user.vendor-data: |
    #cloud-config
    package_update: true
    package_upgrade: true
    packages: [nginx]
    write_files:
      - content: |
          limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

          server {
              listen 80;
              listen [::]:80;

              server_name c1.incus;

              root /var/www/c1.incus;
              index index.html;

              location / {
                  try_files $uri $uri/ =404;
                  limit_req zone=mylimit;
              }
          }
        path: /etc/nginx/conf.d/c1.conf
      - content: |
          <!doctype html>
          <html lang="en-US">
            <head>
              <meta charset="utf-8">
              <title>Hello world</title>
            </head>
            <body>
              <p>Hello world</p>
            </body>
          </html>
        path: /var/www/c1.incus/index.html

Command to launch an Incus container called c1 using the above configuration:

incus launch images:ubuntu/focal/cloud c1 < config.yaml

Load testing software

The latest version of artillery (artillery@2.0.22) requires a specific version of Node.js: >= 22.13.0. This can be installed using Fast Node Manager.

Command to install the specific version of Node.js:

fnm install v22.13.1

Command to run the latest artillery:

fnm exec --using=v22.13.1 npm exec --yes artillery@2.0.22 -- --version

Demonstrate the default 503 HTTP response status code

Command to run the test:

fnm exec --using=v22.13.1 npm exec artillery@2.0.22 -- quick http://c1.incus

Partial output:

✂
http.codes.200: ................................................................ 1
http.codes.503: ................................................................ 99
http.downloaded_bytes: ......................................................... 20394
http.request_rate: ............................................................. 100/sec
http.requests: ................................................................. 100
✂

The test ran for 1 second and sent 100 requests per second for a total of 100 requests. 1 response had the 200 HTTP response status code and 99 had the 503 response status code.

Configure and demonstrate another HTTP response status code

Add another directive to the location block:

--- c1.conf
+++ c1.conf
@@ -12,5 +12,6 @@
     location / {
         try_files $uri $uri/ =404;
         limit_req zone=mylimit;
+        limit_req_status 429;
     }
 }

Command to reload the NGINX configuration:

incus exec c1 -- systemctl reload nginx

Partial output from re-running the test with artillery quick:

✂
http.codes.200: ................................................................ 1
http.codes.429: ................................................................ 99
✂

The HTTP response status codes changed from 503 to 429.
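
Command for a quick check without artillery; with the one request per second limit, the first request typically returns 200 and the immediate follow-ups 429:

for i in $(seq 3); do curl -s -o /dev/null -w '%{http_code}\n' http://c1.incus; done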

Updated 2025-02-11: use a simple test from the command line without a test script.


Configuration as code for DNS

Posted on Wednesday 6 November 2024.

I've wanted to move the DNS configuration for my domain into an open source infrastructure as code solution for some time. The first notes I made on the topic are from 2019!

I started managing keithmaxwell.uk in Route 53 using a web browser. Route 53 is the managed DNS service from Amazon Web Services (AWS). To me, two benefits of an infrastructure as code solution are traceability and portability. Portability would help with a move away from AWS to another managed DNS provider.

I'm aware of a range of specialised tools. I have ruled out Terraform because it isn't open source. Below I share some brief notes that I made about the options:

https://github.com/octodns/octodns

  • implemented in Python
  • typical configuration is in YAML
  • documented in the README.md
  • MIT licensed
  • project appears active, originally used at GitHub

https://github.com/AnalogJ/lexicon

  • implemented in Python
  • typically used as a CLI or Python API to manipulate DNS records
  • some links in the online documentation 404
  • MIT licensed
  • project appears active

https://github.com/StackExchange/dnscontrol

  • implemented in Go
  • typical configuration is in a Domain Specific Language (DSL) that is similar to JavaScript
  • detailed documentation including a migration guide
  • MIT licensed
  • project appears active, originated at "StackOverflow / StackExchange"

https://github.com/Netflix/denominator

  • implemented in Java
  • typically used as a CLI or Java API to manipulate DNS records
  • documented in the README.md
  • Apache 2 licensed
  • last commit was eight years ago

https://github.com/pulumi/pulumi-aws

  • implemented in Go
  • supports configuration in Python or JavaScript
  • detailed documentation, for example about Route 53
  • Apache 2 licensed
  • project appears active

https://github.com/opentofu/opentofu

  • implemented in Go
  • typical configuration is in a DSL, also supports JSON configuration
  • detailed documentation
  • MPL 2.0 licensed
  • the project is around a year old and appears to be active

All of the options above support Route 53.

Sometimes a distinction is made between declarative and imperative tools. Viewed that way I'm looking for a declarative tool for this task.

I have used Pulumi for small projects and I have significant experience with the versions of Terraform that OpenTofu was forked from. From that personal experience I expect there will be a requirement to manage state data if adopting Pulumi or OpenTofu.

After reviewing these options I've decided to start with dnscontrol, for three reasons:

  1. The high quality documentation especially the migration guide
  2. The absence of a requirement to manage state and
  3. The apparent health of the open source project.

Serial cable for Raspberry Pi 2 B

Posted on Tuesday 5 November 2024.

I have two or three Raspberry Pi 2 B single board computers. I've had them a long time and they've mostly been gathering dust. I now plan to make use of them. I want to work with them efficiently, so inspired by this 2021 blog post I decided to buy a USB to serial converter. Another popular author and YouTuber has written about the same topic. The serial converter cost about £10 and should be delivered in a few days. In doing a little research before the purchase, I looked at the schematic and I learnt:

The remaining pins are all general-purpose 3V3 pins, meaning that the outputs are set to 3.3 volts and the inputs are tolerant of 3.3 volts.

β€”https://www.futurelearn.com/info/courses/robotics-with-raspberry-pi/0/steps/75878

I also came across a forum post with a reassuring, beginner-friendly explanation of serial communication.

I have ordered a couple of cases too.


How I install soft-serve on Debian

Posted on Sunday 29 September 2024.

This is a simple deployment of soft-serve on Debian 12 Bookworm using Incus. Eventually I will install this service onto hardware running Debian directly. At this stage Incus is a great way to experiment in disposable system containers.

In case you aren't already aware, system containers, as implemented by LXD and Incus, simulate a full operating system. This is in contrast to the single process typically packaged in a Docker, Podman or Kubernetes container. Here I'm going to configure and test a systemd service so Incus is a good fit.

One extra piece of complexity is that I use Cog and Python to get up-to-date public SSH keys from GitHub.

Pre-requisites: curl, GPG, Incus and the Incus / systemd-resolved integration.

Process

Command to download the GPG key and remove the base64 (ASCII armor) encoding:

curl -s https://repo.charm.sh/apt/gpg.key \
| gpg --dearmor -o charm.gpg

Save the following text as ./charm.sources:

Types: deb
URIs: http://repo.charm.sh/apt/
Suites: *
Components: *
Signed-By: /etc/apt/keyrings/charm.gpg

Save the following as soft-serve.conf:

# Based upon https://github.com/charmbracelet/soft-serve/blob/main/.nfpm/soft-serve.conf
# vim: set ft=conf.cog :
#
# [[[cog
# import urllib.request
# f = urllib.request.urlopen("https://github.com/maxwell-k.keys")
# cog.outl(f"SOFT_SERVE_INITIAL_ADMIN_KEYS='{f.read().decode().strip()}'")
# ]]]
SOFT_SERVE_INITIAL_ADMIN_KEYS='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2ey56D7MlKkZXZZPu6vY1Y/f5KM8vQ8gghiWCbQlUkLlJAXWEKzPymU3FRSJO8EkrNvHw+7DlMizhpjOLyfSNKfxbRkbs/3DYUd7mg5Y/a2z+EMDL975mNxkd7PFwjnDF0MFXnfuVYUqCLZMNoUyVRE8sZUuVgrkVWeME9Wqqh/69v4W//V5ImjqxCFXnI73ATrot0I1hRDPM339TW/EVMakxBjyutYW5/W7bWCu1nEu7T3SZrQZLrVNrp2FHL9cy4Dl9iwyL0Jhp72o9NiaKjRUZqM9OGz5dGRZ3ALmPddqLJP6PUAPaLRPl14ef09ErXmQFn27RNT2zj3IJK5NF'
# [[[end]]]
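
Command to regenerate the key between the Cog markers in place; a hedged example assuming the cogapp package, which provides the cog command:

uv tool run --from=cogapp cog -r soft-serve.conf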

Command to launch a container and run soft-serve:

incus launch images:debian/12 c1 \
&& incus exec c1 -- sh -c "until systemctl is-system-running >/dev/null 2>&1 ; do : ; done" \
&& incus exec c1 -- apt-get update \
&& incus exec c1 -- apt-get upgrade \
&& incus exec c1 -- apt-get install --yes ca-certificates \
&& incus file push charm.gpg c1/etc/apt/keyrings/charm.gpg \
&& incus file push charm.sources c1/etc/apt/sources.list.d/charm.sources \
&& incus exec c1 -- apt-get update \
&& incus exec c1 -- apt-get install --yes soft-serve \
&& incus file push soft-serve.conf c1/etc/soft-serve.conf \
&& incus exec c1 -- systemctl enable --now soft-serve.service

Command to display user information:

ssh -p 23231 c1.incus info

Expected output:

Username: admin
Admin: true
Public keys:
  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2ey56D7MlKkZXZZPu6vY1Y/f5KM8vQ8gghiWCbQlUkLlJAXWEKzPymU3FRSJO8EkrNvHw+7DlMizhpjOLyfSNKfxbRkbs/3DYUd7mg5Y/a2z+EMDL975mNxkd7PFwjnDF0MFXnfuVYUqCLZMNoUyVRE8sZUuVgrkVWeME9Wqqh/69v4W//V5ImjqxCFXnI73ATrot0I1hRDPM339TW/EVMakxBjyutYW5/W7bWCu1nEu7T3SZrQZLrVNrp2FHL9cy4Dl9iwyL0Jhp72o9NiaKjRUZqM9OGz5dGRZ3ALmPddqLJP6PUAPaLRPl14ef09ErXmQFn27RNT2zj3IJK5NF

Commands to import an example repository:

ssh -p 23231 c1.incus repository import dotfiles https://github.com/maxwell-k/dotfiles

Command to connect interactively:

ssh -p 23231 c1.incus

Decisions

Decided to use https for the apt repository

HTTP is sometimes preferred for apt package distribution so that package data can be cached. For this repository HTTP redirects to HTTPS, so it is necessary to use HTTPS. Using HTTPS here means an extra step is required: installing the ca-certificates package.

Keyring is stored in '/etc/apt/keyrings'

The recommended locations for keyrings are /usr/share/keyrings for keyrings managed by packages, and /etc/apt/keyrings for keyrings managed by the system operator.

-- https://manpages.debian.org/unstable/apt/sources.list.5.en.html

References

After writing most of this post I found a blog post from an engineer at the company behind soft serve; it covers similar material to this post.


First post at the PyBelfast workshop

Posted on Wednesday 18 September 2024.

I created this repository at a local meetup. In this post I am loosely following the instructions provided by our host Kyle. I did a few things differently and I try to document my rationale here.

Use a new directory

For what it's worth, I think that it is important to work in a new directory and to treat this workshop as a separate project.

Commands to create a new directory for today's workshop, set it as the current working directory and set up an empty git repository:

mkdir --parents ~/github.com/maxwell-k/2024-09-18-pybelfast-workshop \
&& cd ~/github.com/maxwell-k/2024-09-18-pybelfast-workshop \
&& git init \
&& git branch -m main

Use β€˜uv tool run’

In my experience running entry-level Python workshops, initial setup is always time consuming. Especially installing an appropriate version of Python, possibly setting up a virtual environment and obtaining the correct libraries. Being able to help attendees who may be using Windows, Mac or Linux is challenging. This is both one of the hardest parts of a session and one of the first!

I tried to sidestep some of the issues here by using uv. Most of the group used Rye and my neighbour was unsure. Trying to help, I suggested using pipx to install Pelican. I had started out using pipx. However, first you need to install pipx; the pipx install instructions for Windows suggest using Scoop; that means you need the installation instructions for Scoop… it was turtles all the way down. The neighbour was confident with Conda so I left them to it.

In the end I preferred uv tool run over pipx for a couple of reasons:

  1. The uv installation instructions for Windows only use PowerShell, and Scoop isn't necessary.

  2. uv tool run supports specifying additional packages using --with; which will be relevant in the next section.

Command to run the quick-start:

uv tool run "--from=pelican[markdown]" pelican-quickstart

Many of the default answers where fine; a couple I defined are:

What is your time zone? [Europe/Rome] Europe/London

Do you want to generate a tasks.py/Makefile to automate generation and publishing? (Y/n) n

Use YAML metadata

I want to use YAML metadata because it is well supported by my editor configuration. It is also supported by the yaml-metadata plugin. At the minute it is possible to just use a pipx run --spec=pelican-yaml-metadata pelican command because the plugin depends on everything necessary. However I prefer the more transparent approach below.
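
For reference, a requirements.txt for this approach could be as small as the two lines below; this is an assumption, the original file isn't reproduced in this post:

# illustrative contents only
pelican[markdown]
pelican-yaml-metadata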

Command to run the site locally:

uv tool run --with-requirements=requirements.txt pelican --autoreload --listen

Then browse to http://127.0.0.1:8000/.

The command above may output a warning:

[23:12:13] WARNING Unable to watch path '/home/maxwell-k/github.com/maxwell-k/2024-09-18-pybelfast-workshop/content/images' as it does not exist. utils.py:843

Commands to address the warning:

mkdir --parents content/images \
&& touch content/images/.keep

Use the official GitHub actions workflow

I adopted the official workflow: https://github.com/getpelican/pelican/blob/main/.github/workflows/github_pages.yml. A helpful feature of this workflow is that SITEURL will "default to the URL of your GitHub Pages site, which is correct in most cases." Using this official workflow also allows me to remove publishconf.py.

Initially this workflow produced the following error:

Branch "main" is not allowed to deploy to github-pages due to environment protection rules.

To resolve this I configured permissions: go to 'Settings', then 'Environments', then 'github-pages' and make sure 'main' can deploy to this environment.

Allowing the workflow to be run manually, by adding workflow_dispatch:, is helpful for testing the repository settings.