This post feels like a throwback to the late 1990s. I'm publishing it because
the only straightforward instructions that I can find are for Debian / Ubuntu.
I wanted to check the appearance of this blog while working on the CSS. I was
looking at layout-shifts and the PageSpeed Insights score and I wanted to
check the appearance with a standard, default font. Google Chrome on this
version of Fedora Linux depends on the Liberation fonts. I understand that
Times New Roman is both very common and the default serif font for most
browsers. So I wanted to check the appearance of this blog with Times New Roman.
In the late 1990s, Times New Roman, along with Arial, Courier New, Webdings (!)
and other fonts, was published under a license on https://microsoft.com.
The license permits redistributing the fonts in their original form, so the
original .exe files are now mirrored on SourceForge. Debian publishes a
package, msttcorefonts, to install the fonts from these .exe files as .ttf files
to the local file system. The rest of this post demonstrates obtaining the
.ttf files this way using an Incus container; the same approach works for
Arial and the other fonts.
Commands to create an Incus container, prompt to accept the EULA, install the
Microsoft fonts, copy them to the host and then clean up the container:
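The command block from the original post is not reproduced here. A rough sketch
of the approach, assuming a Debian 12 image and that the contrib component
(which hosts msttcorefonts) can be enabled by editing /etc/apt/sources.list:
incus launch images:debian/12 fonts
incus exec fonts -- sh -c 'sed -i "s/ main$/ main contrib/" /etc/apt/sources.list && apt-get update'
incus exec fonts -- apt-get install msttcorefonts   # accept the EULA when prompted
incus file pull fonts/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf .   # exact file names may vary
incus delete --force fonts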
Sometimes; other times I prefer johnnydep or pipdeptree. This post discusses
the advantages of each from my perspective.
I recently started using pymupdf. First impressions are that it is a very
capable AGPL-3.0 Python library for analysing and extracting information from
PDFs. Before committing to pymupdf I wanted to understand "what dependencies
does pymupdf bring in?"
Command to display the pymupdf dependencies:
uv tool run johnnydep --verbose=0 pymupdf
Output:
name summary
------- ------------------------------------------------------------------------------------------------------------------------
pymupdf A high performance Python library for data extraction, analysis, conversion & manipulation of PDF (and other) documents.
Brilliant. No other dependencies.
This is an example of a common question I ask when I work with Python: "what
dependencies does X bring in?" Often X is an open source package from PyPI;
sometimes it's a proprietary package from elsewhere.
The answer for pymupdf is very simple: none. Another package that I looked at
recently, gkeepapi, gives a less simple answer. That's a better illustration
for the rest of this discussion.
What dependencies does gkeepapi bring in?
Command to display the gkeepapi dependencies:
uv tool run johnnydep --verbose=0 gkeepapi
Output:
name                                 summary
------------------------------------ -------------------------------------------------------------------------------------------------------
gkeepapi                             An unofficial Google Keep API client
├── future>=0.16.0                   Clean single-source support for Python 3 and 2
├── gpsoauth>=1.1.0                  A python client library for Google Play Services OAuth.
├── pycryptodomex>=3.0               Cryptographic library for Python
├── requests>=2.0.0                  Python HTTP for Humans.
│   ├── certifi>=2017.4.17           Python package for providing Mozilla's CA Bundle.
│   ├── charset_normalizer<4,>=2     The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.
│   ├── idna<4,>=2.5                 Internationalized Domain Names in Applications (IDNA)
│   └── urllib3<3,>=1.21.1           HTTP library with thread-safe connection pooling, file post, and more.
└── urllib3>=1.26.0                  HTTP library with thread-safe connection pooling, file post, and more.
A critical reader might point out that I used two tools in the examples above:
(1) uv and (2) johnnydep. Why did I choose to use two tools?
I've used johnnydep for a long time and I love the simplicity of its interface.
The default tool that I reach for when working with Python packages is uv.
Initially I adopted uv because of its speed; it also has a very active and
encouraging maintainer team.
For investigating dependencies, uv has a tree subcommand that requires either a project configured with a
pyproject.toml file or a script with inline dependency metadata.
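As a quick sketch (the project and package names here are only for illustration):
uv init example
cd example
uv add gkeepapi
uv tree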
The third option that I will sometimes reach for is pipdeptree.
Pipdeptree works with installed Python packages, for example from a virtual
environment. Analysing installed packages is often a huge benefit when
working with proprietary software. Installing packages from a company's
infrastructure is typically a solved problem. Analysing installed packages
avoids integrating the analysis tool with source control or company
infrastructure like an internal package repository.
Pipdeptree can output visualisations via GraphViz and I have found that
graphical output invaluable. I have incorporated it into both written material and
presentations to stakeholders. Visualising dependency relationships as a graph
can really help with communication.
Uv's tree subcommand and the name pipdeptree both suggest working with trees.
A property of a tree is that it is acyclic, in other words it does not contain
any loops. Unfortunately not every Python dependency graph is acyclic.
Professionally, I've worked with sets of twenty or thirty proprietary packages
that include cycles in their dependency graphs. One package depends on another
that in turn depends on the first. I recommend avoiding cycles. They can
surprise developers, for example by requiring coordination when releasing new
versions. If cycles are unavoidable then ensuring they are well understood with
tools like pipdeptree and GraphViz helps.
Pipdeptree also shows any dependency ranges specified in package metadata and a
number of warnings. Both can be very helpful when debugging packaging or
installation issues.
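As a sketch of how that looks in practice (the package name is only an example):
create a virtual environment with virtualenv, install the package under
investigation into it and then point pipdeptree at that environment, optionally
rendering a graph:
uv tool run virtualenv .venv
.venv/bin/pip install gkeepapi
uv tool run pipdeptree --python=.venv/bin/python
uv tool run --with=graphviz pipdeptree --python=.venv/bin/python --graph-output=svg > gkeepapi.svg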
I appreciate that I've introduced another tool above, virtualenv. This is to
avoid a warning from pipdeptree. I'll go into more detail on that warning in a
follow-up post.
To recap, when I'm thinking "what dependencies does X bring in" I reach for:
johnnydep if X is straightforward or if X is on PyPI, or
uv tree if the dependency on X is already or easily codified in inline
script metadata or pyproject.toml, or
pipdeptree if X is proprietary, if I want to visualise the dependency
graph or if I want detailed information on version ranges.
In resolving an error running an Incus container on GitHub Actions, I recently
learnt about Cloud-init base configuration. This post describes the error, a
solution and behaviour with user-data that I found unintuitive.
To make integration tests running on GitHub Actions more portable I often use
Incus. Recently launching an images:fedora/43/cloud container began to fail
with an error "Failed to set the hostname…". The Cloud-init logs didn't help
identify a root cause.
Excerpt from /var/log/cloud-init.log
2025-11-30 13:14:38,815 - subp.py[DEBUG]: Running command ['hostnamectl', 'set-hostname', 'c1'] with allowed return codes [0] (shell=False, capture=True)
2025-11-30 13:14:38,820 - log_util.py[WARNING]: Failed to set the hostname to c1 (c1)
2025-11-30 13:14:38,820 - log_util.py[DEBUG]: Failed to set the hostname to c1 (c1)
Traceback (most recent call last):
File "/usr/lib/python3.14/site-packages/cloudinit/config/cc_set_hostname.py", line 86, in handle
cloud.distro.set_hostname(hostname, fqdn)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/usr/lib/python3.14/site-packages/cloudinit/distros/__init__.py", line 392, in set_hostname
self._write_hostname(writeable_hostname, self.hostname_conf_fn)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.14/site-packages/cloudinit/distros/rhel.py", line 119, in _write_hostname
subp.subp(["hostnamectl", "set-hostname", str(hostname)])
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.14/site-packages/cloudinit/subp.py", line 291, in subp
raise ProcessExecutionError(
stdout=out, stderr=err, exit_code=rc, cmd=args
)
cloudinit.subp.ProcessExecutionError: Unexpected error while running command.
Command: ['hostnamectl', 'set-hostname', 'c1']
Exit code: 1
Reason: -
Stdout:
Stderr: Failed to connect to system scope bus via local transport: No such file or directory
2025-11-30 13:14:38,822 - main.py[DEBUG]: Failed setting hostname in local stage. Will retry in network stage. Error: Failed to set the hostname to c1 (c1): Unexpected error while run
Command: ['hostnamectl', 'set-hostname', 'c1']
Exit code: 1
Reason: -
Stdout:
Stderr: Failed to connect to system scope bus via local transport: No such file or directory.
The integration tests in question did not depend upon the hostname so I disabled
the calls to hostnamectl. There are two related Cloud-init modules that can
call hostnamectl: Set Hostname and Update Hostname. Both accept a
configuration option:
preserve_hostname: (boolean) If true, the hostname will not be changed.
Default: false.
With preserve_hostname: true in the base configuration in
/etc/cloud/cloud.cfg.d/*.cfg, Cloud-init does not run hostnamectl.
Contents of 99-preserve-hostname.cfg:
preserve_hostname: true
Command to launch a container with a custom base configuration:
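The launch command itself is not preserved in this excerpt; a sketch of the
approach, assuming the file is pushed into /etc/cloud/cloud.cfg.d/ before the
container first boots:
incus init images:fedora/43/cloud c1
incus file push 99-preserve-hostname.cfg c1/etc/cloud/cloud.cfg.d/99-preserve-hostname.cfg
incus start c1
The Cloud-init log then shows the hostname modules being skipped: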
2025-11-30 18:53:11,841 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:53:12,454 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:53:12,501 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:53:12,502 - cc_update_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not updating the hostname in module update_hostname
This solution worked! A number of other potential solutions didn't. Disabling
AppArmor as suggested by a forum post didn't help.
Reading the Cloud-init documentation about specifying configuration, user-data
appears to be the appropriate place for an end user like me to specify
preserve_hostname. Unfortunately after setting preserve_hostname
in user-data, Cloud-init still calls hostnamectl.
Command to launch a container with preserve_hostname set in user-data:
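Again the exact command is not preserved; a sketch, assuming Incus's
cloud-init.user-data configuration key:
incus launch images:fedora/43/cloud c1 \
  --config=cloud-init.user-data="$(printf '#cloud-config\npreserve_hostname: true\n')"
The log still shows hostnamectl being run early in the boot: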
2025-11-30 18:59:51,377 - subp.py[DEBUG]: Running command ['hostnamectl', 'set-hostname', 'c1'] with allowed return codes [0] (shell=False, capture=True)
2025-11-30 18:59:51,447 - performance.py[DEBUG]: Running ['hostnamectl', 'set-hostname', 'c1'] took 0.070 seconds
2025-11-30 18:59:51,712 - cc_set_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not setting the hostname in module set_hostname
2025-11-30 18:59:51,713 - cc_update_hostname.py[DEBUG]: Configuration option 'preserve_hostname' is set, not updating the hostname in module update_hostname
The above log excerpts show that early in the Cloud-init run hostnamectl is
called. They also show that later Cloud-init recognises the preserve_hostname
configuration option and does not set the hostname. I found this unintuitive.
Perhaps that is just an admission of the limits of my understanding of
Cloud-init.
This investigation was a reminder that Cloud-init is complex. I can also think
of many adjectives with more positive connotations for Cloud-init: powerful,
flexible, widely adopted…
This post starts with an explanation of the .apk file format from Alpine Linux.
After that I demonstrate how the explanation matches an example file and I calculate
checksums to match the package repository index. This .apk format is not the
file format used by Android. Alpine Package Keeper is the name of the package
manager for Alpine Linux, typically abbreviated apk.
A .apk file contains three deflate compressed gzip streams. Each gzip stream
contains data in tar format. In order:
Stream  Contents                     End of file marker  Demonstration file name
------  ---------------------------  ------------------  -----------------------
1       Signature for stream 2       No                  1.tar.gz
2       Metadata including .PKGINFO  No                  control.tar.gz
3       Files to be installed        Yes                 data.tar.gz
To prepare that summary table I looked into the process for creating a .apk with
abuild, Alpine Linux's build tool. The abuild repository includes abuild-sign.
To create a .apk file:
abuild creates data.tar.gz; this gzip stream is stream 3
abuild creates a tar file containing metadata
abuild calls abuild-tar --cut to remove the end of file marker
abuild calls gzip on the result; this gzip stream is stream 2
abuild calls abuild-sign on stream 2
abuild-sign creates a signature for stream 2 using a private key
abuild-sign adds that signature to another tar file
abuild-sign removes the end of file marker
abuild-sign compresses the result with gzip; this gzip stream is stream 1
abuild-sign prepends stream 1 to stream 2
abuild prepends the result, streams 1 and 2, to stream 3
The result is a .apk file made up of the three streams in order!
The most relevant part of abuild is from line 1894 onwards, showing how stream
2 is created, abuild-sign is called and then streams 1 and 2 are prepended to
stream 3:
The tar format was originally developed for archiving files to magnetic
tape storage. The end of an archive is marked with zeroes as an end of file
marker. These markers were necessary because the tapes did not use a file system
or other metadata. The end of a tar file on a disk is implied from other
metadata. The apk spec terms tar archives without end of file markers "tar
segments".
Wikipedia explains that a gzip stream can only compress a single file. If
three streams are concatenated and then decompressed the output is a single
file.
In constructing .apk files the end of file markers are removed from streams
1 and 2. Stream 3 has an end of file marker. If the three streams in a .apk
file are decompressed together the result is a tar file with a single end of
file marker. Files can therefore be extracted from a .apk file as if it were a
single .tar.gz file.
Examining an example file
The gzip format is specified in RFC1952. "Section 2.3.1. Member header and
trailer" shows that each stream should start with three bytes:
31 for ID1
139 for ID2
8 for the deflate Compression Method (CM)
Searching for these three bytes inside an example .apk file will help confirm
the explanation above. This example uses the apk-tools-static package from the
3.22 release of Alpine Linux; latest-stable at the time of writing.
fetch_url.py
"""Fetch information about a package from APKINDEX."""importgzipimporttarfilefrombinasciiimporta2b_base64,hexlifyfromioimportBytesIOfromsysimportargvfromurllib.requestimporturlopenREPOSITORY="https://dl-cdn.alpinelinux.org/alpine/v3.22/main"ARCHITECTURE="x86_64"_FIELD="C:"_SHA1="Q1"_APKINDEX_URL=f"{REPOSITORY}/{ARCHITECTURE}/APKINDEX.tar.gz"def_main()->int:iflen(argv)==2:package=argv[1]else:package="apk-tools-static"block=_get_block(_apkindex(),package)line=_get_line(block,_FIELD)print(_get_url(block,package))print(line)base64=line.removeprefix(_FIELD+_SHA1)print(hexlify(a2b_base64(base64)).decode())return0def_get_line(block:str,prefix:str)->str:return[iforiinblock.splitlines()ifi.startswith(prefix)][0]def_get_field(block,prefix:str)->str:return_get_line(block,prefix).removeprefix(prefix)def_get_url(block:str,package:str)->str:version=_get_field(block,"V:")returnf"{REPOSITORY}/{ARCHITECTURE}/{package}-{version}.apk"def_get_block(apkindex:str,package:str)->str:blocks=_apkindex().strip().split("\n\n")returnnext(filter(lambdai:_get_field(i,"P:")==package,blocks))def_apkindex()->str:withurlopen(_APKINDEX_URL)asresponse:compressed_data=response.read()compressed_stream=BytesIO(compressed_data)withgzip.open(compressed_stream,"rb")asgz,tarfile.open(fileobj=gz)astar:fileobj=tar.extractfile("APKINDEX")iffileobjisNone:return""withfileobjasfile:content=file.read()returncontent.decode()if__name__=="__main__":raiseSystemExit(_main())
Command to display a URL for an example file; with a checksum from the
APKINDEX displayed twice; the second time as hexadecimal:
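The command and its output are not preserved in this excerpt; presumably it was
something along the lines of:
python fetch_url.py apk-tools-static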
"""Calculate a checksum line to match APKINDEX from a .apk file."""frombase64importb64encodefromhashlibimportsha1frompathlibimportPathfromsysimportargvHEADER=bytes([31,139,8])PREFIX="C:Q1"def_main()->int:iflen(argv)!=2:print("No filename provided.")return1file=Path(argv[1])withfile.open("rb")asfile:data=file.read()control_start=data.find(HEADER,len(HEADER))data_start=data.rfind(HEADER)checksum=sha1()checksum.update(data[control_start:data_start])print(PREFIX+b64encode(checksum.digest()).decode())return0# ruff: noqa: S324 Alpine Linux uses SHA1 in APKINDEXif__name__=="__main__":raiseSystemExit(_main())
Command to generate a checksum line from the downloaded file:
python hash.py apk-tools-static-2.14.9-r3.apk
Output:
C:Q1a98grx1S3fI18wuhEHZPelGxtPo=
split.py
"""Split up a .apk file."""frompathlibimportPathfromsysimportargvHEADER=bytes([31,139,8])def_main()->int:iflen(argv)!=2:print("No filename provided.")return1file=Path(argv[1])withfile.open("rb")asfile:data=file.read()control_start=data.find(HEADER,len(HEADER))data_start=data.rfind(HEADER)Path("1.tar.gz").write_bytes(data[:control_start])Path("control.tar.gz").write_bytes(data[control_start:data_start])Path("data.tar.gz").write_bytes(data[data_start:])return0# ruff: noqa: S324 Alpine Linux uses SHA1if__name__=="__main__":raiseSystemExit(_main())
Command to split up an apk file into three:
python split.py apk-tools-static-2.14.9-r3.apk
Shell session showing the contents of the three files:
% tar tf 1.tar.gz
.SIGN.RSA.alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub
% tar tf control.tar.gz
.PKGINFO
% tar --warning=no-unknown-keyword -tf data.tar.gz
sbin/
sbin/apk.static
sbin/apk.static.SIGN.RSA.alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub
sbin/apk.static.SIGN.RSA.sha256.alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub
% sha1sum control.tar.gz
6bdf20af1d52ddf235f30ba110764f7a51b1b4fa control.tar.gz
The value above is the hexadecimal encoding of the same checksum that is encoded with
base64 and prefixed with C:Q1 in the APKINDEX. This matches the output from
./fetch_url.py above.
Shell session to demonstrate the datahash field from .PKGINFO in control.tar.gz:
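The session itself is missing from this excerpt; a sketch of the sort of check
intended, assuming datahash holds the SHA-256 of the third stream, data.tar.gz:
% tar -xOf control.tar.gz .PKGINFO | grep datahash
% sha256sum data.tar.gz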
Because for the first time in ages I'm enthusiastic about Alpine Linux.
The first Linux distribution I ever used was Slackware. Then for a long time
Gentoo. Back in 2017–2020 I was full of enthusiasm for Alpine Linux. Alpine
Linux was originally based on Gentoo. I was a fan; even a contributor. After
that I drifted away; I only use Alpine Linux occasionally and my contributions
stopped.
I recently read a blog post by Filippo Valsorda, who maintains the Go
cryptography standard library. Filippo writes about running a Linux based
Network Attached Storage device from RAM; a topic I hope to revisit. He
described Alpine Linux as:
a simple, well-packaged, lightweight, GNU-less Linux distribution
I also recently read about Chimera Linux which has a FreeBSD user land and build recipes
written in Python. It tries to innovate and overall:
[Chimera Linux] wants to be simple and grokkable, but also practical and unassuming.
Why am I talking about Chimera Linux? Because it also uses apk-tools from
Alpine Linux. Version 3 of apk-tools is in the final stages of testing before
a stable release as of July 2025. I am in the middle of setting up my own
hardware running Alpine Linux for the first time in at least five years and I
hope to post on the topic again soon.
I have an old Linux system that I intend to back up and then update. Before
performing a manual backup I like to understand disk usage. I have used a few
different tools for this task:
This year ncdu is 18 years old. A few years ago I read about a rewrite. I
was a fan of the original version. Version 2 of ncdu is implemented in Zig.
The only Zig software that I regularly use today is Ghostty; and I use Ghostty
on both Mac and Linux.
I'm interested in Zig the language and Zig the build system. For example I found
the case study of Uber using the Zig tool chain for cross compilation
interesting. The Zig website states:
Not only can you write Zig code instead of C or C++ code, but you can use Zig
as a replacement for autotools, cmake, make, scons, ninja, etc.
Much more than the alternatives mentioned, I'm interested in learning more about
the Zig build system and this post is a chance to try it out. Below I use a
Fedora Linux 42 container to build the latest release of ncdu from source. I
chose Fedora Linux 42, released on 15 April 2025, because it is the first
version to package the Zig ncdu implementation, so system-level dependencies
should be straightforward.
Launch a container and install system-level dependencies
Command to launch a container:
incus launch images:fedora/42/cloud c1
Command to install the required system-level dependencies:
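The exact package list from the original post is not preserved here; a sketch,
assuming Zig, GCC and the ncurses development headers are enough:
incus exec c1 -- dnf install --assumeyes zig gcc ncurses-devel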
I install GCC to avoid an issue
relating to the linker script installed as /usr/lib64/libncursesw.so. Although
the issue is closed, I cannot confirm it is resolved because I ran into other
issues building ncdu with a Zig nightly version. Unfortunately system-level
dependencies weren't as straightforward as I expected.
Contents of /usr/lib64/libncursesw.so as described in the issue report:
INPUT(libncursesw.so.6 -ltinfo)
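The build commands themselves are also missing from this excerpt. A rough
sketch, assuming the ncdu 2.x source tarball from dev.yorhel.nl (the version
number below is hypothetical) and a release build with zig build:
incus exec c1 -- sh -c '
  curl --remote-name https://dev.yorhel.nl/download/ncdu-2.8.2.tar.gz &&
  tar xf ncdu-2.8.2.tar.gz &&
  cd ncdu-2.8.2 &&
  zig build -Doptimize=ReleaseFast &&
  ./zig-out/bin/ncdu --version
'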
Commentary
At first I ignored the Makefile, but I ran into an
error (failed to parse shared library: UnexpectedEndOfFile) because I was
trying to produce a debug build. Now I am happy with the approach above.
It was unfortunate to run into the issue around -ltinfo. I am slightly more
positive after this experience with the build system. Zig is an attractive
systems programming language and tool chain.
I run an Incus container with Dnsmasq and a specific "A" record, for example
pointing c1.example.keithmaxwell.uk to 127.0.0.1 and forwarding other queries
to the Google DNS servers.
Why? So that I can:
Test self-hosted services on my local developer workstation,
Use Incus and Linux containers for faster feedback on a developer workstation
before I begin deploying to production hardware and
Become more familiar with OpenWRT. OpenWRT supports a wide range of networking
hardware and I anticipate running OpenWRT on the router for my production
hardware.
In practice developer workstation means laptop and production hardware means
consumer electronics like Raspberry Pis!
An entry in /etc/hosts
would serve exactly the same purpose here; with a lot less to go wrong. In the
production environment I intend to use OpenWRT; so arguably I should use OpenWRT
in this test environment. For me Dnsmasq is a splendid piece of software with
a lot of other uses. For example it can be used as an ad blocker. Learning
about one way to deploy Dnsmasq, using OpenWRT, has potential beyond that of a
line in /etc/hosts.
The rest of this post assumes that Incus is already installed and configured,
with systemd-resolved integration.
Launch and configure the OpenWRT container
The first step is to launch and configure the container (a sketch of the
commands follows this list):
to use the Google servers for DNS,
to open port 53 in the OpenWRT firewall and
to serve a DNS record for c1.example.keithmaxwell.uk.
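A rough sketch of those steps, assuming the images:openwrt/23.05 image and
OpenWRT's UCI interface (the container name and upstream addresses are
illustrative):
incus launch images:openwrt/23.05 dns
incus exec dns -- uci add_list dhcp.@dnsmasq[0].server=8.8.8.8
incus exec dns -- uci add_list dhcp.@dnsmasq[0].server=8.8.4.4
incus exec dns -- uci add_list dhcp.@dnsmasq[0].address=/c1.example.keithmaxwell.uk/127.0.0.1
incus exec dns -- uci add firewall rule
incus exec dns -- uci set firewall.@rule[-1].name=Allow-DNS
incus exec dns -- uci set firewall.@rule[-1].src=wan
incus exec dns -- uci set firewall.@rule[-1].dest_port=53
incus exec dns -- uci set firewall.@rule[-1].target=ACCEPT
incus exec dns -- uci commit
incus exec dns -- /etc/init.d/dnsmasq restart
incus exec dns -- /etc/init.d/firewall restart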
Fedora has lots of tools for building disk images in ISO format; for example
imagefactory,
livecd-tools,
lorax,
kiwi-cli and
image-builder
are all currently packaged. I plan to build an image to follow the YubiKey
guide and I want to use a popular and maintained tool; ideally I'll use the
tool Fedora uses for release artifacts. There is some confusion over which is
used for the official Fedora Workstation Live ISO images ("ISOs") today.
TL;DR: ISOs are built in the Koji build system with a long-running project
from openSUSE called KIWI (Documentation, GitHub). Look at a specific
build to confirm: under logs and then the relevant architecture, root.log
shows a call to kiwi-ng which logs to image-root.«architecture».log.
That's a very narrow answer; there is more to the topic. How did Fedora build
ISOs in the past? Are there changes planned in the future?
Before release 24, in June 2016, Fedora used livecd-tools to build the ISOs.
Historically kickstart files were used to specify these release images. Fedora
24 was the first release to use livemedia-creator which is part of Lorax.
In November 2016,
livecd-tools started to support Python 3 and switched from yum to dnf.
Today livecd-tools has unique features like persistent overlays. There remains
some overlap between livecd-tools and Lorax.
Around April 2024 (release 40) Fedora began to build additional ISOs with
Image Builder. Image Builder is a Red Hat project with support for OSTree.
Initially these builds were performed by a separate service, until a
change was made
for Fedora 43 to run Image Builder inside Koji. Image Builder includes
composer-cli and osbuild-composer; for an introduction see this 2021
article in Fedora Magazine. Pungi is the software used to produce all of the
artifacts, including the ISOs, for each Fedora release. Fedora stores
configuration files for Pungi in pungi-fedora. According to fedora.conf in
that repository, today the only thing built with Image Builder is a raw image
for aarch64.
In April 2025 (Fedora 42) a PR changed the build system for the ISOs to
KIWI. The fedora-kiwi-descriptions repository contains the configuration and a
table showing the different editions, types and profiles. KIWI doesn't
support OSTree.
From related Fedora Discussion threads
(1,
2,
3)
I gather that in the future Fedora may use Image Builder.
While I know USB flash drives are unreliable, I still use them as
installation media. Depending on the circumstances I use different software to
write a disk image to a physical drive. Even if the software includes a check on
the written data, I remove the drive from the system and later double check.
I use a separate two step process to double check that data read from the drive
matches the disk image:
Count the number of bytes in the image
Read that number of bytes from the drive and generate a checksum
The two step process is necessary because the image file and physical drive are
practically never the same size. It is straightforward to use stat, head and
sha256sum from GNU coreutils to
implement this process.
This example uses ~/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso,
left behind after creating a bootable Fedora Workstation 43 USB drive.
Command to display the size of the ISO in bytes:
stat --format=%s ~/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso
Output:
2742190080
Command to read 2,742,190,080 bytes from the drive and then generate checksums
for that data and the image file:
sudo head --bytes=2742190080 /dev/sda \
| sha256sum - ~/Downloads/Fedora-Workstation-Live-43-1.6.x86_64.iso
Why? So that feedback is available quickly; so that I can efficiently
iterate on Renovate configuration.
… So that I can more easily configure automated dependency updates. Renovate
creates pull requests to update dependencies and supports configuration to
automatically merge certain updates.
… So that I can efficiently pin and update dependencies in a controlled manner.
… So that I avoid:
Unexpected breakage from incompatible dependencies and
Manual work to keep dependencies up to date and
Becoming "stuck" on old, outdated software versions.
I think that Renovate is a great software tool to help keep software
dependencies up to date. I use Renovate both locally and via the "Mend Renovate
Community Cloud". The rest of this post sets out the steps I use to run Renovate
locally.
Why now? I'm publishing this post today because, two days ago, setuptools
released a new major version (78) that dropped support for uppercase or dash
characters in setup.cfg. This led to discussion and a subsequent release
reinstating the earlier behaviour. I am a fan of setuptools, which I have used
extensively, and I fully support its maintainers. This was a helpful reminder of
the value in pinning dependencies and automating updates. Renovate makes it
straightforward to ensure an up to date, pinned, build backend is specified in
pyproject.toml.
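For illustration, a pinned build backend in pyproject.toml looks something like
this; the version number is hypothetical and is exactly the kind of line that
Renovate keeps current:
[build-system]
# hypothetical pinned version; Renovate proposes updates to this line
requires = ["setuptools==78.1.0"]
build-backend = "setuptools.build_meta"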
What? Ensure Renovate can run locally with a suitable version of Node.js and
suitable credentials.
Prerequisites:
All of this software apart from Renovate itself can be installed from the system
package repositories on Fedora 40.
Install a version of Node.js that matches the engines key in package.json.
Today that is:
"node": "^20.15.1 || ^22.11.0",
Command to show the current node version:
npm version --json | jq --raw-output .node
Example output:
20.18.2
If a suitable version is not available from the system package manager then I
recommend fnm.
A GitHub authentication token
Depending upon the repository configuration, if Renovate is run without a GitHub
token it will either display a warning or fail. An example warning message is
below:
WARN: GitHub token is required for some dependencies (repository=local)
For me, the easiest way to securely retrieve and store an access token for
GitHub is to use the command line interface (CLI). The CLI stores a token for
its own use in the system keyring. First ensure the CLI is installed.
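As a sketch of how the pieces fit together, assuming Renovate is invoked with
npx and reads the token from the GITHUB_COM_TOKEN environment variable:
gh auth login                              # once, stores a token in the system keyring
export GITHUB_COM_TOKEN="$(gh auth token)" # reuse that token for Renovate's github.com lookups
npx renovate --platform=local              # analyse the repository in the current directory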
The latest version of artillery,
artillery@2.0.22, requires a specific version of Node.js: >= 22.13.0. This
can be installed using Fast Node Manager.
Command to install the specific version of Node.js:
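For example, assuming fnm is already installed:
fnm install 22.13.0
fnm use 22.13.0
node --version   # should now report v22.13.0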
The test ran for 1 second and sent 100 requests per second for a total of 100
requests. 1 response had the 200 HTTP response status code and 99 had the 503
response status code.
Configure and demonstrate another HTTP response status code
I've wanted to move the DNS configuration for my domain into an open source
infrastructure as code solution for some time. The first notes I made on the
topic are from 2019!
I started managing keithmaxwell.uk in Route 53 using a web browser. Route 53
is the managed DNS service from Amazon Web Services (AWS). To me, two benefits
of an infrastructure as code solution are traceability and portability.
Portability would help with a move away from AWS to another managed DNS
provider.
I'm aware of a range of specialised tools. I have ruled out Terraform because
it isn't open source. Below I share some brief notes that I made about the
options:
the project is around a year old and appears to be active
All of the options above support Route 53.
Sometimes a distinction is made between declarative and imperative tools.
Viewed that way I'm looking for a declarative tool for this task.
I have used Pulumi for small projects and I have significant experience with the
versions of Terraform that OpenTofu was forked from. From that personal
experience I expect there will be a requirement to manage state data if adopting
Pulumi or OpenTofu.
After reviewing these options I've decided to start with dnscontrol, for
three reasons:
The high quality documentation, especially the migration guide
I have two or three Raspberry Pi 2 B single board computers. I've had them a
long time and they've mostly been gathering dust. I now plan to make use of
them. I want to work with them efficiently, so inspired by this 2021 blog post
I decided to buy a USB to serial converter. Another popular author and
YouTuber has written about the same topic. The serial converter cost about £10
and should be delivered in a few days. In doing a little research before the
purchase, I looked at the schematic and I learnt:
The remaining pins are all general-purpose 3V3 pins, meaning that the outputs
are set to 3.3 volts and the inputs are tolerant of 3.3 volts.
This is a simple deployment of soft-serve on Debian 12 Bookworm using Incus.
Eventually I will install this service onto hardware running Debian directly. At
this stage Incus is a great way to experiment in disposable system containers.
In case you aren't already aware, system containers, as implemented by LXD and
Incus, simulate a full operating system. This is in contrast to the single
process typically packaged in a Docker, Podman or Kubernetes container. Here I'm
going to configure and test a systemd service so Incus is a good fit.
One extra piece of complexity is that I use Cog and Python to get up to date
public SSH keys from GitHub.
Pre-requisites: curl, GPG, Incus and the Incus / systemd-resolved integration.
Process
Command to download the GPG key and remove the base 64 encoding:
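The command itself is not preserved in this excerpt; a sketch, assuming Charm's
apt repository at repo.charm.sh and a container named c1:
curl -fsSL https://repo.charm.sh/apt/gpg.key | gpg --dearmor > charm.gpg
incus exec c1 -- mkdir --parents /etc/apt/keyrings
incus file push charm.gpg c1/etc/apt/keyrings/charm.gpg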
HTTP is sometimes preferred for apt package distribution so that package data
can be cached. For this repository HTTP redirects to HTTPS; so it is necessary
to use HTTPS. Using HTTPS here means that an extra step installing the
ca-certificates package is required.
Keyring is stored in "/etc/apt/keyrings"
The recommended locations for keyrings are /usr/share/keyrings for keyrings
managed by packages, and /etc/apt/keyrings for keyrings managed by the system
operator.
I created this repository at a local meetup. In this post I am loosely
following the instructions provided by our host Kyle. I did a few things
differently and I try to document my rationale here.
Use a new directory
For what it's worth, I think that it is important to work in a new directory and
to treat this workshop as a separate project.
Commands to create a new directory for today's workshop, set it as the current
working directory and set up an empty git repository:
mkdir --parents ~/github.com/maxwell-k/2024-09-18-pybelfast-workshop \
&& cd ~/github.com/maxwell-k/2024-09-18-pybelfast-workshop \
&& git init \
&& git branch -m main
Use "uv tool run"
In my experience running entry-level Python workshops, initial setup is always
time consuming. Especially installing an appropriate version of Python, possibly
setting up a virtual environment and obtaining the correct libraries. Being able
to help attendees who may be using Windows, Mac or Linux is challenging. This is
both one of the hardest parts of a session and one of the first!
I tried to sidestep some of the issues here by using uv. Most of the group
used Rye and my neighbour was unsure. Trying to help I suggested using pipx to
install Pelican. I had started out using pipx. However, first you need to
install pipx; the pipx install instructions for Windows suggest using
Scoop; that means you need the installation instructions for
Scoop… it was turtles all of the way down. The neighbour was confident with
Conda so I left them to it.
In the end I preferred uv tool run over pipx for a couple of reasons:
The uv installation instructions for Windows only use PowerShell and Scoop
isn't necessary.
uv tool run supports specifying additional packages using --with; which
will be relevant in the next section.
Command to run the quick-start:
uv tool run "--from=pelican[markdown]" pelican-quickstart
Many of the default answers were fine; a couple I defined are:
What is your time zone? [Europe/Rome] Europe/London
Do you want to generate a tasks.py/Makefile to automate generation and
publishing? (Y/n) n
Use YAML metadata
I want to use YAML metadata because it is well supported by my editor
configuration. It is also supported by the yaml-metadata plugin. At the
minute it is possible to just use a
pipx run --spec=pelican-yaml-metadata pelican command because the plugin
depends on everything necessary. However I prefer the more transparent
approach below.
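The requirements.txt used in the next command is not reproduced in this
excerpt; a minimal sketch of its contents might be:
pelican[markdown]
pelican-yaml-metadata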
Command to create a directory to address a warning and run the site locally:
uv tool run --with-requirements=requirements.txt pelican --autoreload --listen
[23:12:13] WARNING Unable to watch path '/home/maxwell-k/github.com/maxwell-k/2024-09-18-pybelfast-workshop/content/images' as it does not exist. utils.py:843
Initially this workflow produced the following error:
Branch "main" is not allowed to deploy to github-pages due to environment protection rules.
To resolve this I configured permissions: go to "Settings", then "Environments",
then "github-pages" and make sure "main" can deploy to this environment.
Allowing manually running the workflow by adding workflow_dispatch: is helpful
for testing the repository settings.