Run Renovate locally

Posted on Wednesday 26 March 2025 in Computers.

Why? So that feedback is available quickly; so that I can efficiently iterate on Renovate configuration.

… So that I can more easily configure automated dependency updates. Renovate creates pull requests to update dependencies and supports configuration to automatically merge certain updates.

… So that I can efficiently pin and update dependencies in a controlled manner.

… So that I avoid:

  1. Unexpected breakage from incompatible dependencies and
  2. Manual work to keep dependencies up to date and
  3. Becoming “stuck” on outdated software versions.

I think that Renovate is a great tool to help keep software dependencies up to date. I use Renovate both locally and via the "Mend Renovate Community Cloud". The rest of this post sets out the steps I use to run Renovate locally.

Why now? I'm publishing this post today because, two days ago, setuptools released a new major version — 78 — that dropped support for uppercase and dash characters in setup.cfg keys. This led to discussion and a subsequent release reinstating the earlier behaviour. I am a fan of setuptools, which I have used extensively, and I fully support its maintainers. The episode was a helpful reminder of the value of pinning dependencies and automating updates. Renovate makes it straightforward to ensure an up-to-date, pinned build backend is specified in pyproject.toml.
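For illustration only, a pinned build backend in pyproject.toml could look like the following; the version number is an assumption for the example, and is the kind of line Renovate would keep up to date:

[build-system]
# illustrative pin; Renovate proposes updates to this line
requires = ["setuptools==78.1.0"]
build-backend = "setuptools.build_meta"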

What? Ensure Renovate can run locally with a suitable version of Node.js and suitable credentials.

Prerequisites:

Apart from Renovate itself, all of the software used below can be installed from the system package repositories on Fedora 40.

Command to install pre-requisites on Fedora 40:

sudo dnf install \
    gh \
    jq \
    nodejs-npm \
    python3-keyring

A compatible version of Node.js

Install a version of Node.js that matches the engines key in Renovate's package.json. Today that is:

"node": "^20.15.1 || ^22.11.0",

Command to show the current node version:

npm version --json | jq --raw-output .node

Example output:

20.18.2

If a suitable version is not available from the system package manager then I recommend fnm.
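For example, a sketch using fnm, with a version chosen from the engines range above:

fnm install v22.11.0
fnm exec --using=v22.11.0 node --version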

A GitHub authentication token

Depending upon the repository configuration, if Renovate is run without a GitHub token it will either display a warning or fail. An example warning message is below:

WARN: GitHub token is required for some dependencies (repository=local)

For me, the easiest way to securely retrieve and store an access token for GitHub is to use the GitHub command line interface (CLI), gh. The CLI stores a token for its own use in the system keyring. First ensure the CLI is installed.

Command to check status of the token used by gh:

gh auth status --show-token

Command to retrieve the token used by gh:

keyring get gh:github.com ""

A suitable shell command

Command to run Renovate with debugging output:

GITHUB_COM_TOKEN=$(keyring get gh:github.com "") \
LOG_LEVEL=debug \
npm exec --yes renovate -- --platform=local
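Renovate can also target a hosted repository instead of the local checkout. The following is only a sketch, not part of my workflow above; the repository name is an example and, as I understand it, --dry-run=full stops Renovate from creating real branches or pull requests:

RENOVATE_TOKEN=$(keyring get gh:github.com "") \
npm exec --yes renovate -- --dry-run=full maxwell-k/dotfiles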

Exploring rate limiting with NGINX

Posted on Thursday 6 February 2025 in Computers.

Why? To better understand rate limiting in NGINX; working through this 2017 blog post: https://blog.nginx.org/blog/rate-limiting-nginx.

What? Set up an Ubuntu 20.04 LTS (Focal Fossa) container running NGINX. Load test using the artillery command line interface (CLI).

Prerequisites:

  1. Incus installed and configured, with a default profile that includes networking and storage.
  2. Local networking configured to integrate Incus and systemd-resolved.

A system container

In brief, the following steps will use Incus and https://cloud-init.io/ to:

  1. Start a container from an Ubuntu 20.04 Focal Fossa image
  2. Update the local package metadata and upgrade all packages
  3. Install NGINX
  4. Serve an HTML page containing "Hello world"
  5. Rate limit requests to one per second

Contents of config.yaml:

config:
  user.vendor-data: |
    #cloud-config
    package_update: true
    package_upgrade: true
    packages: [nginx]
    write_files:
      - content: |
          limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

          server {
              listen 80;
              listen [::]:80;

              server_name c1.incus;

              root /var/www/c1.incus;
              index index.html;

              location / {
                  try_files $uri $uri/ =404;
                  limit_req zone=mylimit;
              }
          }
        path: /etc/nginx/conf.d/c1.conf
      - content: |
          <!doctype html>
          <html lang="en-US">
            <head>
              <meta charset="utf-8" />
              <title>Hello world</title>
            </head>
            <body>
              <p>Hello world</p>
            </body>
          </html>
        path: /var/www/c1.incus/index.html

Command to launch an Incus container called c1 using the above configuration:

incus launch images:ubuntu/focal/cloud c1 < config.yaml
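Before moving on it may be worth confirming that cloud-init has finished and that the page is being served; for example (the hostname relies on the resolved integration above):

incus exec c1 -- cloud-init status --wait \
&& curl http://c1.incus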

Load testing software

The latest version of artillery — artillery@2.0.22 — requires a specific version of Node.js: >= 22.13.0. This can be installed using Fast Node Manager.

Command to install the specific version of Node.js:

fnm install v22.13.1

Command to run the latest artillery:

fnm exec --using=v22.13.1 npm exec --yes artillery@2.0.22 -- --version

Demonstrate the default 503 HTTP response status code

Command to run the test:

fnm exec --using=v22.13.1 npm exec artillery@2.0.22 -- quick http://c1.incus

Partial output:

✂
http.codes.200: ................................................................ 1
http.codes.503: ................................................................ 99
http.downloaded_bytes: ......................................................... 20394
http.request_rate: ............................................................. 100/sec
http.requests: ................................................................. 100
✂

The test ran for 1 second and sent 100 requests per second, for a total of 100 requests. Because the rate limit is one request per second and no burst is configured, only 1 response had the 200 HTTP response status code; the other 99 had the 503 response status code.

Configure and demonstrate another HTTP response status code

Add another directive to the location block:

--- c1.conf
+++ c1.conf
@@ -12,5 +12,6 @@
     location / {
         try_files $uri $uri/ =404;
         limit_req zone=mylimit;
+        limit_req_status 429;
     }
 }

Command to reload the NGINX configuration:

incus exec c1 -- systemctl reload nginx

Partial output from re-running the test with artillery quick:

✂
http.codes.200: ................................................................ 1
http.codes.429: ................................................................ 99
✂

The HTTP response status codes changed from 503 to 429.
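The new status code can also be seen without artillery; two requests in quick succession should return 200 and then 429:

curl --silent --output /dev/null --write-out '%{http_code}\n' http://c1.incus \
&& curl --silent --output /dev/null --write-out '%{http_code}\n' http://c1.incus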

Updated 2025-02-11: use a simple test from the command line without a test script.


Configuration as code for DNS

Posted on Wednesday 6 November 2024 in Computers.

I've wanted to move the DNS configuration for my domain into an open source infrastructure as code solution for some time. The first notes I made on the topic are from 2019!

I started managing keithmaxwell.uk in Route 53 using a web browser. Route 53 is the managed DNS service from Amazon Web Services (AWS). To me, two benefits of an infrastructure as code solution are traceability and portability. Portability would help with a move away from AWS to another managed DNS provider.

I'm aware of a range of specialised tools. I've ruled out Terraform because it isn't open source. Below I share some brief notes that I made about the options:

https://github.com/octodns/octodns

  • implemented in Python
  • typical configuration is in YAML
  • documented in the README.md
  • MIT licensed
  • project appears active, originally used at GitHub

https://github.com/AnalogJ/lexicon

  • implemented in Python
  • typically used as a CLI or Python API to manipulate DNS records
  • some links in the online documentation 404
  • MIT licensed
  • project appears active

https://github.com/StackExchange/dnscontrol

  • implemented in Go
  • typical configuration is in a Domain Specific Language (DSL) that is similar to JavaScript
  • detailed documentation including a migration guide
  • MIT licensed
  • project appears active, originated at "StackOverflow / StackExchange"

https://github.com/Netflix/denominator

  • implemented in Java
  • typically used as a CLI or Java API to manipulate DNS records
  • documented in the README.md
  • Apache 2 licensed
  • last commit was eight years ago

https://github.com/pulumi/pulumi-aws

  • implemented in Go
  • supports configuration in Python or JavaScript
  • detailed documentation, for example about Route 53
  • Apache 2 licensed
  • project appears active

https://github.com/opentofu/opentofu

  • implemented in Go
  • typical configuration is in a DSL, also supports JSON configuration
  • detailed documentation
  • MPL 2.0 licensed
  • the project is around a year old and appears to be active

All of the options above support Route 53.

Sometimes a distinction is made between declarative and imperative tools. Viewed that way I'm looking for a declarative tool for this task.

I have used Pulumi for small projects and I have significant experience with the versions of Terraform that OpenTofu was forked from. From that personal experience I expect there will be a requirement to manage state data if adopting Pulumi or OpenTofu.

After reviewing these options I've decided to start with dnscontrol, for three reasons:

  1. The high-quality documentation, especially the migration guide
  2. The absence of a requirement to manage state and
  3. The apparent health of the open source project.
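To give a flavour of the DSL, below is a minimal sketch of a dnsconfig.js; it assumes provider entries named "none" and "r53" in a separate creds.json, and the address is from a documentation range rather than my real records:

var REG_NONE = NewRegistrar("none");
var DSP_R53 = NewDnsProvider("r53");

D("keithmaxwell.uk", REG_NONE, DnsProvider(DSP_R53),
    A("@", "203.0.113.1") // placeholder address
);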

Serial cable for Raspberry Pi 2 B

Posted on Tuesday 5 November 2024 in Computers.

I have two or three Raspberry Pi 2 B single board computers. I've had them a long time and they've mostly been gathering dust. I now plan to make use of them. I want to work with them efficiently, so, inspired by this 2021 blog post, I decided to buy a USB to serial converter. Another popular author and YouTuber has written about the same topic. The serial converter cost about £10 and should be delivered in a few days. In doing a little research before the purchase, I looked at the schematic and learnt:

The remaining pins are all general-purpose 3V3 pins, meaning that the outputs are set to 3.3 volts and the inputs are tolerant of 3.3 volts.

https://www.futurelearn.com/info/courses/robotics-with-raspberry-pi/0/steps/75878

I also came across a forum post with a reassuring, beginner-friendly explanation of serial communication.
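Once the converter arrives, I expect connecting to look something like the command below; the device path and the Raspberry Pi's conventional console baud rate of 115200 are assumptions on my part:

screen /dev/ttyUSB0 115200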

I have ordered a couple of cases too.


Install soft-serve on Debian

Posted on Sunday 29 September 2024 in Computers.

This is a simple deployment of soft-serve on Debian 12 Bookworm using Incus. Eventually I will install this service onto hardware running Debian directly. At this stage Incus is a great way to experiment in disposable system containers.

In case you aren't already aware, system containers, as implemented by LXD and Incus, simulate a full operating system. This is in contrast to the single process typically packaged in a Docker, Podman or Kubernetes container. Here I'm going to configure and test a systemd service, so Incus is a good fit.

One extra piece of complexity is that I use Cog and Python to get up-to-date public SSH keys from GitHub.

Pre-requisites: curl, GPG, Incus and the Incus / systemd-resolved integration.

Process

Command to download the GPG key and convert it from ASCII armor (base 64) to binary:

curl -s https://repo.charm.sh/apt/gpg.key \
| gpg --dearmor -o charm.gpg

Save the following text as ./charm.sources:

Types: deb
URIs: http://repo.charm.sh/apt/
Suites: *
Components: *
Signed-By: /etc/apt/keyrings/charm.gpg

Save the following as soft-serve.conf:

# Based upon https://github.com/charmbracelet/soft-serve/blob/main/.nfpm/soft-serve.conf
# vim: set ft=conf.cog :
#
# [[[cog
# import urllib.request
# f = urllib.request.urlopen("https://github.com/maxwell-k.keys")
# cog.outl(f"SOFT_SERVE_INITIAL_ADMIN_KEYS='{f.read().decode().strip()}'")
# ]]]
SOFT_SERVE_INITIAL_ADMIN_KEYS='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2ey56D7MlKkZXZZPu6vY1Y/f5KM8vQ8gghiWCbQlUkLlJAXWEKzPymU3FRSJO8EkrNvHw+7DlMizhpjOLyfSNKfxbRkbs/3DYUd7mg5Y/a2z+EMDL975mNxkd7PFwjnDF0MFXnfuVYUqCLZMNoUyVRE8sZUuVgrkVWeME9Wqqh/69v4W//V5ImjqxCFXnI73ATrot0I1hRDPM339TW/EVMakxBjyutYW5/W7bWCu1nEu7T3SZrQZLrVNrp2FHL9cy4Dl9iwyL0Jhp72o9NiaKjRUZqM9OGz5dGRZ3ALmPddqLJP6PUAPaLRPl14ef09ErXmQFn27RNT2zj3IJK5NF'
# [[[end]]]
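The Cog block above needs to be re-expanded whenever my keys change; one way to run it, assuming the cogapp package is installed, is:

python3 -m cogapp -r soft-serve.conf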

Command to launch a container and run soft-serve:

incus launch images:debian/12 c1 \
&& incus exec c1 -- sh -c "until systemctl is-system-running >/dev/null 2>&1 ; do : ; done" \
&& incus exec c1 -- apt-get update \
&& incus exec c1 -- apt-get upgrade \
&& incus exec c1 -- apt-get install --yes ca-certificates \
&& incus file push charm.gpg c1/etc/apt/keyrings/charm.gpg \
&& incus file push charm.sources c1/etc/apt/sources.list.d/charm.sources \
&& incus exec c1 -- apt-get update \
&& incus exec c1 -- apt-get install --yes soft-serve \
&& incus file push soft-serve.conf c1/etc/soft-serve.conf \
&& incus exec c1 -- systemctl enable --now soft-serve.service

Command to display user information:

ssh -p 23231 c1.incus info

Expected output:

Username: admin
Admin: true
Public keys:
  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2ey56D7MlKkZXZZPu6vY1Y/f5KM8vQ8gghiWCbQlUkLlJAXWEKzPymU3FRSJO8EkrNvHw+7DlMizhpjOLyfSNKfxbRkbs/3DYUd7mg5Y/a2z+EMDL975mNxkd7PFwjnDF0MFXnfuVYUqCLZMNoUyVRE8sZUuVgrkVWeME9Wqqh/69v4W//V5ImjqxCFXnI73ATrot0I1hRDPM339TW/EVMakxBjyutYW5/W7bWCu1nEu7T3SZrQZLrVNrp2FHL9cy4Dl9iwyL0Jhp72o9NiaKjRUZqM9OGz5dGRZ3ALmPddqLJP6PUAPaLRPl14ef09ErXmQFn27RNT2zj3IJK5NF

Command to import an example repository:

ssh -p 23231 c1.incus repository import dotfiles https://github.com/maxwell-k/dotfiles

Command to connect interactively:

ssh -p 23231 c1.incus

Decisions

Decided to use https for the apt repository

HTTP is sometimes preferred for apt package distribution so that package data can be cached. For this repository HTTP redirects to HTTPS, so it is necessary to use HTTPS. Using HTTPS here means that an extra step is required: installing the ca-certificates package.

Keyring is stored in ‘/etc/apt/keyrings’

The recommended locations for keyrings are /usr/share/keyrings for keyrings managed by packages, and /etc/apt/keyrings for keyrings managed by the system operator.

-- https://manpages.debian.org/unstable/apt/sources.list.5.en.html

References

After writing most of this post I found a blog post from an engineer at the company behind soft-serve; it covers similar material.


First post at the PyBelfast workshop

Posted on Wednesday 18 September 2024 in Computers.

I created this repository at a local meetup. In this post I am loosely following the instructions provided by our host Kyle. I did a few things differently and I try to document my rationale here.

Use a new directory

For what it's worth, I think it is important to work in a new directory and to treat this workshop as a separate project.

Commands to create a new directory for today's workshop, set it as the current working directory and set up an empty git repository:

mkdir --parents ~/github.com/maxwell-k/2024-09-18-pybelfast-workshop \
&& cd ~/github.com/maxwell-k/2024-09-18-pybelfast-workshop \
&& git init \
&& git branch -m main

Use ‘uv tool run’

In my experience running entry-level Python workshops, initial setup is always time consuming: installing an appropriate version of Python, possibly setting up a virtual environment and obtaining the correct libraries. Being able to help attendees who may be using Windows, Mac or Linux is challenging. This is both one of the hardest parts of a session and one of the first!

I tried to sidestep some of the issues here by using uv. Most of the group used Rye and my neighbour was unsure. Trying to help, I suggested using pipx to install Pelican. I had started out using pipx. However, first you need to install pipx; the pipx install instructions for Windows suggest using Scoop; that means you need the installation instructions for Scoop… it was turtles all the way down. The neighbour was confident with Conda so I left them to it.

In the end I preferred uv tool run over pipx for a couple of reasons:

  1. The uv installation instructions for Windows use only PowerShell, so Scoop isn't necessary.

  2. uv tool run supports specifying additional packages using --with, which will be relevant in the next section.

Command to run the quick-start:

uv tool run "--from=pelican[markdown]" pelican-quickstart

Many of the default answers were fine; a couple that I changed are:

What is your time zone? [Europe/Rome] Europe/London

Do you want to generate a tasks.py/Makefile to automate generation and publishing? (Y/n) n

Use YAML metadata

I want to use YAML metadata because it is well supported by my editor configuration. It is also supported by the yaml-metadata plugin. At the minute it is possible to just run pipx run --spec=pelican-yaml-metadata pelican because the plugin depends on everything necessary. However, I prefer the more transparent approach below.
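I haven't reproduced my exact requirements.txt here, but based on the packages mentioned above it could be as simple as:

pelican[markdown]
pelican-yaml-metadata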

Command to run the site locally:

uv tool run --with-requirements=requirements.txt pelican --autoreload --listen

Then browse to http://127.0.0.1:8000/.

The command above may output a warning:

[23:12:13] WARNING Unable to watch path '/home/maxwell-k/github.com/maxwell-k/2024-09-18-pybelfast-workshop/content/images' as it does not exist. utils.py:843

Commands to address the warning:

mkdir --parents content/images \
&& touch content/images/.keep

Use the official GitHub actions workflow

I adopted the official workflow — https://github.com/getpelican/pelican/blob/main/.github/workflows/github_pages.yml. A helpful feature of this workflow is that SITEURL will "default to the URL of your GitHub Pages site, which is correct in most cases." Using this official workflow also allows me to remove publishconf.py.

Initially this workflow produced the following error:

Branch "main" is not allowed to deploy to github-pages due to environment protection rules.

To resolve this I configured permissions: go to ‘Settings’, then ‘Environments’, then ‘github-pages’ and make sure ‘main’ can deploy to this environment.

Allowing the workflow to be run manually, by adding workflow_dispatch:, is helpful for testing the repository settings.
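For example, the trigger block of the workflow could look like the following; the push trigger mirrors the official workflow and the exact layout is an assumption:

on:
  push:
    branches: ["main"]
  workflow_dispatch: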