Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: “We will not hide problems.”

Feeds

August 29, 2025

Noah Meyerhans

Determining Network Online Status of Dualstack Cloud VMs

When a Debian cloud VM boots, it typically runs cloud-init at various points in the boot process. Each invocation can perform certain operations based on the host’s static configuration passed by the user, typically either through a well known link-local network service or an attached iso9660 drive image. Some of the cloud-init steps execute before the network comes up, and others at a couple of different points after the network is up.

I recently encountered an unexpected issue when configuring a dualstack (uses both IPv6 and legacy IPv4 networking) VM to use a custom apt server accessible only via IPv6. VM provisioning failed because it was unable to access the server in question, yet when I logged in to investigate, it was able to access the server without any problem. The boot had apparently gone smoothly right up until cloud-init’s Package Update Upgrade Install module called apt-get update, which failed and broke subsequent provisioning steps. The errors reported by apt-get indicated that there was no route to the service in question, which more accurately probably meant that there was not yet a route to the service. But there was shortly after, when I investigated.

This was surprising because the apt-get invocations occur in a cloud-init sequence that’s explicitly ordered after the network is configured according to systemd-networkd-wait-online. Investigation eventually led to similar issues encountered in other environments reported in Debian bug #1111791, “systemd: network-online.target reached before IPv6 address is ready”. The issue described in that bug is identical to mine, but the bug is tagged wontfix. The behavior is considered correct.

Why the default behavior is the correct one

While it’s a bit counterintuitive, the systemd-networkd behavior is correct, and it’s also not something we’d want to override in the cloud images. Without explicit configuration, systemd can’t accurately infer the intended network configuration of a given system. If a system is IPv6-only, systemd-networkd-wait-online will introduce unexpected delays in the boot process if it waits for IPv4, and vice-versa. If it assumes dualstack, things are even worse, because it would block for a long time (approximately two minutes) on any single-stack network before failing, leaving the host in a degraded state. So the most reasonable default behavior is to block until any one protocol is configured.

For these same reasons, we can’t change the systemd-networkd-wait-online configuration in our cloud images. All of the cloud environments we support offer both single stack and dual stack networking, so we preserve systemd’s default behavior.

What’s causing problems here is that IPv6 takes significantly longer to configure due to its more complex router solicitation + router advertisement + DHCPv6 setup process. So in this particular case, where I’ve got a dualstack VM that needs to access a v6-only apt server during the provisioning process, I need to find some mechanism to override systemd’s default behavior and wait for IPv6 connectivity specifically.

What won’t work

Cloud-init offers the ability to write out arbitrary files during provisioning, so writing a drop-in for systemd-networkd-wait-online.service is trivial. Unfortunately, this doesn’t give us everything we actually need. We still need to invoke systemctl daemon-reload to get systemd to actually apply the changes after we’ve written them, and of course we need to do that before the service actually runs. Cloud-init provides a bootcmd module that lets us run shell commands “very early in the boot process”, but it runs too early: before we’ve written out our configuration files. Similarly, it provides a runcmd module, but scripts there run towards the end of the boot process, far too late to be useful.

Instead of using the bootcmd facility simply to reload systemd’s config, it seemed possible that we could use it both to write the config and to trigger the reload, similar to the following:

 bootcmd:
   - mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
   - echo "[Service]" > /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
   - echo "ExecStart=" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
   - echo "ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
   - systemctl daemon-reload

But even that runs too late, as we can see in the logs that systemd-networkd-wait-online.service has completed before bootcmd is executed:

root@sid-tmp2:~# journalctl --no-pager -l -u systemd-networkd-wait-online.service
Aug 29 17:02:12 sid-tmp2 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 29 17:02:13 sid-tmp2 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
root@sid-tmp2:~# grep -F 'config-bootcmd ran' /var/log/cloud-init.log
2025-08-29 17:02:14,766 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully and took 0.467 seconds

At this point, it’s looking like there are few options left!

What eventually worked

I ended up identifying two solutions to the issue, both of which involve getting some other component of the provisioning process to run systemd-networkd-wait-online.

Solution 1

The first involves getting apt-get itself to wait for IPv6 configuration. The apt.conf configuration interface allows the definition of an APT::Update::Pre-Invoke hook that’s executed just before apt’s update operation. By writing the following to a file in /etc/apt/apt.conf.d/, we’re able to ensure that we have IPv6 connectivity before apt-get tries accessing the network. This cloud-config snippet accomplishes that:

 write_files:
   - path: /etc/apt/apt.conf.d/99-wait-for-ipv6
     content: |
       APT::Update::Pre-Invoke { "/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6"; }

This is safe to leave in place after provisioning, because the delay will be negligible once IPv6 connectivity is established. It’s only during address configuration that it’ll block for a noticeable amount of time, but that’s what we want.

This solution isn’t entirely correct, though, because only apt-get is actually affected by it. Other services that start after the system is ostensibly “online” might only see IPv4 connectivity when they start. This seems acceptable for the moment.

Solution 2

The second solution is to simply invoke systemd-networkd-wait-online directly from a cloud-init bootcmd. Similar to the first solution, it’s not exactly correct, because the host has already reached network-online.target, but it blocks enough of cloud-init that package installation happens only after it completes. The cloud-config snippet for this is:

 bootcmd:
   - [/usr/lib/systemd/systemd-networkd-wait-online, --operational-state=routable, --any, --ipv6]

In either case, we still want to write out a snippet to configure systemd-networkd-wait-online to wait for IPv6 connectivity for future reboots. Even though cloud-init won’t necessarily run in those cases, and many cloud VMs never reboot at all, it does complete the solution. Additionally, it solves the problem for any derivative images that may be created based on the running VM’s state. (At least if we can be certain that instances of those derivative images will never run in an IPv4-only network!)

 write_files:
   - path: /etc/systemd/system/systemd-networkd-wait-online.service.d/99-ipv6-wait.conf
     content: |
       [Service]
       ExecStart=
       ExecStart=/lib/systemd/systemd-networkd-wait-online --any --operational-state=routable --ipv6

How to properly solve it

One possible improvement would be for cloud-init to support a configuration key allowing the admin to specify the required protocols. Based on the presence of this key, cloud-init could reconfigure systemd-networkd-wait-online.service accordingly. Alternatively it could set the appropriate RequiredFamilyForOnline= value in the generated .network file. cloud-init supports multiple network configuration backends, so each of those would need to be updated. If using the systemd-networkd configuration renderer, this should be straightforward, but Debian uses the netplan renderer, so that tool might also need to be taught to pass such a configuration along to systemd-networkd.
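As a sketch of what such generated configuration might look like, a systemd-networkd .network file can declare that only IPv6 counts toward the interface being "online". The file name and interface match below are assumptions for illustration, not anything cloud-init produces today:

```ini
# /etc/systemd/network/10-cloud.network (hypothetical file name)
[Match]
Name=en*

[Network]
DHCP=yes

# [Link] section of a .network file: only consider the link online
# once it has IPv6 connectivity in the required operational state.
[Link]
RequiredFamilyForOnline=ipv6
```

With this in place, the stock systemd-networkd-wait-online invocation would wait for IPv6 on matching interfaces without any drop-in for the service itself.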

29 August, 2025 04:38PM by Noah Meyerhans (frodo+blog@morgul.net)

August 27, 2025

Russell Coker

ZRAM and VMs

I’ve just started using zram for swap on VMs. The use of compression for swap in Linux apparently isn’t new; it’s been in the Linux kernel since version 3.2 (2012), but until recent years I hadn’t used it. When I started using Mobian (the Debian distribution for phones), zram was in the default setup; it basically works and I never needed to bother with it, which is exactly what you want from such a technology. After seeing its benefits in Mobian I started using it on my laptops, where it worked well.

Benefits of ZRAM

ZRAM means that instead of paging data out to storage, it is compressed into another part of RAM. That means no access to storage, which is a significant benefit if storage is slow (typical for phones) or if storage wear is a problem.

For servers you typically have SSDs that are fast and can endure significant write volumes. For example, the 120G SSDs referenced in my blog post about swap (not) breaking SSD [1] are running well in my parents’ PC; they outlasted all the other hardware connected to them, and 120G isn’t usable for anything more demanding than what my parents run nowadays. Those are Intel 120G 2.5″ DC grade SATA SSDs. For most servers ZRAM isn’t a good choice, as you can just keep doing IO on the SSDs for years.

A server that runs multiple VMs is a special case because you want to isolate the VMs from each other. Support for storage IO quotas in Linux isn’t easy to configure, while limiting the number of CPU cores is very easy. If a system or VM using ZRAM for swap starts paging excessively, the bottleneck will be CPU; this probably isn’t great on a phone with a slow CPU, but on a server-class CPU it will be less of a limit. Whether compression is slower or faster than SSD is a complex issue, but it will definitely be a limit only for that VM. When I set up a VM server I want to have some confidence that a DoS attack or configuration error on one VM isn’t going to destroy the performance of other VMs. If the VM server has 4 cores (the smallest VM server I run) and no VM has more than 2 cores, then I know that the system can still run adequately even if half the CPU performance is being wasted.

Some servers I run have storage limits that make saving the disk space used for swap worthwhile. For the servers I run in Hetzner (currently only one, but I have run up to 6 at various times in the past) the storage is often limited; Hetzner seems to typically provide storage that is 8 times the size of RAM, so if you have many VMs, each configured with the swap it might need in the expectation that usually at most one of them will actually be swapping, it can make a real difference to usable storage. 5% of storage used for swap files isn’t uncommon or unreasonable.
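For reference, on Debian the zram-tools package is a simple way to set this up. A minimal sketch of its /etc/default/zramswap configuration follows; the values are illustrative assumptions, not recommendations:

```ini
# /etc/default/zramswap -- read by zramswap.service from the zram-tools package
ALGO=zstd      # compression algorithm; lz4 trades compression ratio for speed
PERCENT=50     # size the zram device at 50% of RAM
PRIORITY=100   # prefer zram over any disk-based swap devices
```

After editing, restarting zramswap.service activates the new settings, and `zramctl` or `swapon --show` will report the resulting device.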

Big Servers

I am still considering the implications of zram on larger systems. If I have an ML server with 512G of RAM, would it make sense to use it? It seems plausible that a system might need 550G of RAM, and zram could make the difference between jobs being killed by OOM and jobs just completing. The CPU overhead of compression shouldn’t be an issue: when you have dozens of cores in the system, having one or two used for compression is no big deal. If a system is doing strictly ML work there will be a lot of data that can’t be compressed, so the question is how much of the memory is raw input data and the weights used for calculations, and how much is arrays of zeros and other things that are easy to compress.

With a big server, nothing less than 32G of swap will make much difference to the way things work, and if you have 32G of data being actively paged then even the fastest NVMe devices probably won’t be enough to give usable performance. As zram uses one “stream” per CPU core, if you have 44 cores that means 44 compression streams, which should handle greater throughput. I’ll write another blog post if I get a chance to test this.

27 August, 2025 05:19AM by etbe

hackergotchi for Matthew Palmer

Matthew Palmer

StrongBox: Simple, Safe Data Encryption for Rust

Some time ago, I wanted to encrypt a bunch of data in an application I was writing in Rust, mostly to be stored in a database, but also session cookies and sensitive configuration variables. Since Rust is widely known as a secure-yet-high-performance programming language, I was expecting that there would be a widely-used crate that gave me a secure, high-level interface to strong, safe cryptography. Imagine my surprise when I discovered that just… didn’t seem to exist.

Don’t get me wrong: Rust is replete with fast, secure, battle-tested cryptographic primitives. The RustCrypto group provides all manner of robust, widely-used crates for all manner of cryptography-related purposes. They’re the essential building blocks for practical cryptosystems, but using them directly in an application is somewhat akin to building a car from individual atoms of iron and carbon.

So I wrote my own high-level data encryption library, called it StrongBox, and have been happily encrypting and decrypting data ever since.

Cryptography So Simple Even I Can’t Get It Wrong

The core of StrongBox is the StrongBox trait, which has only two methods: encrypt and decrypt, each of which takes just two arguments. The first argument is the plaintext (for encrypt) or the ciphertext (for decrypt) to work on. The second argument is the encryption context, used as Additional Authenticated Data (AAD), an important part of many uses of encryption.

There’s essentially no configuration or parameters to get wrong. You can’t choose the encryption algorithm, or block cipher mode, and you don’t have to worry about generating a secure nonce. You create a StrongBox with a key, and then you call encrypt and decrypt. That’s it.
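As an illustration of the shape of that interface (a sketch only; the real crate’s signatures, types, and error handling may well differ), the two-method trait described above might look like this. The ReverseBox “cipher” below is a deliberately fake stand-in, invented here purely to show the round-trip and context-mismatch behavior; it performs no real encryption:

```rust
// Hypothetical sketch of the two-method interface described in the post.
// None of this is the actual StrongBox crate API.
trait StrongBox {
    fn encrypt(&self, plaintext: &[u8], context: &[u8]) -> Vec<u8>;
    fn decrypt(&self, ciphertext: &[u8], context: &[u8]) -> Option<Vec<u8>>;
}

// Toy stand-in "cipher": reverses bytes and tags the output with the
// context length, loosely imitating how AAD binds a ciphertext to its
// context. Real AEAD ciphers do this cryptographically, of course.
struct ReverseBox;

impl StrongBox for ReverseBox {
    fn encrypt(&self, plaintext: &[u8], context: &[u8]) -> Vec<u8> {
        let mut out = vec![context.len() as u8];
        out.extend(plaintext.iter().rev());
        out
    }

    fn decrypt(&self, ciphertext: &[u8], context: &[u8]) -> Option<Vec<u8>> {
        match ciphertext.split_first() {
            Some((&tag, body)) if tag as usize == context.len() => {
                Some(body.iter().rev().cloned().collect())
            }
            _ => None, // wrong context: treat as an authentication failure
        }
    }
}

fn main() {
    let sbox = ReverseBox;
    let ct = sbox.encrypt(b"session data", b"cookie");
    // Decrypting with the same context succeeds...
    assert_eq!(sbox.decrypt(&ct, b"cookie").unwrap(), b"session data");
    // ...while a different context is rejected.
    assert!(sbox.decrypt(&ct, b"db-column").is_none());
    println!("round trip ok");
}
```

The point of the shape is that the caller only ever supplies data and a context; keys, nonces, and algorithm choices live behind the constructor.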

Practical Cryptographic Affordances

Ok, ok… that’s not quite it. Because StrongBox is even easier to use than what I’ve described, thanks to the companion crate, StructBox.

When I started using StrongBox “in the wild”, it quickly became clear that what I almost always wanted to encrypt in my application wasn’t some ethereal “plaintext”. I wanted to encrypt things, specifically structs (and enums). So, through the magic of Rust derive macros, I built StructBox, which provides encrypt and decrypt operations on any Serde-able type. Given that Serde encoders can be a bit fiddly to use, it’s virtually easier to get an encrypted, serialized struct than it is to get a plaintext serialized struct.

Key Problems in Cryptography

The thing about cryptography is that it largely turns all data security problems into key management problems. All the fancy cryptographic wonkery is for naught if you don’t manage the encryption keys well.

So, most of the fancy business in StrongBox isn’t the encryption and decryption, but instead solving problems around key management.

Different Keys for Different Purposes

Using the same key for all of your cryptographic needs is generally considered a really bad idea. It opens up all manner of risks that are easily avoided if you use different keys for different things. However, having to maintain a big pile of different keys is a nightmare, so nobody’s going to do that.

Enter: key derivation. Create one safe, secure “root” key, and then use a key derivation function to spawn as many other keys as you need. Different keys for each database column, another one to encrypt cookies, and so on.

StrongBox supports this through the StemStrongBox type. You’ll typically start off by creating a StemStrongBox with the “root” key, and then derive whatever other StrongBoxes you need, for encrypting and decrypting different kinds of data.

You Spin Me Round…

Sometimes, keys need to be rotated. Whether that’s because you actually know (or even have reason to suspect) someone has gotten the key, or just because you’re being appropriately paranoid, sometimes key rotation has to happen.

As someone who has had to rotate keys in situations where such an eventuality was not planned for, I can say with some degree of authority: it absolutely sucks to have to do an emergency key rotation in a system that isn’t built to make that easy. That’s why StrongBox natively supports key rotation. Every StrongBox takes one encryption key, and an arbitrary number of decryption keys, and will automatically use the correct key to decrypt ciphertexts.

Will You Still Decrypt Me, Tomorrow?

In addition to “manual” key rotation, StrongBox also supports time-based key rotation with the RotatingStrongBox type. This comes in handy when you’re encrypting a lot of “ephemeral” data, like cookies (or server-side session data). It provides a way to automatically “expire” old data, and prevents attacks that become practical when large amounts of data are encrypted using a single key.

Invasion of the Invisible Salamanders!

I mostly mention this just because I love the name, but there is a kind of attack possible in common AEAD modes called the invisible salamanders attack. StrongBox implements mitigations against this, by committing to the key being used so that an attacker can’t forge a ciphertext that decrypts validly to different plaintexts when using different keys. This is why I love cryptography: everything sounds like absolute goddamn magic.

Call Me Crazy, Support Me Maybe?

If you’re coding in Rust (which you probably should be), encrypting your stored data (which you definitely should be), and StrongBox makes your life easier (which it really will), you can show your appreciation for my work by contributing to my open source code-fund. Simply by shouting me a refreshing beverage, you’ll be helping me out, and helping to grow the software commons. Alternately, if you’re looking for someone to Speak Rust to Computers on a professional basis, I’m available for contracts or full-time remote positions.

27 August, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

August 25, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

The comedy of computation, or, how I learned to stop worrying and love obsolescence

This post is an unpublished review for The comedy of computation, or, how I learned to stop worrying and love obsolescence

“The Comedy of Computation” is not an easy book to review. It is a much enjoyable book that analyzes several examples of how “being computational” has been approached across literary genres in the last century — how authors of stories, novels, theatrical plays and movies, focusing on comedic genres, have understood the role of the computer in defining human relations, reactions and even self-image.

Mangrum structures his work in six thematic chapters, each presenting a different angle on human society: how racial stereotypes have advanced in human imagination and perception of a future where we interact with mechanical or computational partners (from mechanical tools performing jobs that were identified with racial profiles to intelligent robots that threaten to control society); how computers (and people) can be seen as generic, interchangeable characters, often fueled by the tendency people exhibit to confer anthropomorphic qualities on inanimate objects; people’s desire to be seen as “truly authentic”, regardless of what that ultimately means; romantic involvement and romance-led stories (with the computer seen as a facilitator of human-to-human romances, a distraction from them, or itself a part of the couple); and the absurdity of anthropomorphization, of comparing fundamentally different things such as intelligence and speed at solving mathematical operations, as well as the absurdity presented blatantly as such by several techno-utopian visions.

But presenting this as a linear set of concepts does not do justice to the book. Throughout the sections of each chapter, a different work serves as the axis: novels and stories, Hollywood movies, Broadway plays, some covers for Time magazine, a couple of pieces presenting the would-be future, even a romantic comedy entirely written by “bots”. And for each of them, Benjamin Mangrum presents a very thorough analysis, drawing relations to and comparisons with contemporary works, but also with Shakespeare, classical Greek myths, and a very long etcætera. This book is hard to review because of the depth of the work the author did: reading it repeatedly made me look for other works, or at least longer references to them.

Still, despite being a work with such erudition, Mangrum’s text is easy and pleasant to read, without feeling heavy or written in an overly academic style. I very much enjoyed reading this book. It is certainly not a technical book about computers and society in any way; it is an exploration of human creativity and our understanding of the aspects the author has found as central to understanding the impact of computing on humankind.

However, there is one point I must mention before closing: I believe the editorial decision to present the work as a running text, with all the footnote material collected in a separate final chapter more than 50 pages long, detracts from the final result. Personally, I enjoy reading footnotes because they reveal the author’s thought processes, even when they stray from the central line of thought. Moreover, since my review copy was a PDF, I could not even keep said chapter open with one finger while bouncing back and forth. For all practical purposes, I missed out on the notes; now that I have finished reading and stumbled upon that chapter, I know I missed an important part of the enjoyment.

25 August, 2025 04:35PM

August 22, 2025

Russell Coker

Dell T320 H310 RAID and IT Mode

The Problem

Just over 2 years ago my Dell T320 server had a motherboard failure [1]. I recently bought another T320 that had been gutted (no drives, PSUs, or RAM) and put the bits from my one in it.

I installed Debian, but the resulting installation wouldn’t boot; I tried installing in both UEFI and BIOS modes with the same result. Then I realised that the disks I had installed were available even though I hadn’t gone through the RAID configuration (I usually make a separate RAID-0 for each disk to work best with BTRFS or ZFS). I tried changing the BIOS setting for SATA disks between “RAID” and “AHCI” modes, which didn’t change things, and realised that the BIOS setting in question probably applies to the SATA connector on the motherboard and that the RAID card was in “IT” mode, which means that each disk is seen separately.

If you are using ZFS or BTRFS you don’t want to use RAID-1, RAID-5, or RAID-6 on the hardware RAID controller; if there are different versions of the data on disks in the stripe, you want the filesystem to be able to work out which one is correct. To use “IT” mode you have to flash different, unsupported firmware onto the RAID controller, and then you either have to go to some extra effort to make it bootable or have a different device to boot from.

The Root Causes

Dell has no reason to support unusual firmware on their RAID controllers. Installing different firmware on a device that is designed for high availability is going to have some probability of data loss and perhaps more importantly for Dell some probability of customers returning hardware during the support period and acting innocent about why it doesn’t work. Dell has a great financial incentive to make it difficult to install Dell firmware on LSI cards from other vendors which have equivalent hardware as they don’t want customers to get all the benefits of iDRAC integration etc without paying the Dell price premium.

All the other vendors have similar financial incentives so there is no official documentation or support on converting between different firmware images. Dell’s support for upgrading the Dell version is pretty good, but it aborts if it sees something different.

The Attempts

I tried following the instructions in this document to flash back to Dell firmware [2]. This document is about the H310 RAID card in my Dell T320, AKA an “LSI SAS 9211-8i”. The sas2flash.efi program didn’t seem to do anything; it returned immediately and didn’t give an error message.

This page gives a start on how to get inside the Dell firmware package, but the procedure doesn’t work [3]. It doesn’t cover the case where sasdupie aborts with an error because it detects the current version as “00.00.00.00”, not something that the upgrade program is prepared to upgrade from. But it’s a place to start for someone who wants to try harder at this.

This forum post has some interesting information, I gave up before trying it, but it may be useful for someone else [4].

The Solution

Dell tower servers have as a standard feature an internal USB port for a boot device. So I created a boot image on a spare USB stick and installed it there and it then loads the kernel and mounts the filesystem from a SATA hard drive. Once I got that working everything was fine. The Debian/Trixie installer would probably have allowed me to install an EFI device on the internal USB stick as part of the install if I had known what was going to happen.

The system is now fully working and ready to sell. Now I just need to find someone who wants “IT” mode on the RAID controller and hopefully is willing to pay extra for it.

Whatever I sell the system for it seems unlikely to cover the hours I spent working on this. But I learned some interesting things about RAID firmware and hopefully this blog post will be useful to other people, even if only to discourage them from trying to change firmware.

22 August, 2025 03:57PM by etbe

August 21, 2025

Antoine Beaupré

Encrypting a Debian install with UKI

I originally set up a machine without any full disk encryption, then somehow regretted it quickly afterwards. My original reasoning was that this was a "play" machine so I wanted as few restrictions on accessing the machine as possible, which meant removing passwords, mostly.

I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites

So, how does one convert an existing install from plain text to full disk encryption? One way is to backup to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command, surely we can do this in place?

Having not set aside enough room for /boot, I briefly considered a "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else.

Here, I'm going to guide you through how I first converted from grub to systemd-boot then to UKI kernel, then re-encrypt my main partition.

Note that secureboot is disabled here, see further discussion below.

systemd-boot and Unified Kernel Image conversion

systemd folks have been developing UKI ("unified kernel image") to ship kernels. The way this works is that the kernel, initrd, and UEFI boot stub are combined into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert.

Debian has started some preliminary support for this. It's not default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case.

Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine.

Before you start, make sure secureboot is disabled; see the discussion below.

  1. install systemd tools:

    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

TODO: it doesn't look like this generates an initrd with dracut, do we care?

  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    

    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.

  4. Build the image:

    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:

    bootctl list
    

    Look for a Type #2 (.efi) entry for the kernel.

  6. Reboot:

    reboot
    

You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg).

By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:

systemctl reboot --boot-loader-menu=0

See the systemd-boot(7) manual for details on that.

I did not go through the secureboot process; presumably I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped onboard most computers.

In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer. Otherwise, follow the following guides:

Re-encrypting root filesystem

Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff.

We're using cryptsetup-reencrypt for this which, amazingly, supports re-encrypting devices on the fly. The trick is that it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for that, which is the trickiest bit.

This is a possibly destructive behavior. Be sure your backups are up to date, or be ready to lose all data on the device.

We assume 512 byte sectors here. Check your sector size with fdisk -l and adjust accordingly.

  1. Before you perform the procedure, make sure requirements are installed:

    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    

    Note that this requires network access, of course.

  2. Reboot in a live image, I like GRML but any Debian live image will work, possibly including the installer

  3. First, calculate how many sectors to free up for the LUKS header

    qalc> 32Mibyte / ( 512 byte )
    
      (32 mebibytes) / (512 bytes) = 65536
    
  4. Find the sector sizes of the Linux partitions:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    

    For example, here's the output with a /boot and / filesystem:

    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the result of step 3 from the sector count found in step 4:

    qalc> set precision 100
    qalc> 3904979087 - 65536
    

    Or, combining this step with the previous one in one line:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:

    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:

    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    

    Notice the trailing s here: it makes resize2fs interpret the number as a count of 512 byte sectors, as opposed to the default (4k blocks).

  8. Re-encrypt filesystem:

    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size=32M
    

    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to show you how.

    This will show progress information like:

    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    

    Wait until the ETA has passed.

  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):

    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    

    If this fails, now is the time to consider restoring from backups.

  10. Enter the chroot

    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    

    Pro tip: this can be done in one step in GRML with:

    grml-chroot /mnt bash
    
  11. Generate a crypttab:

    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust root filesystem in /etc/fstab, make sure you have a line like this:

    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    

    If you were already using a UUID entry for this, there's nothing to change!

  13. Configure the root filesystem in the initrd:

    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    
  14. Regenerate UKI:

    dpkg-reconfigure linux-image-$(uname -r)
    

    Be careful here! systemd-boot inherits the command line from the system where it is generated, so this will possibly feature some unsupported commands from your boot environment. In my case GRML had a couple of those, which broke the boot. That said, it's still possible to work around this issue by tweaking the arguments at boot time.

  15. Exit chroot and reboot

    exit
    reboot
    
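The whole size calculation can also be sanity-checked in one short shell snippet before touching the disk. This is just a sketch of the arithmetic from steps 3 to 5, with the example partition size from above standing in for your own fdisk output:

```shell
# Sanity-check the resize arithmetic from steps 3-5 before running
# resize2fs. PART_SECTORS is the example value from above; substitute
# the $4 column of your own `fdisk -l` output.
SECTOR_SIZE=512                        # bytes; check with fdisk -l
HEADER_BYTES=$(( 32 * 1024 * 1024 ))   # 32 MiB LUKS header
HEADER_SECTORS=$(( HEADER_BYTES / SECTOR_SIZE ))
PART_SECTORS=3904979087

NEW_SECTORS=$(( PART_SECTORS - HEADER_SECTORS ))
echo "header sectors:   $HEADER_SECTORS"
echo "resize2fs target: ${NEW_SECTORS}s"
```

The printed target, trailing s included, is exactly what step 7 passes to resize2fs.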

Some of the ideas in this section were taken from this guide, but the procedure was mostly rewritten to simplify the work. My guide also avoids the grub hacks or a specific initrd system (as the guide uses initramfs-tools and grub, while I, above, switched to dracut and systemd-boot). RHEL also has a similar guide, perhaps even better.

Somehow I have made this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volume/volume groups), but if you have LVM, you need to tweak this to also resize the LVM bits. The RHEL guide has some information about this.
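For the LVM case, the shrink has to cascade through each layer before the re-encryption. Here is a rough, untested sketch of the ordering only, printed as a dry run; every name (vg0, root, nvme0n1p2) and every size is a placeholder, and the RHEL guide mentioned above should be consulted for the real numbers:

```shell
# Dry-run sketch of the LVM variant: shrink from the top down so that
# 32M is free at the end of the partition for the LUKS header, then
# re-encrypt. This only *prints* the commands; drop the `run` wrapper
# to actually execute them (destructive!).
DEV=/dev/nvme0n1p2   # placeholder: partition holding the PV
LV=/dev/vg0/root     # placeholder: logical volume with the root fs

PLAN=""
run() { PLAN="$PLAN$*; "; echo "would run: $*"; }

run e2fsck -f "$LV"
run resize2fs "$LV" 200G                          # well under the final size
run lvreduce -L 201G "$LV"                        # LV a bit larger than the fs
run pvresize --setphysicalvolumesize 202G "$DEV"  # PV a bit larger than the LV
run cryptsetup reencrypt --encrypt "$DEV" --reduce-device-size=32M
# afterwards: open the LUKS device, pvresize it, then lvextend and
# resize2fs to grow everything back to full size
```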

21 August, 2025 07:16PM

hackergotchi for Matthew Palmer

Matthew Palmer

Progress on my open source funding experiment

When I recently announced that I was starting an open source crowd-funding experiment, I wasn’t sure what would happen. Perhaps there’d be radio silence, or a huge out-pouring of interest from people who wanted to see more open source code in the world. What’s happened so far has been… interesting.

I chose to focus on action-validator because it’s got a number of open feature requests, and it solves a common problem that people have. The thing is, I’ve developed and released a lot of open source over the multiple decades I’ve been noodling around with computers. Much of that has been of use to many people, the overwhelming majority of whom I will never, ever meet, hear from, or even know that I’ve helped them out.

One person, however, I do know about – a generous soul named Andy, who (as far as I know) doesn’t use action-validator, but who does use another tool I wrote some years ago: lvmsync. It’s somewhat niche, essentially “rsync for LVM-backed block devices”, so I’m slightly surprised that it’s my most-starred repository, at nearly 400(!) stars. Andy is one of the people who finds it useful, and he was kind enough to reach out and offer a contribution in thanks for lvmsync existing.

In the spirit of my open source code-fund, I applied Andy’s contribution to the “general” pool, and as a result have just released action-validator v0.8.0, which supports a new --rootdir command-line option, fixing action-validator issue #54. Everyone who uses --rootdir in their action-validator runs has Andy to thank, and I thank him too.

This is, of course, still early days in my experiment. You can be like Andy, and make the open source world a better place, by contributing to my code-fund, and you can get your name up in lights, too. Whether you’re an action-validator user, have gotten utility from any of the other things I’ve written, or just want to see more open source code in the world, your contribution is greatly appreciated.

21 August, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

August 20, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

x13binary 1.1.61.1 on CRAN: Micro Fix

The x13binary team is happy to share the availability of Release 1.1.61.1 of the x13binary package providing the X-13ARIMA-SEATS program by the US Census Bureau which arrived on CRAN earlier today.

This release responds to a recent change in gfortran version 15 which now picks up a missing comma in a Fortran format string for printing output. The change is literally a one-char addition which we also reported upstream. At the same time this release also updates one README.md URL to an archive.org URL of an apparently deleted reference. There is now also an updated upstream release 1.1-62 which we should package next.

Courtesy of my CRANberries, there is also a diffstat report for this release showing changes to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

20 August, 2025 09:51PM

Sven Hoexter

Istio: Connect via a VirtualService to External IP Addresses

Rant - I've a theory about istio: It feels like a software designed by people who hate the IT industry and wanted revenge. So they wrote a software with so many odd points of traffic interception (e.g. SNI based traffic re-routing) that it's completely impossible to debug. If you roll that out into an average company you completely halt the IT operations for something like a year.

On topic: I've two endpoints (IP addresses serving HTTPS on a non-standard port) outside of kubernetes, and I need some rudimentary balancing of traffic. Since istio is already here one can leverage that, combining the resource kinds ServiceEntry, DestinationRule and VirtualService to publish a service name within the istio mesh. Since we do not have host names and DNS for those endpoint IP addresses we need to rely on istio itself to intercept the DNS traffic and deliver a virtual IP address to access the service. The sample given here leverages the exportTo configuration to make the service name only available in the same namespace. If you need broader access remove or adjust that. As usual in kubernetes you can resolve the name also as FQDN, e.g. acme-service.mynamespace.svc.cluster.local.

---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  ports:
    - number: 12345
      name: acmeglue
      protocol: HTTPS
  resolution: STATIC
  location: MESH_EXTERNAL
  # limit the availability to the namespace this resource is applied to
  # if you need cross namespace access remove all the `exportTo`s in here
  exportTo:
    - "."
  # use `endpoints:` in this setup, `addresses:` did not work
  endpoints:
    # region1
    - address: 192.168.0.1
      ports:
        acmeglue: 12345
    # region2
    - address: 10.60.48.50
      ports:
        acmeglue: 12345
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: acme-service
spec:
  host: acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
    connectionPool:
      tcp:
        tcpKeepalive:
          # We have GCP service attachments involved with a 20m idle timeout
          # https://cloud.google.com/vpc/docs/about-vpc-hosted-services#nat-subnets-other
          time: 600s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: acme-service
spec:
  hosts:
    - acme-service
  # limit the availability to the namespace this resource is applied to
  exportTo:
    - "."
  http:
  - route:
    - destination:
        host: acme-service
    retries:
      attempts: 2
      perTryTimeout: 2s
      retryOn: connect-failure,5xx
---
# Demo Deployment, istio configuration is the important part
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foobar
  labels:
    app: foobar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foobar
  template:
    metadata:
      labels:
        app: foobar
        # enable istio sidecar
        sidecar.istio.io/inject: "true"
      annotations:
        # Enable DNS capture and interception, IP resolved will be in 240.240/16
        # If you use network policies you've to allow egress to this range.
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Now we can exec into the deployed pod, do something like curl -vk https://acme-service:12345, and it will talk to one of the endpoints defined in the ServiceEntry via an IP address out of the 240.240/16 Class E network.
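One way to confirm from inside the pod that the DNS capture actually kicked in is to check that the resolved address falls in that Class E range. A small sketch; the helper function is mine, and the getent invocation in the comment is only a suggestion:

```shell
# Classify an IP address: istio's DNS proxy hands out virtual IPs from
# 240.240.0.0/16, so anything else means the capture did not happen.
# Inside the pod you could feed it e.g.:
#   in_istio_range "$(getent hosts acme-service | awk '{print $1}')"
in_istio_range() {
  case "$1" in
    240.240.*) echo "virtual IP from istio DNS proxy" ;;
    *)         echo "NOT an istio virtual IP" ;;
  esac
}

in_istio_range 240.240.0.5    # what you want to see
in_istio_range 192.168.0.1    # a direct endpoint IP: DNS capture missing
```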

Documentation
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution
https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings-SimpleLB
https://istio.io/latest/docs/ops/configuration/traffic-management/dns-proxy/#sidecar-mode

20 August, 2025 04:07PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.6.3-1 on CRAN: Minor Upstream Bug Fixes

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1268 other packages on CRAN, downloaded 41 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 642 times according to Google Scholar.

Conrad made three minor bug fix releases since the 14.6.0 release last month. We need to pace releases at CRAN so we do not immediately upload there on each upstream release—and then CRAN also had the usual (and well-deserved) summer rest leading to a slight delay relative to the last upstream. The minor changes in the three releases are summarized below. All our releases are always available via the GitHub repo and hence also via r-universe, and still rigorously tested via our own reverse-dependency checks. We also note that the package once again passed with flying colours and no human intervention which remains impressive given the over 1200 reverse dependencies.

Changes in RcppArmadillo version 14.6.3-1 (2025-08-14)

  • Upgraded to Armadillo release 14.6.3 (Caffe Mocha)

    • Fix OpenMP related crashes in Cube::slice() on Arm64 CPUs

Changes in RcppArmadillo version 14.6.2-1 (2025-08-08) (GitHub Only)

  • Upgraded to Armadillo release 14.6.2 (Caffe Mocha)

    • Fix for corner-case speed regression in sum()

    • Better handling of OpenMP in omit_nan() and omit_nonfinite()

Changes in RcppArmadillo version 14.6.1-1 (2025-07-21) (GitHub Only)

  • Upgraded to Armadillo release 14.6.1 (Caffe Mocha)

    • Fix for speed regression in mean()

    • Fix for detection of compiler configuration

    • Use of pow optimization now optional

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

20 August, 2025 02:31PM

Reproducible Builds

Reproducible Builds summit 2025 to take place in Vienna

We are extremely pleased to announce the upcoming Reproducible Builds summit, which will take place from October 28th—30th 2025 in the historic city of Vienna, Austria.

This year, we are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Hamburg (2023—2024), Venice (2022), Marrakesh (2019), Paris (2018), Berlin (2017), Berlin (2016) and Athens (2015).

If you’re excited about joining us this year, please make sure to read the event page which has more details about the event and location. As in previous years, we will be sending invitations to all those who attended our previous summit events or expressed interest to do so. However, even if you do not receive a personal invitation, please do email the organizers and we will find a way to accommodate you.

About the event

The Reproducible Builds Summit is a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

With your help, we will bring this (and several other areas) to life:


The main seminar room.

Schedule

Although the exact content of the meeting will be shaped by the participants, the main goals will include:

  • Update & exchange about the status of reproducible builds in various projects.
  • Improve collaboration both between and inside projects.
  • Expand the scope and reach of reproducible builds to more projects.
  • Work together and hack on solutions.
  • Establish space for more strategic and long-term thinking than is possible in virtual channels.
  • Brainstorm designs on tools enabling users to get the most benefits from reproducible builds.
  • Discuss how reproducible builds will be usable and meaningful to users and developers alike.

Logs and minutes will be published after the meeting.

Location & date

Registration instructions

Please reach out if you’d like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

We look forward to what we anticipate to be yet another extraordinary event!

20 August, 2025 12:00AM

August 19, 2025

Russell Coker

Colmi P80 SmartWatch First Look

I just bought a Colmi P80 SmartWatch from Aliexpress for $26.11 based on this blog post reviewing it [1]. The main things I was after in this was a larger higher resolution screen because my vision has apparently deteriorated during the time I’ve been wearing a Pinetime [2] and I now can’t read messages on it when not wearing my reading glasses.

The watch hardware is quite OK. It has a larger and higher resolution screen and looks good. The review said that GadgetBridge (the FOSS SmartWatch software in the F-Droid repository) connected when told that the watch was a P79, and in a recent release got support for sending notifications. In my tests with GadgetBridge it doesn't set the time, can't seem to send notifications, can't read the battery level, and seems not to do anything other than just say "connected". So I installed the proprietary app; as an aside, it's a neat feature to have the watch display a QR code for installing the app, and maybe InfiniTime should have a similar QR code for getting GadgetBridge from the F-Droid repository.

The proprietary app is quite OK for the basic functionality, and a less technical relative who is using one is happy. For my use the proprietary app is utterly broken. One of my main uses is to get notifications of Jabber messages from the Conversations app (that's in F-Droid). I have Conversations configured to always have a notification of how many accounts are connected, which prevents Android from killing it; with GadgetBridge that notification isn't reported but the actual message contents are (I don't know how/why that happens), yet with the Colmi app I get repeated notification messages on the watch about the accounts being connected. Also the proprietary app has on/off settings for messages to go to the watch for a hard coded list of 16 common apps and an "Others" setting for the rest. GadgetBridge lists the applications that are actually installed so I can configure it not to notify me about Reddit, connecting to my car audio, and many other less common notifications. I prefer the GadgetBridge option to have an allow-list for apps that I want notifications from, but it also has a configuration option to use a deny list, so you could have everything other than the app that gives lots of low value notifications. The proprietary app has a wide range of watch faces that it can send to the watch, which is a nice feature that would be good to have in InfiniTime and GadgetBridge.

The P80 doesn’t display a code on screen when it is paired via Bluetooth so if you have multiple smart watches then you are at risk of connecting to the wrong one and there doesn’t seem to be anything stopping a hostile party from connecting to one. Note that hostile parties are not restricted to the normal maximum transmission power and can use a high gain antenna for reception so they can connect from longer distances than normal Bluetooth devices.

Conclusion

The Colmi P80 hardware is quite decent, the only downside is that the vibration has an annoying “tinny” feel. Strangely it has a rotation sensor for a rotating button (similar to analogue watches) but doesn’t seem to have a use for it as the touch screen does everything.

The watch firmware is quite OK (not great but adequate), but the lack of a password for pairing is a significant shortcoming.

The Colmi Android app has some serious issues that make it unusable for what I do and the release version of GadgetBridge doesn’t work with it, so I have gone back to the PineTime for actual use.

The PineTime cost twice as much, has less features (no sensor for O2 level in blood), but seems more solidly constructed.

I plan to continue using the P80 with GadgetBridge and Debian based SmartWatch software to help develop the Debian Mobile project. I expect that at some future time GadgetBridge and the programs written for non-Android Linux distributions will support the P80 and I will transition to it. I am confident that it will work well for me at some future time and that I will get $26.11 of value from it. At this time I recommend that people who do the sort of things I do get one of each and that less technical people get a Colmi P80.

19 August, 2025 10:31AM by etbe

August 18, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Phil Wyett evidence & Debian Zizian plagiarism, modern slavery tendencies

A new Debian crisis emerged on Sunday of the Debian Day weekend with an email from Phil Wyett (kathenas). He chose the subject line Ending Debian Contribution. The snobby people insulted him and he quit.

The email only received one reply. Nobody thanked Phil for his enormous work over an extended period of years. Once again, they chewed somebody up, spat him out and abandoned him at the side of the road.

The email was sent on 17 August in the UK time zone, that is, the anniversary of the book Animal Farm. Phil is in the UK.

When Phil applied to be a Debian Developer in December 2023, he received this very positive reference from Gianfranco Costamagna (locutusofborg):

For nm.debian.org, at 2023-07-24:
I support Philip Wyett 's request to become a Debian Developer, uploading.
I have worked with Philip Wyett on X and Y
for 2 years, and I consider them as having sufficient technical competence.

I don't usually advocate when I "just" sponsor less than 20 packages for a person.
I decided to gran him an exception, because:
1) he has strong motivation in keeping filezilla/libfilezilla in a good shape in Debian.
2) he is fast in acting and answer to questions/emails
3) he knows how ABI/API works, and he does "transition" the two packages successfully since
when he started working on them (its a 2 packages transition, but he knows his stuff!)
4) libfilezilla bumps ABI on each release, so being DM for him is probably painful.
5) he did work on other packages such as pingus, rednotebook.
6) I think having them as DD is a great value for Debian project.

I have personally worked with Philip Wyett 
(key 70A0AC45AC779EFE84F63AED724AA9B52F024C8B) for 2 years, and I know Philip Wyett
can be trusted to be a full member of Debian, and have unsupervised, unrestricted upload rights, right now.

Addendum:
For nm.debian.org, at 2024-01-22:
After the closing of the previous application, I asked Philip to do *many* Debian related things.
It turned out that the application was a little bit premature, Debian is not just
about having technical skills (and he strongly has them), but
also about helping newcomers, understanding how the community is built and how to interact each
other without loosing the objective that has to be to bring an OS to end users.
For this reason, after sponsoring a ton of packages reviewed by him on mentors.debian.org, and giving
them DM rights for the packages he maintains, I'm more confident about his ability to become
a full unsupervised DD.

In December 2024, a whole group of people all wrote positive references for Phil.

His Debian wiki profile, now deleted, included the comments:

Most of the money you see in and around Free / Open Source Software, Hardware and other projects is in the hands of large groups and self styled organisations, with little or none reaching the actually contributors/maintainer/developers of your favourite projects who actually do the core work.

Consider donating to projects and persons directly.

That must have annoyed the people who spend money without doing any work.

Quoting Phil's email from 17 August 2025:

In my opinion I have been messed about for over a year, discriminated against by one or more of the Debian Application Managers who overlooked my DD application for over 7 months whilst moving forward with other applications that were made after my own, one done in 10 days.

Looking through the history of mailing lists and the wiki (links below), we can see hundreds of messages from Phil as he works on his own packages and helps other people begin packaging with Debian. It looks like he has done more than enough work to have rights as a Debian Developer under copyright law.

The Debian Project Leader (DPL), Andreas Tille, obtained a PhD in Physics and works at the Robert Koch-Institute in Wernigerode, Germany. Surely he understands that in any publication, whether it is a scientific research paper or a piece of software, the names of all the authors/developers have to be listed with an equal status. Not including somebody's name is a close cousin of plagiarism. I've raised concerns about plagiarism in Debian before.

My next communication was from the DAMs closing my application and stating I could never apply again. The DPL has agreed with this decision.

If the Debian Account Managers were never going to respect his copyright status anyway, why did they encourage him to keep doing all this work over the years?

Their willingness to continue accepting his contributions and collaboration without giving him equal recognition reeks of exploitation. Exploitation is an element of any modern slavery prosecution. I have published a series of blogs looking at how exploitative situations arise in open source software.

There are two crucial pieces of evidence from the history of Enrico Zini, one of the Debian Account Managers:

First, in 2009, he asked Debian election candidates to each name five people they would like to censor and humiliate.

Please write a list of 5 Debian Developers you would like to kick out of the project.

Second, in 2018, Zini went to DebConf18 and gave a talk called "Multiple people" where he talks about having relationships with other men.

During the last year, I have been thinking passionately about things such as diversity, gender identity, sexual orientation, neurodiversity, and preserving identity in a group.

The last phrase, "preserving identity in a group", reveals a lot. Zini and other members of the group are screening Debian collaborators based on a very distorted worldview that only seems to tolerate the people they would be willing to sleep with.

Phil Wyett, being a former soldier and engineer, may have given the impression that he is not going to be an easy target for the social engineering culture and Code of Conduct gaslighting that has infected Debian in recent times.

Earlier this year there was significant publicity about the Zizian group. It is interesting to note that one of the Zizian victims was a border guard while Phil Wyett is a former soldier from the British Army's Royal Engineers.

David Maland, Phil Wyett, Debian, Zizian

 

US Border Patrol Agent David Maland, a US Air Force veteran, is pictured with a service dog. Phil Wyett, a British Army vet, is pictured with a regular house cat. Apart from the cat, they served their respective communities in a multitude of different ways.

Another Zizian victim was their landlord, Curtis Lind. The trial is ongoing. Mr Lind, pictured with a horse, lost one eye:

Curtis Lind, horse, Zizian

 

People who spend time developing software have rights under copyright law just as landlords have rights under real estate law.

The pattern of vicious attacks on authors, developers and landlords all suggest a significant disregard for the law and a sense of entitlement from people who operate in a pack, like dogs, barking at the rest of us.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Links, history of Phil Wyett's contributions

This is not a complete list.

Phil and Jay's web site.

Phil's blog.

Instagram.

Youtube.

Github

Gitlab

BuyMeACoffee.com profile and feedback from supporters

https://rpm.kathenas.org/ - Phil's RPM repository.

Phil's Ubuntu wiki profile has been edited between 2007 and 2009.

GNOME wiki profile and a snapshot in the Wayback machine tells us the page was first edited in 2014.

Phil's blog posts from 2017 where he mainly writes about CentOS.

CentOS wiki profile and a snapshot in the Wayback machine tells us the page was last edited in 2019.

Debian: kathenas-guest@alioth history from 2017 to 2019

Debian: kathenas history from 2017 to 2025

Debian wiki: history and old versions of the /PhilWyett page, which has been deleted. The page was first created in 2017 and maintained regularly until very recently.

Debian wiki: history and old versions of the /PhilWyett/DebianMentoring page, which has been deleted. The page was created in February 2025 and contains guidance for people Phil has been mentoring with the packaging process.

34 packages that Phil is maintaining (Wayback machine snapshot)

Debian new maintainer report - multiple attempts to be registered between February 2023 and December 2024

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Phil Wyett, kathenas, Debian, packages

 

Please see the chronological history of how the Debian harassment and abuse culture evolved.

18 August, 2025 09:00PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Amiga redux

Matthew blogged about his Amiga CDTV project, a truly unique Amiga hack which also manages to be a novel Doom project (no mean feat: it's a crowded space)

This re-awakened my dormant wish to muck around with my childhood Amiga some more. When I last wrote about it (four years ago ☹) I'd upgraded the disk drive emulator with an OLED display and rotary encoder. I'd forgotten to mention I'd also sourced a modern trapdoor RAM expansion which adds 2MiB of RAM. The Amiga can only see 1.5MiB1 of it at the moment; I need to perform a mainboard modification to access the final 512kiB2, which means some soldering.

[Amiga Test Kit](https://github.com/keirf/Amiga-Stuff) showing 2MiB RAM


What I had planned to do back then: replace the switch in the left button of the original mouse, which was misbehaving; perform the aforementioned mainboard mod; upgrade the floppy emulator wiring to a ribbon cable with plug-and-socket, for easier removal; and fit an RTC chip to the RAM expansion board to get clock support in the OS.

However, much of that might be moot because of two other mods I am considering.

PiStorm

I've re-considered the PiStorm accelerator mentioned in Matt's blog.

Four years ago, I'd passed over it, because it required you to run Linux on a Raspberry Pi, and then an m68k emulator as a user-space process under Linux. I didn't want to administer another Linux system, and I'm generally uncomfortable about using a regular Linux distribution on SD storage over the long term.

However in the intervening years Emu68, a bare-metal m68k emulator has risen to prominence. You boot the Pi straight into Emu68 without Linux in the middle. For some reason that's a lot more compelling to me.

The PiStorm enormously expands the RAM visible to the Amiga. There would be no point in doing the mainboard mod to add 512k (and I don't know how that would interact with the PiStorm). It also can provide virtual hard disk devices to the Amiga (backed by files on the SD card), meaning the floppy emulator would be superfluous.

Denise Mainboard

I've just learned about a truly incredible project: the Denise Mini-ITX Amiga mainboard. It fits into a Mini-ITX case (I have a suitable one spare already). Some assembly required. You move the chips from the original Amiga over to the Denise mainboard. It's compatible with the PiStorm (or vice-versa). It supports PC-style PS/2 keyboards (I have a Model M in the loft, thanks again Simon) and has a bunch of other modern conveniences: onboard RTC; mini-ITX power (I'll need something like a picoPSU too).

It wouldn't support my trapdoor RAM card but it takes a 72-pin DIMM which can supply 2MiB of Chip RAM, and the PiStorm can do the rest (they're compatible3).

No stock at the moment but if I could get my hands on this, I could build something that could permanently live on my desk.


  1. the Boobip board's 1.5MiB is "chip" RAM: accessible to the other chips on the mainboard, with access mediated by the AGNUS chip.
  2. the final 512kiB is "Fast" RAM: only accessible to the CPU, not mediated via Agnus.
  3. confirmation

18 August, 2025 05:52AM

August 17, 2025

C.J. Adams-Collier

The Very Model of a Patriot Online

It appears that the fragile masculinity tech evangelists have identified Debian as a community with boundaries which exclude them from abusing its members and they’re so angry about it! In response to posts such as this, and inspired by Dr. Conway’s piece, I’ve composed a poem which, hopefully, correctly addresses the feelings of that crowd.


The Very Model of a Patriot Online

I am the very model of a modern patriot online,
My keyboard is my rifle and my noble cause is so divine.
I didn't learn my knowledge in a dusty college lecture hall,
But from the chans where bitter anonymity enthralls us all.
I spend a dozen hours every day upon my sacred quest,
To put the globo-homo narrative completely to the test.
My arguments are peer-reviewed by fellas in the comments section,
Which proves my every thesis is the model of complete perfection.
I’m steeped in righteous anger that the libs call 'white fragility,'
For mocking their new pronouns and their lack of masculinity.
I’m master of the epic troll, the comeback, and the searing snark,
A digital guerrilla who is fighting battles in the dark.

I know the secret symbols and the dog-whistles historical,
From Pepe the Frog to ‘Let’s Go Brandon,’ in order categorical;
In short, for fighting culture wars with rhetoric rhetorical,
I am the very model of a patriot polemical.

***

I stand for true expression, for the comics and the edgy clown,
Whose satire is too based for all the fragile folks in town.
They say my speech is 'violence' while my spirit they are trampling,
The way they try to silence me is really quite a startling sampling
Of 1984, which I've not read but thoroughly understand,
Is all about the tyranny that's gripping this once-blessed land.
My humor is a weapon, it’s a razor-bladed, sharp critique,
(Though sensitive elites will call my masterpiece a form of ‘hate speech’).
They cannot comprehend my need for freedom from all consequence,
They call it 'hate,' I call it 'jokes,' they just don't have a lick of sense.
So when they call me ‘bigot’ for the spicy memes I post pro bono,
I tell them their the ones who're cancelled, I'm the victim here, you know!

Then I can write a screed against the globalist cabal, you see,
And tell you every detail of their vile conspiracy.
In short, when I use logic that is flexible and personal,
I am the very model of a patriot controversial.

***

I'm very well acquainted with the scientific method, too,
It's watching lengthy YouTube vids until my face is turning blue.
I trust the heartfelt testimony of a tearful, blonde ex-nurse,
But what a paid fact-checker says has no effect and is perverse.
A PhD is proof that you've been brainwashed by the leftist mob,
While my own research on a meme is how I really do my job.
I know that masks will suffocate and vaccines are a devil's brew,
I learned it from a podcast host who used to sell brain-boosting goo.
He scorns the lamestream media, the CNNs and all the rest,
Whose biased reporting I've put fully to a rigorous test
By only reading headlines and confirming what I already knew,
Then posting my analysis for other patriots to view.

With every "study" that they cite from sources I can't stand to hear,
My own profound conclusions become ever more precisely clear.
In short, when I've debunked the experts with a confident "Says who?!",
I am the very model of a researcher who sees right through you.

***

But all these culture wars are just a sleight-of-hand, a clever feint,
To hide the stolen ballots and to cover up the moral taint
Of D.C. pizza parlors and of shipping crates from Wayfair, it’s true,
It's all connected in a plot against the likes of me and you!
I've analyzed the satellite photography and watermarks,
I understand the secret drops, the cryptic Qs, the coded sparks.
The “habbening” is coming, friends, just give it two more weeks or three,
When all the traitors face the trials for their wicked treachery.
They say that nothing happened and the dates have all gone past, you see,
But that's just disinformation from the globalist enemy!
Their moving goalposts constantly, a tactic that is plain to see,
To wear us down and make us doubt the coming, final victory!

My mind can see the patterns that a simple sheep could never find,
The hidden puppet-masters who are poisoning our heart and mind.
In short, when I link drag queens to the price of gas and child-trafficking,
I am the very model of a patriot whose brain is quickening!

***

My pickup truck's a testament to everything that I hold dear,
With vinyl decals saying things the liberals all hate and fear.
The Gadsden flag is waving next to one that's blue and starkly thin,
To show my deep respect for law, except the feds who're steeped in sin.
There's Punisher and Molon Labe, so that everybody knows
I'm not someone to trifle with when push to final shoving goes.
I've got my tactical assault gear sitting ready in the den,
Awaiting for the signal to restore our land with my fellow men.
I practice clearing rooms at home when my mom goes out to the store,
A modern Minuteman who's ready for a civil war.
The neighbors give me funny looks, I see them whisper and take note,
They'll see what's what when I'm the one who's guarding checkpoints by their throat.

I am a peaceful man, of course, but I am also pre-prepared,
To neutralize the threats of which the average citizen's unscared.
In short, when my whole identity's a brand of tactical accessory,
You'll say a better warrior has never graced a Cabela's registry.

***

They say I have to tolerate a man who thinks he is a dame,
While feminists and immigrants are putting out my vital flame!
There taking all the jobs from us and giving them to folks who kneel,
And "woke HR" says my best jokes are things I'm not allowed to feel!
An Alpha Male is what I am, a lion, though I'm in this cubicle,
My life's frustrations can be traced to policies Talmudical.
They lecture me on privilege, I, who have to pay my bills and rent!
While they give handouts to the lazy, worthless, and incompetent!
My grandad fought the Nazis! Now I have to press a key for ‘one’
To get a call-rep I can't understand beneath the blazing sun
Of global, corporate tyranny that's crushing out the very soul
Of men like me, who've lost their rightful, natural, and just control!

So yes, I am resentful! And I'm angry! And I'm right to be!
They've stolen all my heritage and my masculinity!
In short, when my own failures are somebody else's evil plot,
I am the very model of the truest patriot we've got!

***

There putting chips inside of you! Their spraying things up in the sky!
They want to make you EAT THE BUGS and watch your very spirit die!
The towers for the 5G are a mind-control delivery tool!
To keep you docile while the children suffer in a grooming school!
The WEF, and Gates, and Soros have a plan they call the 'Great Reset,'
You'll own no property and you'll be happy, or you'll be in debt
To social credit overlords who'll track your every single deed!
There sterilizing you with plastics that they've hidden in the feed!
The world is flat! The moon is fake! The dinosaurs were just a lie!
And every major tragedy's a hoax with actors paid to cry!
I'M NOT INSANE! I SEE THE TRUTH! MY EYES ARE OPEN! CAN'T YOU SEE?!
YOU'RE ALL ASLEEP! YOU'RE COWARDS! YOU'RE AFRAID OF BEING TRULY FREE!

My heart is beating faster now, my breath is short, my vision's blurred,
From all the shocking truth that's in each single, solitary word!
I've sacrificed my life and friends to bring this message to the light, so...
You'd better listen to me now with all your concentrated might, ho!

***

For my heroic struggle, though it's cosmic and it's biblical,
Is waged inside the comments of a post that's algorithm-ical.
And still for all my knowledge that's both tactical and practical,
My mom just wants the rent I owe and says I'm being dramatical.

17 August, 2025 09:21AM by C.J. Collier

August 16, 2025

Bits from Debian

Debian turns 32!

32nd Debian Day artwork by Daniel Lenharo

On August 16, 1993, Ian Murdock announced the Debian Project to the world. Three decades (and a bit) later, Debian is still going strong, built by a worldwide community of developers, contributors, and users who believe in a free, universal operating system.

Over the years, Debian has powered servers, desktops, tiny embedded devices, and huge supercomputers. We have gathered at DebConfs, squashed countless bugs, shared late-night hacking sessions, and helped keep millions of systems secure.

Debian Day is a great excuse to get together, whether it is a local meetup, an online event, a bug squashing party, a team sprint or just coffee with fellow Debianites. Check out the Debian Day wiki to see if there is a celebration near you or to add your own.

Here is to 32 years of collaboration, code, and community, and to all the amazing people who make Debian what it is.

Happy Debian Day!

16 August, 2025 09:10AM by Debian Publicity Team

Birger Schacht

Updates and additions in Debian 13 Trixie

Last week Debian 13 (Trixie) was released and there have been some updates and additions in the packages that I maintain that I wanted to write about. I don't think they are worthy of the release notes, but I still wanted to list some of the changes and some of the new packages.

sway

Sway, the tiling Wayland compositor, was at version 1.7 in Bookworm. It was updated to version 1.10 (and 1.11 is already in experimental and waiting for an upload to unstable). This new version of sway brings, among a lot of other features, updated support for touchpad gestures and support for the ext-session-lock-v1 protocol, which allows for more robust and secure screen locking. The configuration snippet that activates the default sway background is now shipped in the sway-backgrounds package instead of being part of the sway package itself.

The default menu application was changed from dmenu to wmenu. wmenu is a Wayland-native alternative to dmenu which I packaged, and it is now recommended by sway.

There are some small helper tools for sway that were updated: swaybg was bumped from 1.2.0 to 1.2.1, swaylock was bumped from 1.7.2 to 1.8.2.

The grimshot script, which is a script for making screenshots, was part of sway's contrib folder for a long time (but was shipped as a separate binary package). It was removed from sway and is now part of the sway-contrib project. There are some other useful utilities in this source package that I might package in the future.

slurp, which is used by grimshot to select a region, was updated from version 1.4 to version 1.5.

labwc

I uploaded the first labwc package two years ago and I’m happy it is now part of a stable Debian release. Labwc is also based on wlroots, like sway. It is a window-stacking compositor and is inspired by openbox. I used openbox for a long time back in the day before I moved to i3 and I’m very happy to see that there is a Wayland alternative.

foot

Foot is a minimalistic and fast Wayland terminal emulator. It is mostly keyboard driven. foot was updated from version 1.13.1 to 1.21.0. Probably the most important change for updating users is that:

  • Control+Shift+u is now bound to unicode-input instead of show-urls-launch, to follow the convention established in GTK and Qt
  • show-urls-launch is now bound to Control+Shift+o
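For anyone who prefers the old behaviour, a snippet along these lines in foot.ini should restore it (section and action names as I understand foot's ini-style configuration; the chord chosen for unicode-input is an arbitrary free one to avoid a collision — verify against foot.ini(5) on your system):

```ini
# Assumed foot.ini fragment restoring the pre-1.21 binding.
[key-bindings]
show-urls-launch=Control+Shift+u
unicode-input=Control+Shift+y
```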

et cetera

The Wayland kiosk cage was updated from 0.1.4 to 0.2.0.

The waybar bar for wlroots compositors was updated from 0.9.17 to 0.12.0.

swayimg was updated from 1.10 to 3.8 and now brings support for custom key bindings, additional image types (PNM, EXR, DICOM, Farbfeld, sixel), and a gallery mode.

tofi, another dmenu replacement, was updated from 0.8.1 to 0.9.1. wf-recorder, a tool for screen recording in wlroots-based compositors, was updated from version 0.3 to version 0.5.0. wlogout was updated from version 1.1.1 to 1.2.2. The application launcher wofi was updated from 1.3 to 1.4.1. The lightweight status panel yambar was updated from version 1.9 to 1.11. kanshi, the tool for managing and automatically switching your output profiles, was updated from version 1.3.1 to version 1.5.1.

usbguard was updated from version 1.1.2 to 1.1.3.

added

  • fnott - a lightweight notification daemon for wlroots based compositors
  • fyi - a utility to send notifications to a notification daemon, similar to notify-send
  • pipectl - a tool to create and manage short-lived named pipes, this is a dependency of wl-present. wl-present is a script around wl-mirror which implements output mirroring for wlroots-based compositors
  • poweralertd - a small daemon that notifies you about the power status of your battery powered devices
  • wlopm - control power management of outputs
  • wlrctl - command line utility for miscellaneous wlroots Wayland extensions
  • wmenu - already mentioned, the new default launcher of sway
  • wshowkeys - shows keypresses in wayland sessions, nice for debugging
  • libsfdo - libraries implementing some freedesktop.org specs, used by labwc

16 August, 2025 05:28AM

August 15, 2025

Steinar H. Gunderson

Abstract algebra structures made easy

Group theory, and abstract algebra in general, has many useful properties; you can take a bunch of really common systems and prove very useful statements that hold for all of them at once.

But sometimes in computer science, we just use the names, not really the theorems. If you're showing that something is a group and then proceed to use Fermat's little theorem (perhaps to efficiently compute inverses, when it's not at all obvious what they would be), then you really can't go without the theory. But in some cases, we just love to be succinct in our description of things, and for outsiders, it's just… not useful.

So here's Steinar's easy (and more importantly, highly non-scientific; no emails about inaccuracies, please :-) ) guide to the most common abstract algebra structures:

  • Set: Hopefully you already know what this is. A collection of things (for instance numbers).
  • Semigroup: A (binary) operation that isn't crazy.
  • Monoid: An operation, but you also have a no-op.
  • Group: An operation, but you also have the opposite operation.
  • Abelian group: An operation, but the order doesn't matter.
  • Ring: Two operations; the Abelian group got a friend for Christmas. The extra operation might be kind of weird (for instance, has no-ops but might not always have opposites).
  • Field: A ring with some extra flexibility, so you can do almost whatever you are used to doing with “normal” (real) numbers except perhaps order them.

So for instance, assuming that x and y are non-negative integers, then max(x,y) (the motivating example for this post) is a monoid. Why? Because it's a non-crazy binary operation (in particular, max(max(x,y),z) = max(x,max(y,z))), and you can use x=0 or y=0 as a no-op (max(anything, 0) = anything). But it's not a group, because once you've done max(x,y), there's nothing you can max() with to get the smallest value back.
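The monoid claim above is easy to check mechanically. A minimal sketch (not from the post, just an illustration): brute-force the monoid laws for max over a small range of non-negative integers, and confirm the missing group property.

```python
# Check the monoid laws for max over non-negative integers,
# with 0 as the identity element.
from itertools import product

def op(x, y):
    return max(x, y)

IDENTITY = 0
sample = range(5)

# Associativity: max(max(x, y), z) == max(x, max(y, z))
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x, y, z in product(sample, repeat=3))

# Identity: max(x, 0) == x == max(0, x) for non-negative x
assert all(op(x, IDENTITY) == x and op(IDENTITY, x) == x for x in sample)

# No inverses: once you've computed max(3, 5) == 5, nothing maxed
# with 5 ever gives 3 back, so this is a monoid but not a group.
assert not any(op(5, y) == 3 for y in range(100))

print("(non-negative ints, max, 0) satisfies the monoid laws")
```

The same three checks fail in instructive ways for other candidates: subtraction is not associative (not even a semigroup), and addition passes all of them plus inverses (a group).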

There are many more, but these are the ones you get today.

15 August, 2025 06:31PM

Daniel Pocock

Feast of the Assumption, procession at Fourvière, Lyon, France

Donald Trump may have put an end to diversity, but Mary's feast day remains sacred. The people of Lyon braved the heat wave to carry the statue of Mary up the hill of Fourvière.

[Photo gallery: Assumption procession with Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025 — photos DSC_7021.JPG through DSC_7449.JPG]

Related church blogs

Catholic.Community

Please follow the Catholic.Community website and make it your home page.

15 August, 2025 07:30AM

Feast of the Assumption, procession at Fourvière, Lyon, France

Donald Trump may have stopped diversity in its tracks, but Mary's day remains sacred. The citizens of Lyon, France, braved a heat wave (canicule) to carry Mary's statue up the hill to Fourvière.

[Photo gallery: Assumption procession with Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025 — photos DSC_7021.JPG onward]


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7231.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7235.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7248.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7257.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7264.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7267.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7268.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7274.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7279.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7281.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7284.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7286.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7287.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7293.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7296.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7298.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7299.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7302.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7303.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7310.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7313.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7321.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7322.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7323.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7331.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7332.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7335.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7336.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7340.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7346.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7349.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7351.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7353.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7357.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7358.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7365.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7372.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7376.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7378.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7380.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7385.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7387.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7391.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7394.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7395.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7404.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7405.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7406.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7413.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7419.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7420.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7422.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7429.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7440.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7443.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7447.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7448.JPG


 

Assumption, Procession, Monseigneur Patrick Le Gal, Fourvière, Lyon, France, 2025

Filename: DSC_7449.JPG


 

Related blogs about the church

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

15 August, 2025 07:30AM

August 14, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 4: openWakeWord

People keep asking me when I’ll write the next instalment of my local voice assistant journey. I didn’t mean for it to be so long since the last one, things have been busier than I’d like. Anyway. Last time we’d built Tensorflow, so now it’s time to sort out openWakeWord. As a reminder we’re trying to put a local voice satellite on my living room Debian media machine.

The point of openWakeWord is to run on the machine the microphone is connected to, listening for the wake phrase (“Hey Jarvis” in my case), and only then calling back to the central server to do a speech to text operation. It’s wrapped up for Wyoming as wyoming-openwakeword.

Of course I’ve packaged it up - available at https://salsa.debian.org/noodles/wyoming-openwakeword. Trixie only released yesterday, so I’m still running all of this on bookworm. That means you need python3-wyoming from Trixie - 1.6.0-1 will install fine without needing to be rebuilt - and the python3-tflite-runtime we built last time.

Like the other pieces, I’m not sure how this could land in Debian; it’s unclear to me that the pre-trained models provided would be accepted in main.

As usual I start it with a systemd unit file dropped in /etc/systemd/system/wyoming-openwakeword.service:

[Unit]
Description=Wyoming OpenWakeWord server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=/usr/bin/wyoming-openwakeword --uri tcp://[::1]:10400/ --preload-model 'hey_jarvis' --threshold 0.8

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

I’m still playing with the threshold level. It defaults to 0.5, but the device lives under the TV and seems to get a bit confused by it sometimes. There’s some talk about using speex for noise suppression, but I haven’t explored that yet (it’s yet another Python module to bind to the C libraries I’d have to look at).
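The tradeoff the threshold controls can be shown with a toy filter (plain Python with made-up scores, nothing to do with openWakeWord's internals): a higher threshold suppresses marginal detections, such as the TV triggering the device, at the risk of missing quieter genuine activations.

```python
# Toy model: per-frame wake-word confidences from a detector.
# A detection fires when a score reaches the threshold, so raising it
# from the 0.5 default to 0.8 drops the marginal (often TV-induced) hits.
scores = [0.31, 0.55, 0.62, 0.79, 0.95]  # made-up confidence values

def activations(scores, threshold):
    """Return the scores that would trigger a detection."""
    return [s for s in scores if s >= threshold]

assert len(activations(scores, 0.5)) == 4  # default: fires on marginal frames
assert len(activations(scores, 0.8)) == 1  # stricter: only the clear hit
```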

This is a short one; next post is actually building the local satellite on top to tie everything together.

14 August, 2025 07:07PM

August 13, 2025

Sven Hoexter

Automated Browsing with Gemini and Chrome via BrowserMCP and gemini-cli

Brief dump so I don't forget how that worked in August 2025. Requires npm, npx and nodejs.

  1. Install Chrome
  2. Add the BrowserMCP extension
  3. Install gemini-cli: npm install -g @google/gemini-cli
  4. Retrieve a Gemini API key via AI Studio
  5. Export the API key for gemini-cli: export GEMINI_API_KEY=2342
  6. Start the BrowserMCP extension (see the manual); an info box with a cancel button will appear to show that it's active.
  7. Add the MCP server to gemini-cli: gemini mcp add browsermcp npx @browsermcp/mcp@latest
  8. Start gemini-cli, let it use the MCP server, and task it with opening a website.

13 August, 2025 12:21PM

August 10, 2025

hackergotchi for Jonathan Carter

Jonathan Carter

Debian 13

Debian 13 has finally been released!

One of the biggest and under-hyped features is support for HTTP Boot. On computers made in the last ~5 years, this allows you to simply specify a URL (to any d-i or live image ISO) in your computer’s firmware setup and boot to it directly over the Internet, with no need to download an image, write it to a flash disk and then boot from that. This is also supported by the Tianocore free EFI firmware, which is useful if you’d like to try it out on QEMU/KVM.

More details about Debian 13 available on the official press release.

The default theme for Debian 13 is Ceratopsian, designed by Elise Couper. I’ll be honest, I wasn’t 100% sure it was the best choice when it won the artwork vote, but it really grew on me over the last few months, and it looked great in combination with all kinds of other things during DebConf too, so it has certainly won me over.

And I particularly like the Plymouth theme. It’s very minimal, and it reminds me of the Toy Story Trixie character, it’s almost like it helps explain the theme:

Plymouth (start-up/shutdown) theme.

Trixie, the character from Toy Story that was chosen as the codename for Debian 13.

Debian Local Team ISO testing

Yesterday we got some locals together for ISO testing and we got a cake with the wallpaper printed on it, along with our local team logo which has been a work in progress for the last 3 years, so hopefully we’ll finalise it this year! (it will be ready when it’s ready). It came out a lot bluer than the original wallpaper, but still tasted great.

For many releases, I’ve been the only person from South Africa doing ISO smoke-testing, and this time was quite different, since everyone else in the photo below tested an image except for me. I basically just provided some support and helped out with getting salsa/wiki accounts and some troubleshooting. It went nice and fast, and it’s always a big relief when there are no showstoppers for the release.

My dog was really wishing hard that the cake would slip off.

Packaging-wise, I only have one big new package for Trixie, and that’s Cambalache, a rapid application design UI builder for GTK3/GTK4.

The version in trixie is 0.94.1-3 and version 1.0 was recently released, so I’ll get that updated in forky and backport it if possible.

I was originally considering using Cambalache for an installer UI, but ended up going with a web front-end instead. But that’s moving firmly towards forky territory, so more on that another time!

Thanks to everyone who was involved in this release, so far upgrades have been very smooth!

10 August, 2025 02:53PM by jonathan

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Upgrading Proxmox 7 to 8

Some variant of the following[1] worked for me.

The first line starts a for loop that runs a command over ssh on each node in my cluster. The argument -t attaches a controlling terminal to STDIN, STDOUT and STDERR of the session, since there will not be an intervening shell to do it for us. The argument to ssh is a workflow of bash commands. They upgrade the 7.x system to the most recent packages on the repository, then update the sources.list entries to point at bookworm sources instead of bullseye. The package cache is updated, the proxmox-ve package is installed, installed packages are upgraded to the versions from bookworm, and the installer concludes.

Dear reader, you might be surprised how many times I saw the word “perl” scroll by during the manual, serial scrolling of this install. It took hours. There were a few prompts, so stand by the keyboard!

[1]

# expected during the run: gpg: key 1140AF8F639E0C39: public key "Proxmox Bookworm Release Key " imported
# have your ssh agent keychain running and a key loaded that's installed at 
# ~root/.ssh/authorized_keys on each node 
apt-get install -y keychain
eval $(keychain --eval)
ssh-add ~/.ssh/id_rsa
# Replace the IP address prefix (100.64.79.) and  suffixes (64, 121-128)
# with the actual IPs of your cluster nodes.  Or use hostnames :-)
for o in 64 121 122 123 124 125 126 127 128 ; do   ssh -t root@100.64.79.$o '
  sed -i -e s/bullseye/bookworm/g /etc/apt/sources.list $(compgen -G "/etc/apt/sources.list.d/*.list") \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    | dd of=/etc/apt/sources.list.d/proxmox-release.list status=none \
  && echo "deb [signed-by=/usr/share/keyrings/proxmox-release.gpg] http://download.proxmox.com/debian/ceph-quincy bookworm main no-subscription" \
    | dd of=/etc/apt/sources.list.d/ceph.list status=none \
  && proxmox_keyid="0xf4e136c67cdce41ae6de6fc81140af8f639e0c39" \
  && curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=${proxmox_keyid}" \
    | gpg --dearmor -o /usr/share/keyrings/proxmox-release.gpg  \
  && apt-get -y -qq update \
  && apt-get -y -qq install proxmox-ve \
  && apt-get -y -qq full-upgrade \
  && echo "$(hostname) upgraded"'; done

10 August, 2025 06:48AM by C.J. Collier

August 09, 2025

hackergotchi for Bits from Debian

Bits from Debian

Debian stable is now Debian 13 "trixie"!


We are pleased to announce the official release of Debian 13, codenamed trixie!

What's New in Debian 13

  • Official support for RISC-V (64-bit riscv64), a major architecture milestone
  • Enhanced security through ROP and COP/JOP hardening on both amd64 and arm64 (Intel CET and ARM PAC/BTI support)
  • HTTP Boot support in Debian Installer and Live images for UEFI/U-Boot systems
  • Upgraded software stack: GNOME 48, KDE Plasma 6, Linux kernel 6.12 LTS, GCC 14.2, Python 3.13, and more

Want to install it?

Fresh installation ISOs are now available, including the final Debian Installer featuring kernel 6.12.38 and mirror improvements. Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade?

Full upgrade path from Debian 12 "bookworm" is supported and documented in the Release Notes. Upgrade notes cover APT source preparation, handling obsoletes, and ensuring system resilience.

Additional Information

For full details, including upgrade instructions, known issues, and contributors, see the official Release Notes for Debian 13 "trixie".

Congratulations to all developers, QA testers, and volunteers who made Debian 13 "trixie" possible!

Do you want to celebrate the release?

To celebrate with us on this occasion find a release party near you and if there isn't any, organize one!

09 August, 2025 09:30PM by Anupa Ann Joseph

hackergotchi for David Bremner

David Bremner

Using git-annex for email and notmuch metadata

Introducing git-remote-notmuch

Based on an idea and ruby implementation by Felipe Contreras, I have been developing a git remote helper for notmuch. I will soon post an updated version of the patchset to the notmuch mailing list (I wanted to refer to this post in my email). In this blog post I'll outline my experiments with using that tool, along with git-annex to store (and sync) a moderate sized email store along with its notmuch metadata.

WARNING

The rest of this post describes some relatively complex operations using (at best) alpha level software (namely git-remote-notmuch). git-annex is good at not losing your files, but git-remote-notmuch can (and did several times during debugging) wipe out your notmuch database. If you have a backup (e.g. made with notmuch-dump), this is much less annoying, and in particular you can decide to walk away from this whole experiment and restore your database.

Why git-annex?

I currently have about 31GiB of email, spread across more than 830,000 files. I want to maintain the ability to search and read my email offline, so I need to maintain a copy on several workstations and at least one server (which is backed up explicitly). I am somewhat committed to maintaining synchronization of tags to git since that is how the notmuch bug tracker works. Committing the email files to git seems a bit wasteful: by design notmuch does not modify email files, and even with compression, the extra copy adds a fair amount of overhead (in my case, 17G of git objects, about 57% overhead). It is also notoriously difficult to completely delete files from a git repository. git-annex offers potential mitigation for these two issues, at the cost of a somewhat more complex mental model. The main idea is that instead of committing every version of a file to the git repository, git-annex tracks the filename and metadata, with the file content being stored in a key-value store outside git. Conceptually this is similar to git-lfs. For our purposes, the important point is that instead of a second (compressed) copy of the file, we store one copy, along with a symlink and a couple of directory entries.

What to annex

For sufficiently small files, the overhead of a symlink and couple of directory entries is greater than the cost of a compressed second copy. When this happens depends on several variables, and will probably depend on the file content in a particular collection of email. I did a few trials of different settings for annex.largefiles to come to a threshold of largerthan=32k 1. For the curious, my experimental results are below. One potentially surprising aspect is that annexing even a small fraction of the (largest) files yields a big drop in storage overhead.

Threshold fraction annexed overhead
0 100% 30%
8k 29% 13%
16k 12% 9.4%
32k 7% 8.9%
48k 6% 8.9%
100k 3% 9.1%
256k 2% 11%
∞ (git) 0 % 57%

In the end I chose to err on the side of annexing more files (for the flexibility of deletion) rather than potentially faster operations with fewer annexed files at the same level of overhead.
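The steepest drop in that table (annexing only the few largest files removes most of the overhead) is what you would expect from a heavy-tailed size distribution. A toy model, with every number invented for illustration:

```python
# Toy model of the effect in the table above. All numbers are invented:
# a heavy-tailed corpus of 100 files, a rough per-file cost for the
# annex symlink/pointer, and an assumed 50% compression ratio for the
# second copy that plain git would store.
sizes = [1_000] * 90 + [50_000] * 8 + [5_000_000] * 2

SYMLINK_COST = 100   # bytes of overhead per annexed file (assumed)
COMPRESSION = 0.5    # compressed second copy is half the original (assumed)

def overhead(threshold):
    """Storage overhead as a fraction of total corpus size, if files at
    or above `threshold` are annexed and the rest committed to git."""
    total = sum(sizes)
    extra = sum(SYMLINK_COST if s >= threshold else s * COMPRESSION
                for s in sizes)
    return extra / total

# Annexing just the two largest files (2% of the corpus) already
# eliminates almost all of the duplicated storage.
assert overhead(10**9) > overhead(100_000) > overhead(0)
```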

Summarizing the configuration settings for git-annex (some of these are actually defaults, but not in my environment).

$ git config annex.largefiles largerthan=32k
$ git config annex.dotfiles true
$ git config annex.synccontent true

Delivering mail

To get new mail, I do something like

# compute a date based folder under $HOME/Maildir
$ dest=$(folder)
# deliver mail to ${dest} (somehow).
$ notmuch new
$ git -C $HOME/Maildir add ${dest}
$ git -C $HOME/Maildir diff-index --quiet HEAD ${dest} || git -C $HOME/Maildir commit -m 'mail delivery'

The call to diff-index is just an optimization for the case when nothing was delivered. The default configuration of git-annex will automagically annex any files larger than my threshold. At this point the git-annex repo knows nothing about tags.

There is some git configuration that can speed up the "git add" above, namely

$ git config core.untrackedCache true
$ git config core.fsmonitor true

See git-status(1) under "UNTRACKED FILES AND PERFORMANCE"

Defining notmuch as a git remote

Assuming git-remote-notmuch is somewhere in your path, you can define a remote to connect to the default notmuch database.

$ git remote add database notmuch::
$ git fetch database
$ git merge --allow-unrelated database

The --allow-unrelated should be needed only the first time.

In my case the many small files used to represent the tags (one per message) use a noticeable amount of disk space, about the same as the xapian database.
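That representation can be pictured with a short sketch. This is an illustrative layout only (one plain-text tag file per message id), not git-remote-notmuch's actual on-disk format:

```python
import pathlib
import tempfile

def write_tags(root, msgid, tags):
    """Store a message's tags as one small file named after its message id."""
    p = pathlib.Path(root) / "tags" / msgid
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text("\n".join(sorted(tags)) + "\n")

def read_tags(root, msgid):
    """Read a message's tags back from its per-message file."""
    p = pathlib.Path(root) / "tags" / msgid
    return set(p.read_text().split())

with tempfile.TemporaryDirectory() as repo:
    write_tags(repo, "20250809.1234@example.com", {"inbox", "unread"})
    assert read_tags(repo, "20250809.1234@example.com") == {"inbox", "unread"}
```

Every message costs a directory entry and a tiny file, which is where the noticeable disk usage comes from.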

Once you start merging from the database to the git repo, you will likely have some conflicts, and most conflict resolution tools leave junk lying around. I added the following .gitignore file to the top level of the repo

*.orig
*~

This prevents our cavalier use of git add from adding these files to our git history (and prevents pushing random junk to the notmuch database).

To push the tags from git to notmuch, you can run

$ git push database master

You might need to run notmuch new first, so that the database knows about all of the messages (currently git-remote-notmuch can't index files, only update metadata).

git annex sync should work with the new remote, but pushing back will be very slow 2. I disable automatic pushing as follows

$ git config remote.database.annex-push false

Unsticking the database remote

If you are debugging git-remote-notmuch, or just unlucky, you may end up in a situation where git thinks the database is ahead of your git remote. You can delete the database remote (and associated stuff) and re-create it. Although I cannot promise this will never cause problems (because, computers), it will not modify your local copy of the tags in the git repo, nor modify your notmuch database.

$ git remote rm database
$ git update-ref -d notmuch/master
$ rm -r .git/notmuch

Fine tuning notmuch config

  • In order to avoid dealing with file renames, I have

      notmuch config maildir.synchronize_flags false
    
  • I have added the following to new.ignore:

       .git;_notmuch_metadata;.gitignore
    

  1. I also had to set annex.dotfiles to true, as many of my maildirs follow the qmail style convention of starting with a .
  2. I'm not totally clear on why it so slow, but certainly git-annex tries to push several more branches, and these are ignored by git-remote-annex.

09 August, 2025 12:21PM

Thorsten Alteholz

My Debian Activities in July 2025

Debian LTS

This was my hundred-thirty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4255-1] audiofile security update of two CVEs related to an integer overflow and a memory leak.
  • [DLA 4256-1] libetpan security update to fix one CVE related to prevent a null pointer dereference.
  • [DLA 4257-1] libcaca security update to fix two CVEs related to heap buffer overflows.
  • [DLA 4258-1] libfastjson security update to fix one CVE related to an out of bounds write.
  • [#1106867] kmail-account-wizard was marked as accepted

I also continued my work on suricata, which turned out to be more challenging than expected. This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fourth ELTS month. Unfortunately my allocated hours were far less than expected, so I couldn’t do as much work as planned.

Most of the time I spent with FD tasks and I also attended the monthly LTS/ELTS meeting. I further listened to the debusine talks during debconf. On the one hand I would like to use debusine to prepare uploads for embargoed ELTS issues, on the other hand I would like to use debusine to run the version of lintian that is used in the different releases. At the moment some manual steps are involved here and I tried to automate things. Of course like for LTS, I also continued my work on suricata.

Debian Printing

This month I uploaded a new upstream version of:

Guess what, I also started to work on a new version of hplip and intend to upload it in August.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new upstream versions of:

  • supernovas (sponsored upload to experimental)
  • calceph (sponsored upload to experimental)

I also uploaded the new package boinor. This is a fork of poliastro, which was retired by upstream and removed from Debian some months ago. I adopted it and rebranded it at the desire of upstream. boinor is the abbreviation of BOdies IN ORbit and I hope this software is still useful.

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

In my fight against outdated RFPs, I closed 31 of them in July. Their number is down to 3447 (how can you dare to open new RFPs? :-)). Don’t be afraid of them, they don’t bite and are happy to be released to a closed state.

FTP master

The peace will soon come to an end, so this month I accepted 87 and rejected 2 packages. The overall number of packages that got accepted was 100.

09 August, 2025 11:35AM by alteholz

August 06, 2025

Reproducible Builds

Reproducible Builds in July 2025

Welcome to the seventh report from the Reproducible Builds project in 2025. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds an official goal for SUSE Enterprise Linux
  3. Reproducible Builds at FOSSY 2025
  4. New OSS Rebuild project from Google
  5. New extension of Python setuptools to support reproducible builds
  6. diffoscope
  7. New library to patch system functions for reproducibility
  8. Independently Reproducible Git Bundles
  9. Website updates
  10. Distribution work
  11. Reproducibility testing framework
  12. Upstream patches

Reproducible Builds Summit 2025

We are extremely pleased to announce the upcoming Reproducible Builds Summit, set to take place from October 28th — 30th 2025 in Vienna, Austria!

We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.

During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!


Reproducible Builds an official goal for SUSE Enterprise Linux

On our mailing list this month, Bernhard M. Wiedemann revealed the big news that reproducibility is now an official goal for SUSE Linux Enterprise Server (SLES) 16:

[Everything] changed earlier this year when reproducible-builds for SLES-16 became an official goal for the product. More people are talking about digital sovereignty and supply-chain security now. […] Today, only 9 of 3319 (source) packages have significant problems left (plus 7 with pending fixes), so 99.5% of packages have reproducible builds.


Reproducible Builds at FOSSY 2025

On Saturday 2nd August, Vagrant Cascadian and Chris Lamb presented at this year’s FOSSY 2025. Their talk, titled Never Mind the Checkboxes, Here’s Reproducible Builds!, was introduced as follows:

There are numerous policy compliance and regulatory processes being developed that target software development… but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways… or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted … forever?

Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: “Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you”. More information on the event is available on the FOSSY 2025 website, including the full programme schedule.

Vagrant and Chris also staffed a table, where they were available to answer questions about Reproducible Builds and discuss collaborations with other projects.


New OSS Rebuild project from Google

The Google Open Source Security Team (GOSST) published an article this month announcing OSS Rebuild, “a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts.” As the post itself documents, the new project comprises four facets:

  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.
  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.
  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.
  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.

Unlike most projects that aim for bit-for-bit reproducibility, OSS Rebuild aims for a kind of “semantic” reproducibility:

Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression).

The extensive post includes examples about how to access OSS Rebuild attestations using the Go-based command-line interface.


New extension of Python setuptools to support reproducible builds

Wim Jeantine-Glenn has written a PEP 517 Build backend in order to enable reproducible builds when building Python projects that use setuptools.

Called setuptools-reproducible, the project’s README file contains the following:

Setuptools can create reproducible wheel archives (.whl) by setting SOURCE_DATE_EPOCH at build time, but setting the env var is insufficient for creating reproducible sdists (.tar.gz). setuptools-reproducible [therefore] wraps the hooks build_sdist build_wheel with some modifications to make reproducible builds by default.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 301, 302 and 303 to Debian:

  • Improvements:

    • Use Difference.from_operation in an attempt to pipeline the output of the extract-vmlinux script, potentially avoiding holding it all in memory. []
    • Memoize a number of calls to --version, saving a very large number of external subprocess calls.
  • Bug fixes:

    • Don’t check for PyPDF version 3 specifically, check for versions greater than 3. []
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
    • Mask stderr from extract-vmlinux script. [][]
    • Avoid spurious differences in h5dump output caused by exposure of absolute internal extraction paths. (#1108690)
  • Misc:

    • Use our_check_output in the ODT comparator. []
    • Update copyright years. []

In addition:

Lastly, Chris Lamb added a tmpfs to try.diffoscope.org so that diffoscope has a non-trivial temporary area to unpack archives, etc. []

Elsewhere in our tooling, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.30 was uploaded to Debian unstable by Holger Levsen, chiefly including a change by Rebecca N. Palmer to not call sudo with the -h flag in order to fix Debian bug #1108550. []


New library to patch system functions for reproducibility

Nicolas Graves has written and published libfate, a simple collection of tiny libraries to patch system functions deterministically using LD_PRELOAD. According to the project’s README:

libfate provides deterministic replacements for common non-deterministic system functions that can break reproducible builds. Instead of relying on complex build systems or apps or extensive patching, libfate uses the LD_PRELOAD trick to intercept system calls and return fixed, predictable values.

Describing why he wrote it, Nicolas writes:

I originally used the OpenSUSE dettrace approach to make Emacs reproducible in Guix. But when Guix switched to GCC@14, dettrace stopped working as expected. dettrace is a complex piece of software; my need was much lighter: I don’t need to systematically patch all sources of nondeterminism, just the ones that make a process/binary unreproducible in a container/chroot.


Independently Reproducible Git Bundles

Simon Josefsson has published another interesting article this month. Titled Independently Reproducible Git Bundles, the blog post describes why you might want a reproducible bundle, and the pitfalls that can arise when trying to create one:

One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.


Website updates

Once again, there were a number of improvements made to our website this month including:


Distribution work

In Debian this month:

Debian contributors have made significant progress toward ensuring package builds produce byte-for-byte reproducible results. You can check the status for packages installed on your system using the new package debian-repro-status, or visit reproduce.debian.net for Debian’s overall statistics for trixie and later. You can contribute to these efforts by joining #debian-reproducible on IRC to discuss fixes, or verify the statistics by installing the new rebuilderd package and setting up your own instance.


The IzzyOnDroid Android APK repository made further progress in July, crossing the 50% reproducibility threshold — congratulations. Furthermore, a new version of the Neo Store was released, which exposes the reproducible status directly next to the version of each app.


In GNU Guix, a series of patches intended to fix the reproducibility for the Mono programming language was merged, fixing reproducibility in Mono versions 1.9 [], 2.4 [] and 2.6 [].


Lastly, in addition to the news that SUSE Linux Enterprise now [has an official goal of reproducibility](https://lists.reproducible-builds.org/pipermail/rb-general/2025-July/003846.html), Bernhard M. Wiedemann posted another monthly update for their work there.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:

  • Switch the URL for the Tails package set. []
  • Make the dsa-check-packages output more useful. []
  • Set up the ppc64el architecture again, as it has returned — this time with a 2.7 GiB database instead of 72 GiB. []

In addition, Jochen Sprickerhof improved the reproducibility statistics generation:

  • Enable caching of statistics. [][][]
  • Add some common non-reproducible patterns. []
  • Change output to directory. []
  • Add a page sorted by diffoscope size. [][]
  • Switch to Python’s argparse module and separate output(). []

Holger also submitted a number of Debian bugs against rebuilderd and rebuilderd-worker:

  • Config files and scripts for a simple one machine setup. [][]
  • Create a rebuilderd user. []
  • Create rebuilderd-worker user with sbuild. []

Lastly, Mattia Rizzolo added a scheduled job to renew some SSL certificates [] and Vagrant Cascadian performed some node maintenance [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

There were a number of other patches from openSUSE developers:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

06 August, 2025 08:56PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in July 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

DebConf

I attended DebConf for the first time in 11 years (my last one was DebConf 14 in Portland). It was great! For once I had a conference where I had a fairly light load of things I absolutely had to do, so I was able to spend time catching up with old friends, making some new friends, and doing some volunteering - a bit of Front Desk, and quite a lot of video team work where I got to play with sound desks and such. Apparently one of the BoFs (“birds of a feather”, i.e. relatively open discussion sessions) where I was talkmeister managed to break the automatic video cutting system by starting and ending precisely on time, to the second, which I’m told has never happened before. I’ll take that.

I gave a talk about Debusine, along with helping Enrico run a Debusine BoF. We still need to process some of the feedback from this, but are generally pretty thrilled about the reception. My personal highlight was getting a shout-out in a talk from CERN (in the slide starting at 32:55).

Other highlights for me included a Python team BoF, Ian’s tag2upload talk and some very useful follow-up discussions, a session on archive-wide testing, a somewhat brain-melting whiteboard session about the “multiarch interpreter problem”, several useful discussions about salsa.debian.org, Matthew’s talk on how Wikimedia automates their Debian package builds, and many others. I hope I can start attending regularly again!

OpenSSH

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported “No new SSH connections possible during large part of upgrade to Debian Trixie”, and after a little testing in a container I confirmed that this was a reproducible problem that would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:

  • OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. (OpenSSH 10.0 further split sshd-session, adding an sshd-auth process that deals with the user authentication phase of the protocol.) This hardens the OpenSSH server by using different address spaces for privileged and unprivileged code.

    Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it. After this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen in two phases: first we unpack the new files onto disk, and then we run some package-specific configuration steps which usually include things like restarting services. (I’m simplifying, but this is good enough for this post.) Normally this is fine, and in fact desirable: the old service keeps on working, and this approach often allows breaking what would otherwise be difficult cycles by ensuring that the system is in a more coherent state before trying to restart services. However, in this case, unpacking the new files onto disk immediately means that new SSH connections no longer work: the old sshd receives the connection and tries to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this.

    If you’re just upgrading OpenSSH on its own or with a small number of other packages, this isn’t much of a problem as the listener will be restarted quite soon; but if you’re upgrading from bookworm to trixie, there may be a long gap when you can’t SSH to the system any more, and if something fails in the middle of the upgrade then you could be in trouble.

    So, what to do? I considered keeping a copy of the old sshd around temporarily and patching the new sshd to re-execute it if it’s being run to handle an incoming connection, but that turned out to fail in my first test: dependencies are normally only checked when configuring a package, so it’s possible to unpack openssh-server before unpacking a newer libc6 that it depends on, at which point you can’t execute the new sshd at all. (That also means that the approach of restarting the service at unpack time instead of configure time is a non-starter.) We needed a different idea.

    dpkg, the core Debian package manager, has a specialized facility called “diversions”: you can tell it that when it’s unpacking a particular file it should put it somewhere else instead. This is normally used by administrators when they want to install a locally-modified version of a particular file at their own risk, or by packages that knowingly override a file normally provided by some other package. However, in this case it turns out to be useful for openssh-server to temporarily divert one of its own files! When upgrading from before 9.8, it now diverts /usr/sbin/sshd to /usr/sbin/sshd.session-split before the new version is unpacked, then removes the diversion and moves the new file into place once it’s ready to restart the service; this reduces the period when incoming connections fail to a minimum. (We actually have to pretend that the diversion is being performed on behalf of a slightly different package since we’re using dpkg-divert in a strange way here, but it all works.)

  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, which means that as soon as you unpack the new libssl3 during an upgrade (actually libssl3t64 due to the 64-bit time_t transition), sshd stops working. This couldn’t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL. And time was tight if we wanted to maximize the chance that people would apply that stable update before upgrading to trixie; there isn’t going to be another point release of Debian 12 before the release of Debian 13.

    Fortunately, there’s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted my proposal to fix this there.

The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine. Many thanks to Manfred for reporting this with just enough time to spare that we were able to fix it before Debian 13 is released in a few days!

debmirror

I did my twice-yearly refresh of debmirror’s mirror_size documentation, and applied a patch from Christoph Goehre to improve mirroring of installer files.

madison-lite

I proposed renaming this project along with the rmadison tool in devscripts, although I’m not yet sure what a good replacement name would be.

Python team

I upgraded python-expandvars, python-typing-extensions (in experimental), and webtest to new upstream versions.

I backported fixes for some security vulnerabilities to unstable:

I fixed or helped to fix a number of release-critical bugs:

I fixed some other bugs, mostly Severity: important:

I reinstated python3-mastodon’s build-dependency on and recommendation of python3-blurhash, now that the latter has been fixed to use the correct upstream source.

06 August, 2025 10:41AM by Colin Watson

hackergotchi for Matthew Palmer

Matthew Palmer

I'm trying an open source funding experiment

As I’m currently somewhat underemployed, and could do with some extra income, I’m starting an open source crowd-funding experiment. My hypothesis is that the open source community, and perhaps a community-minded company or two, really wants more open source code in the world, and is willing to put a few dollars my way to make that happen.

To begin with, I’m asking for contributions to implement a bunch of feature requests on action-validator, a Rust CLI tool I wrote to validate the syntax of GitHub actions and workflows. The premise is quite simple: for every AU$150 (about US$100) I receive in donations, I’ll implement one of the nominated feature requests. If people want a particular feature implemented, they can nominate a feature in their donation message, otherwise when “general” donations get to AU$150, I’ll just pick a feature that looks interesting. More details are on my code fund page.

In the same spirit of simplicity, donations can be made through my Ko-fi page, and I’ll keep track of the various totals in a hand-written HTML table.

So, in short, if you want more open source code to exist, now would be a good time to visit my Ko-fi page and chip in a few dollars. If you’re curious to know more, my code fund page has a list of Foreseeably Anticipated Questions that might address your curiosity. Otherwise, ask your questions in the comments or email me.

06 August, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

August 05, 2025

hackergotchi for Thomas Lange

Thomas Lange

FAIme service new features: Linux Mint support and data storage for USB

Build your own customized Linux Mint ISO

Using the FAIme service [1] you can now build your own customized installation ISO for the Xfce edition of Linux Mint 22.1 'Xia'.

You can select the language, add a list of additional packages, and set the username and passwords. In the advanced settings you may add your SSH public key, set some GRUB options, and add a postinst script to be executed.

Add writable data partition for USB sticks

For all variants of ISOs (all live and all install ISOs) you can add a data partition to the ISO by just clicking a checkbox. This writable partition can be used when booting from USB stick. FAI will use it to search for a config space and to store the logs when this partition is detected.

The logs will be stored in the subdirectory logs on this partition. To use a different config space than the one on the ISO (which is read only), create a subdirectory config and copy a FAI config space into that directory. Then set FAI_CONFIG_SRC=detect:// (which is the default) and FAI will search for a config space on the data partition and use it. More information about this is available at [2].

You can also store some local packages in your config space, which will be installed automatically, without needing to recreate the ISO.

05 August, 2025 10:11AM

hackergotchi for Matthew Garrett

Matthew Garrett

Cordoomceps - replacing an Amiga's brain with Doom

There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitplane, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space, so you couldn't put RAM there, and you ended up with less than 4GB of usable RAM

comment count unavailable comments

05 August, 2025 12:30AM

August 03, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Jubilee of Digital Missionaries, Catholic Influencers concert (photos)

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Jubilee of Digital Missionaries, Catholic Influencers

Photo filenames: DSC_6823.JPG, DSC_6824.JPG, DSC_6825.JPG, DSC_6827.JPG, DSC_6828.JPG, DSC_6829.JPG, DSC_6830.JPG, DSC_6831.JPG, DSC_6832.JPG, DSC_6833.JPG, DSC_6834.JPG, DSC_6835.JPG, DSC_6836.JPG, DSC_6837.JPG, DSC_6840.JPG, DSC_6842.JPG, DSC_6844.JPG, DSC_6852.JPG, DSC_6857.JPG, DSC_6863.JPG, DSC_6867.JPG, DSC_6868.JPG, DSC_6869.JPG, DSC_6870.JPG, DSC_6880.JPG, DSC_6882.JPG, DSC_6885.JPG, DSC_6887.JPG, DSC_6894.JPG, DSC_6896.JPG, DSC_6900.JPG, DSC_6903.JPG, DSC_6904.JPG, DSC_6911.JPG, DSC_6915.JPG, DSC_6917.JPG, DSC_6920.JPG, DSC_6922.JPG, DSC_6923.JPG, DSC_6924.JPG, DSC_6931.JPG, DSC_6933.JPG, DSC_6934.JPG, DSC_6935.JPG, DSC_6939.JPG, DSC_6940.JPG, DSC_6948.JPG, DSC_6951.JPG, DSC_6954.JPG, DSC_6957.JPG, DSC_6958.JPG, DSC_6960.JPG

Related blogs about the Jubilee of Digital Missionaries and Catholic Influencers

Related blogs about the church

Related blogs about social control media

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

03 August, 2025 07:00PM

Jubilee of Digital Missionaries, Vatican gardens (photos)

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Jubilee of Digital Missionaries, Catholic Influencers

Photo filenames: DSC_6645.JPG, DSC_6647.JPG, DSC_6648.JPG, DSC_6649.JPG, DSC_6650.JPG, DSC_6651.JPG, DSC_6652.JPG, DSC_6653.JPG, DSC_6655.JPG, DSC_6659.JPG, DSC_6661.JPG, DSC_6662.JPG, DSC_6666.JPG, DSC_6667.JPG, DSC_6668.JPG, DSC_6678.JPG, DSC_6679.JPG, DSC_6680.JPG, DSC_6682.JPG, DSC_6683.JPG, DSC_6684.JPG, DSC_6685.JPG, DSC_6686.JPG, DSC_6687.JPG, DSC_6688.JPG, DSC_6690.JPG, DSC_6691.JPG, DSC_6692.JPG, DSC_6693.JPG, DSC_6694.JPG, DSC_6695.JPG, DSC_6700.JPG, DSC_6705.JPG, DSC_6711.JPG, DSC_6730.JPG, DSC_6737.JPG, DSC_6738.JPG, DSC_6745.JPG, DSC_6760.JPG, DSC_6766.JPG, DSC_6770.JPG, DSC_6771.JPG, DSC_6780.JPG, DSC_6784.JPG, DSC_6785.JPG, DSC_6787.JPG, DSC_6788.JPG, DSC_6790.JPG, DSC_6797.JPG, DSC_6798.JPG, DSC_6803.JPG, DSC_6815.JPG, DSC_6818.JPG, DSC_6819.JPG, DSC_6820.JPG, DSC_6821.JPG, DSC_6822.JPG

Related blogs about the Jubilee of Digital Missionaries and Catholic Influencers

Related blogs about the church

Related blogs about social control media

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

03 August, 2025 07:00PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in July 2025

In July I attended DebCamp and DebConf in Brest, France. I very much enjoyed the opportunity to reconnect with other Debian contributors in person. I had a number of interesting and fruitful conversations there, besides the formally organised BoFs and talks.

I also gave my own talk on What’s new in the Linux kernel (and what’s missing in Debian).

Here’s the usual categorisation of activity:

03 August, 2025 01:27PM by Ben Hutchings

August 02, 2025

Russell Coker

Server CPU Sockets

I am always looking for ways of increasing the compute power I have at a reasonable price. I am very happy with my HP z840 dual CPU workstation [1] that I’m using as a server and my HP z640 single CPU workstation [2]. Both of them were available second hand at quite reasonable prices and could be cheaply upgraded to faster CPUs. But if I can get something a lot faster for a reasonable price then I’ll definitely get it.

Socket LGA2011-v3

The home server and home workstation I currently use have socket LGA2011-v3 [3] which supports the E5-2699A v4 CPU which gives a rating of 26,939 according to Passmark [4]. That Passmark score is quite decent, you can get CPUs using DDR4 RAM that go up to almost double that but it’s a reasonable speed and it works in systems that are readily available at low prices. The z640 is regularly on sale for less than $400AU and the z840 is occasionally below $600.

The Dell PowerEdge T430 is an ok dual-CPU tower server using the same socket. One thing that's not well known is that it is limited to something like 135W per CPU when run with two CPUs. So it will work correctly with a single E5-2697A v4 with 145W TDP (I've tested that) but will refuse to boot with two of them. In my test system I tried replacing the 495W PSUs with 750W PSUs and it made no difference, the motherboard has the limit. With only a single CPU you only get 8 of the 12 DIMM sockets and not all PCIe slots work. There are many second hand T430s on sale with only a single CPU, presumably because the T330 sucks. My T430 works fine with a pair of E5-2683 v4 CPUs.

The Dell PowerEdge T630 also takes the same CPUs but supports higher TDP than the T430. They also support 18*3.5″ disks or 32*2.5″ but they are noisy. I wouldn’t buy one for home use.

AMD

There are some nice AMD CPUs manufactured around the same time and AMD has done a better job of making multiple CPUs that fit the same socket. The reason I don’t generally use AMD CPUs is that they are used in a minority of the server grade systems so as I want ECC RAM and other server features I generally can’t find AMD systems at a reasonable price on ebay etc. There are people who really want second hand server grade systems with AMD CPUs and outbid me. This is probably a region dependent issue, maybe if I was buying in the US I could get some nice workstations with AMD CPUs at low prices.

Socket LGA1151

Socket LGA1151 [5] is used in the Dell PowerEdge T330. It only supports 2 memory channels and 4 DIMMs compared to the 4 channels and 8 DIMMs in LGA2011, and it also has a limit of 64G total RAM for most systems and 128G for some systems. By today’s standards even 128G is a real limit for server use, DDR4 RDIMMs are about $1/GB and when spending $600+ on system and CPU upgrade you wouldn’t want to spend less than $130 on RAM. The CPUs with decent performance for that socket like the i9-9900K aren’t supported by the T330 (possibly they don’t support ECC RAM). The CPUs that Dell supports perform very poorly. I suspect that Dell deliberately nerfed the T330 to drive sales of the T430.

The Lenovo P330 uses socket LGA1151-2 but has the same issues of taking slow CPUs in addition to using UDIMMs which are significantly more expensive on the second hand market.

Socket LGA2066

The next Intel socket after LGA2011-v3 is LGA2066 [6]. It is used in the Dell Precision 5820 and HP Z4 G4. It takes an i9-10980XE for 32,404 on Passmark or a W-2295 for 30,906. The variant of the Dell 5820 that supports the i9 CPUs doesn't seem to support ECC RAM so it's not a proper workstation. The single thread performance difference between the W-2295 and the E5-2699A v4 is 2640 to 2055, a 28% increase for the W-2295. There are "High Frequency Optimized" CPUs for socket LGA2011-v3 but they all deliver less than 2,300 on the Passmark single-thread tests, which is much less than what you can get from socket LGA2066. The W-2295 costs $1000 on ebay while the E5-2699A v4 is readily available for under $400, and a few months ago I got a matched pair for a bit over $400. Note that getting a matched pair of Intel CPUs is a major pain [7].

Comparing sockets LGA2011-v3 and LGA2066 for a single-CPU system is a $300 system (HP z640) + $400 CPU (E5-2699A v4) vs a $500 system (Dell Precision 5820) + $1000 CPU (W-2295), so more than twice the price for a 30% performance benefit on some tasks. LGA2011-v3 and USB-C both launched in 2014, so LGA2011-v3 systems don't have USB-C sockets, but a $20 USB-C PCIe card doesn't change the economics.

Socket LGA3647

Socket LGA3647 [8] is used in the Dell PowerEdge T440. It supports 6 channels of DDR4 RAM which is a very nice feature for bigger systems. According to one Dell web page the best CPU Dell officially supports for this is the Xeon Gold 5120 which gives performance only slightly better than the E5-2683 v4 which has a low enough TDP that a T430 can run two of them. But according to another Dell web page they support 16 core CPUs which means performance better than a T430 but less than a HP z840. The T440 doesn’t seem like a great system, if I got one cheap I could find a use for it but I wouldn’t pay the prices that they go for on ebay. The Dell PowerEdge T640 has the same socket and is described as supporting up to 28 core CPUs. But I anticipate that it would be as loud as the T630 and it’s also expensive.

This socket is also used in the HP Z6 G4 which takes a W-3265 or Xeon Gold 6258R CPU for the high end options. The HP Z6 G4 systems on ebay are all above $1500 and the Xeon Gold 6258R is also over $1000, so while the Xeon Gold 6258R in a Z6 G4 will give 50% better performance on multithreaded operations than the systems I currently have, it costs almost 3* as much. It has 6 DIMM sockets which is a nice improvement over the 4 in the z640. The Z6 G4 takes a maximum of 768G of RAM with the optional extra CPU board (which is very expensive both new and on ebay) compared to my z840 which has 512G and half its DIMM slots empty. The HP Z8 G4 has the same socket and takes up to 3TB of RAM if used with CPUs that support it (most CPUs only support 768G and you need an "M" variant to support more). The higher performance CPUs supported in the Z6 G4 and Z8 G4 don't have enough entries in the Passmark database to be accurate, but going from 22 cores in the E5-2699A v4 to 28 in the Xeon Platinum 8180 when using the same RAM technology doesn't seem like a huge benefit. The Z6 and Z8 G4 systems run DDR4 RAM at up to 2666 speed while the z640 and z840 only go up to 2400; a 10% increase in RAM speed is nice but not a huge difference.

I don’t think that any socket LGA3647 systems will ever be ones I want to buy. They don’t offer much over LGA2011-v3 but are in newer and fancier systems that will go for significantly higher prices.

DDR5

I think that DDR5 systems will be my next step up in tower server and workstation performance after the socket LGA2011-v3 systems. I don’t think anything less will offer me enough of a benefit to justify a change. I also don’t think that they will be in the price range I am willing to pay until well after DDR6 is released, some people are hoping for DDR6 to be released late this year but next year seems more likely. So maybe in 2027 there will be some nice DDR5 systems going cheap.

CPU Benchmark Results

Here are the benchmark results of CPUs I mentioned in this post according to passmark.com [9]. I didn’t reference results of CPUs that only had 1 or 2 results posted as they aren’t likely to be accurate.

CPU Single Thread Multi Thread TDP
E5-2683 v4 1,713 17,591 120W
Xeon Gold 5120 1,755 18,251 105W
i9-9900K 2,919 18,152 95W
E5-2697A v4 2,106 21,610 145W
E5-2699A v4 2,055 26,939 145W
W-3265 2,572 30,105 205W
W-2295 2,642 30,924 165W
i9-10980XE 2,662 32,397 165W
Xeon Gold 6258R 2,080 40,252 205W
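The price/performance comparison made above can be reduced to simple arithmetic. A small sketch using the multi-thread scores from the table and the approximate second-hand prices quoted in the post (the option names and totals are illustrative, taken from the text):

```python
# Rough price/performance comparison: Passmark multi-thread score per
# dollar, using system + CPU prices mentioned in the post.

options = {
    # name: (multi-thread Passmark score, approx. total cost)
    "z640 + E5-2699A v4": (26939, 300 + 400),
    "Precision 5820 + W-2295": (30924, 500 + 1000),
}

for name, (score, cost) in options.items():
    print(f"{name}: {score / cost:.1f} Passmark points per dollar")
```

The roughly 2:1 ratio in points per dollar is what makes the older LGA2011-v3 systems hard to beat second hand.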

02 August, 2025 11:43AM by etbe

August 01, 2025

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 9/n

Context

Sidequest: Install u-boot-menu

Don't be like me and reboot without configuring u-boot-menu. Although the defaults make sense for most people, in my case I lost access to the serial console (because that custom config needed to be re-done), and the default delay was not enough to choose a backup kernel. In consfigurator notation:

  (on-change
      (file:has-content "/etc/u-boot-menu/conf.d/reform.conf"
        '("U_BOOT_TIMEOUT=50"
          "U_BOOT_PARAMETERS=\"ro no_console_suspend cryptomgr.notests \\${bootargs} console=ttyS2,1500000 keep_bootcon console=tty1\""))
    (cmd:single "u-boot-update"))

The panel, alive.

  • Thanks to a hint from joschc (and a bit of luck) I realized the
    issue I filed was nonsense. Yay?

  • The panel driver is not added by the rk3588 patches (since the build process applies all the patches, this is not a problem for building from reform-debian-packages).

  • After applying the two patches in reform-debian-packages/linux/patches6.15/imx8mp-mnt-pocket-reform/pocket-panel, the patched 6.16 kernel boots, and seems to work, including the panel.

  • The updated source is on branch reform-patches at

    https://salsa.debian.org/bremner/collabora-rockchip-3588

  • Unsurprisingly hibernate is not working out of the box with 6.16. My next mission is to apply the recommended pci-reset patches on top of 6.16.

previous episode next episode

01 August, 2025 11:20PM

Hibernate on the pocket reform 10/n

Context

Finally applying the pci reset series.

$ b4 am 20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com
$ git am -3 v6_20250715_manivannan_sadhasivam_pci_add_support_for_resetting_the_root_ports_in_a_platform_specifi.mbx

There is quite a scary looking conflict between the last patch in the series and https://lore.kernel.org/r/1744940759-23823-1-git-send-email-shawn.lin@rock-chips.com which is now upstream (collabora) in rockchip-devel. I resolved the second basically by taking both, as it seemed like two independent sets of additions to the same parts of the file. The first it looks like Shawn's commit referenced above should prevail.

  • If anyone is curious about the (possibly incorrectly) rebased patches, they are at

    https://salsa.debian.org/bremner/collabora-rockchip-3588

    (reform-patches is the default, and relevant branch).

testing

  • The new (6.16~rc7+) kernel boots
  • It successfully reboots

  • devices test passes, although the UBSAN warning / error is still there

[  174.559032] UBSAN: array-index-out-of-bounds in net/mac80211/rc80211_minstrel_ht.c:409:33
[  174.559830] index 15 is out of range for type 'minstrel_rate_stats [10]'
[  174.560462] CPU: 7 UID: 0 PID: 213 Comm: kworker/u32:10 Tainted: G        WC OE       6.16.0-rc7+ #6 NONE
[  174.560470] Tainted: [W]=WARN, [C]=CRAP, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[  174.560472] Hardware name: MNT Pocket Reform with RCORE RK3588 Module (DT)
[  174.560474] Workqueue: mt76 mt76u_tx_status_data [mt76_usb]
[  174.560489] Call trace:
[  174.560491]  show_stack+0x34/0x98 (C)
[  174.560501]  dump_stack_lvl+0x60/0x80
[  174.560508]  dump_stack+0x18/0x24
[  174.560514]  ubsan_epilogue+0x10/0x48
[  174.560520]  __ubsan_handle_out_of_bounds+0xa0/0xd0
[  174.560526]  minstrel_ht_tx_status+0x890/0xc68 [mac80211]
[  174.560633]  rate_control_tx_status+0xbc/0x180 [mac80211]
[  174.560730]  ieee80211_tx_status_ext+0x1d8/0x9a0 [mac80211]
[  174.560822]  mt76_tx_status_unlock+0x188/0x2a0 [mt76]
[  174.560844]  mt76x02_send_tx_status+0x130/0x4a0 [mt76x02_lib]
[  174.560860]  mt76x02_tx_status_data+0x64/0xa8 [mt76x02_lib]
[  174.560872]  mt76u_tx_status_data+0x84/0x120 [mt76_usb]
[  174.560879]  process_one_work+0x178/0x3c8
[  174.560885]  worker_thread+0x208/0x400
[  174.560890]  kthread+0x120/0x220
[  174.560894]  ret_from_fork+0x10/0x20
[  174.560898] ---[ end trace ]---
  • "platform" test still fails with
[   88.484072] rk_gmac-dwmac fe1b0000.ethernet end0: Link is Down
[   88.597026] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[   88.598523] PM: hibernation: hibernation debug: Waiting for 5 second(s).
[   94.667723] rockchip-dw-pcie a40c00000.pcie: Phy link never came up
[   94.668281] rockchip-dw-pcie a40c00000.pcie: fail to resume
[   94.668783] rockchip-dw-pcie a40c00000.pcie: PM: dpm_run_callback(): genpd_restore_noirq returns -110
[   94.669594] rockchip-dw-pcie a40c00000.pcie: PM: failed to restore noirq: error -110
[  120.035426] watchdog: CPU4: Watchdog detected hard LOCKUP on cpu 5
[  120.035978] Modules linked in: xt_CHECKSUM xt_tcpudp nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat bridge stp llc nf_tables aes_neon_bs aes_neon_blk ccm snd_seq_dummy snd_hrtimer snd_seq snd_seq_device dwmac_rk binfmt_misc mt76x2_common mt76x02_usb mt76_usb mt76x02_lib mt76 mac80211 libarc4 snd_soc_simple_card rockchip_saradc industrialio_triggered_buffer cfg80211 snd_soc_tlv320aic31xx rk805_pwrkey kfifo_buf reform2_lpc(OE) industrialio rockchip_thermal rfkill rockchip_rng hantro_vpu cdc_acm rockchip_rga v4l2_vp9 snd_soc_rockchip_i2s_tdm rockchip_vdec2 panthor videobuf2_dma_sg v4l2_jpeg drm_gpuvm v4l2_h264 drm_exec snd_soc_audio_graph_card snd_soc_simple_card_utils joydev evdev dm_mod nvme_fabrics efi_pstore configfs nfnetlink ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 btrfs blake2b_generic xor xor_neon raid6_pq mali_dp snd_soc_meson_axg_toddr snd_soc_meson_axg_fifo snd_soc_meson_codec_glue panfrost drm_shmem_helper gpu_sched ao_cec_g12a meson_vdec(C)
[  120.036066]  videobuf2_dma_contig hid_generic videobuf2_memops v4l2_mem2mem videobuf2_v4l2 videodev videobuf2_common mc dw_hdmi_i2s_audio meson_drm meson_canvas meson_dw_mipi_dsi meson_dw_hdmi usbhid hid mxsfb mux_mmio panel_edp imx_dcss ti_sn65dsi86 nwl_dsi mux_core pwm_imx27 xhci_plat_hcd xhci_hcd onboard_usb_dev snd_soc_hdmi_codec snd_soc_core micrel snd_pcm_dmaengine nvme snd_pcm nvme_core snd_timer snd nvme_keyring nvme_auth soundcore stmmac_platform stmmac pcs_xpcs phylink mdio_devres of_mdio sdhci_of_dwcmshc fixed_phy sdhci_pltfm phy_rockchip_usbdp dw_mmc_rockchip fwnode_mdio ehci_platform typec phy_rockchip_samsung_hdptx phy_rockchip_naneng_combphy rk808_regulator pwm_rockchip dwc3 dw_wdt libphy fan53555 ohci_platform sdhci ehci_hcd ulpi rtc_pcf8523 dw_mmc_pltfm udc_core ohci_hcd dw_mmc cqhci mdio_bus rockchip_dfi rockchipdrm dw_hdmi_qp analogix_dp i2c_rk3x usbcore phy_rockchip_inno_usb2 dw_mipi_dsi dw_mipi_dsi2 usb_common cpufreq_dt drm_dp_aux_bus [last unloaded: mt76x2u]
[  120.036150] Sending NMI from CPU 4 to CPUs 5:
  • The results are similar if I uncomment the unloading of the dwc3 module
set -x
# run a "platform" test: go through the full hibernate sequence but stop
# just short of powering off, then resume
echo platform > /sys/power/pm_test
echo reboot > /sys/power/disk
sleep 2
# unload the wifi driver before suspending
rmmod mt76x2u
sleep 2
#rmmod dwc3
#sleep 2
# trigger hibernation
echo disk > /sys/power/state
sleep 2
#modprobe dwc3
#sleep 2
modprobe mt76x2u
  • Unsurprisingly, if I try an actual resume (instead of a "platform" test), I get the same messages about "Phy link never came up" and the system needs a hard reboot after trying to resume.

  • Barring inspiration, my next move will be to report my lack of success to the appropriate kernel mailing list(s).

previous episode

01 August, 2025 11:20PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

School of Computing Technical Reports

(You wait ages for an archiving blog post and two come along at once!)

Between 1969-2019, the Newcastle University School of Computing published a Technical Reports Series. Until 2017-ish, the full list of individually-numbered reports was available on the School's website, as well as full text PDFs for every report.

At some time around 2014 I was responsible for migrating the School's website from self-managed to centrally-managed. The driver was to improve the website from the perspective of student recruitment. The TR listings (as well as full listings and texts for awarded PhD theses, MSc dissertations, Director's reports and various others) survived the initial move. After I left (as staff) in 2015, anything not specifically about student recruitment degraded and by 2017 the listings were gone.

I've been trying, on and off, to convince different parts of the University to restore and take ownership of these lists ever since. For one reason or another each avenue I've pursued has gone nowhere.

Recently the last remaining promising way forward failed, so I gave up and did it myself. The list is now hosted by the Historic Computing Committee, here:

https://nuhc.ncl.ac.uk/computing/techreports/

It's not complete (most of the missing entries are towards the end of the run), but it's a start. The approach that finally yielded results was simply scraping the Internet Archive Wayback Machine for various pages from back when the material was represented on the School website, and then filling in the gaps from some other sources.

What I envisage in the future: per-page reports with the relevant metadata (including abstracts); authors de-duplicated and cross-referenced; PDFs OCRd; providing access to the whole metadata DB (probably as a lump of JSON); a mechanism for people to report errors; a platform for students to perform data mining projects, perhaps some kind of classification/tagging by automated content analysis; cross-referencing copies of papers in other venues (lots of TRs are pre-prints).

01 August, 2025 03:55PM

Debian Chronicles

I recently learned that, about 6 months ago, the Debian webteam deleted all news articles from the main website older than 2022. There have been several complaints from people in and outside of Debian, notably Joe Brockmeier of LWN, and this really sad one from the nephew of a deceased developer, wondering where the obituary had gone, but the team have not been swayed and are not prepared to reinstate the news.

It feels very important to me, too, that historic news, and their links, are not broken. So, I hastily built a new Debian service, The Chronicles of Debian, as a permanent home for historic web content.

$ HEAD -S -H "Accept-Language: de" https://www.debian.org/News/1997/19971211
HEAD https://www.debian.org/News/1997/19971211
302 Found
HEAD https://chronicles.debian.org/www/News/1997/19971211
200 OK
…
Content-Language: de
Content-Location: 19971211.de.html
…
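The 302 above encodes a simple path scheme: old www.debian.org paths reappear under a /www/ prefix on chronicles.debian.org. A minimal sketch of that mapping (the helper name is mine, not part of the service):

```python
from urllib.parse import urlsplit, urlunsplit

def chronicles_url(url: str) -> str:
    """Map an old www.debian.org URL to its chronicles.debian.org home,
    mirroring the 302 redirect shown above."""
    parts = urlsplit(url)
    return urlunsplit(
        ("https", "chronicles.debian.org", "/www" + parts.path,
         parts.query, parts.fragment)
    )

print(chronicles_url("https://www.debian.org/News/1997/19971211"))
# -> https://chronicles.debian.org/www/News/1997/19971211
```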

This was thrown up in a hurry to get something working as fast as possible, and there is plenty of room for improvement. Get in touch if there's an enhancement you would like or you would like to get involved!

01 August, 2025 12:23PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities July 2025

Another short status update of what happened on my side last month - a lot shorter than usual due to real life events (that will also affect August), but there was some progress on stevia and it has now landed in Debian too.

See below for details on the above and more:

phosh

  • Use the new, rust-based phrosh portal too (MR)
  • Consistently format meson files (MR)

phoc

  • Add sysprof support (MR)
  • Reject input based on shell's state (MR)
  • Avoid zero serial (MR)
  • Allow to damage whole output on each frame (MR)
  • Avoid possible crash on unlock (MR)

phosh-mobile-settings

  • Use newer gmobile and make CI more flexible (MR)
  • Fix nightly build (MR)
  • Allow to configure the OSK's automatic scaling properties (MR)

stevia (formerly phosh-osk-stub)

  • Portrait keyboard scaling (MR)
  • Fix translation of completer descriptions in mobile settings (MR)
  • Use key-pressed events (MR)
  • Fix additional completions like emojis with hunspell completer (MR)
  • Document layout testing (MR)

phosh-vala-plugins

  • Drop vapi files, they've made it into a phosh release now (MR)

xdg-desktop-portal-phosh

  • Bump rustc dependency and simplify CI (MR)

feedbackd-device-themes

  • Add key-{pressed,released} (MR)

livi

  • Make single click play/pause video (MR)
  • Update screenshot and metinfo for better display on flathub (MR)
  • Release 0.3.2 (MR)
  • Update on Flathub (MR)

Debian

  • Upload current stevia to experimental
  • phosh: Don't forget to install vapi files (MR)
  • meta-phosh: Update to 0.48.0: (MR)
  • Update to stevia 0.48 (MR)
  • Update xkbcommon to 0.10.0 (MR)
  • iio-sensor-proxy: Backport buffer mode fixes for trixie (MR), Unblock request
  • livi: Update to 0.3.2 (MR)

foliate

  • Don't let session go idle when in fullscreen (MR)

Cellbroadcastd

  • Fix packaging build (MR)

git-buildpackage

  • pull: Allow to convert local repo when remote got switched to DEP-14 (MR)

wayland-protocols

  • Respin cutout protocol MR

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh-mobile-settings: Disable Xwayland to help e.g. distrobox (MR) - merged
  • phosh-mobile-settings: Allow to search (MR) - merged
  • phosh-mobile-settings: Allow to configure terminal layout shortcuts (MR) - merged
  • feedbackd: Legacy led support (MR) - merged
  • phosh: upcoming-events: Allow to hide days without events (MR)
  • m-b-p-i: Add emergency numbers for JP (MR)
  • xdg-desktop-portal-phosh: bootstrap pure rust portal (MR) - merged
  • xdg-desktop-portal-phosh: portal avoidance (MR) - merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 August, 2025 10:37AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

What a surprise it's August already.

What a surprise it's August already.

01 August, 2025 07:19AM by Junichi Uekawa

Birger Schacht

Status update, July 2025

At the beginning of July I got my 12" framework laptop and installed Debian on it. During that setup I made some updates to my base setup scripts that I use to install Debian machines.

Due to the freeze I did not do much package related work. But I was at DebConf and I uploaded a new release of labwc to experimental, mostly to test the tag2upload workflow.

I started working on packaging wlr-sunclock, which is a small Wayland widget that displays the sun's shadows on the earth. I also created an ITP for wayback, an X11 compatibility layer that allows running X11 desktop environments on Wayland.

In my dayjob I did my usual work on apis-core-rdf, which is our Django application for managing prosopographic data. I implemented a password change interface and did some restructuring of the templates. We released a new version which was followed by a bugfix release a couple of days later.

I also implemented a rather big refactoring in pfp-api. PFP-API is a FastAPI based REST API that uses rdfproxy to fetch data from a Triplestore, converts the data to Pydantic models and then ships the models as JSON. Most of the work is done by rdfproxy in the background, but I adapted the existing pfp-api code to make it easier to add new entity types.

01 August, 2025 05:28AM

Paul Wise

FLOSS Activities July 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Sponsors

All work was done on a volunteer basis.

01 August, 2025 02:24AM

Iustin Pop

Our Grand Japan 2025 vacation is over 😭

As I’m writing this, we’re one hour away from landing, and thus our Grand (with a capital G for sure) Japan 2025 vacation is over. Planning started about nine months ago, plane tickets were bought six months in advance, most hotels were booked about four months ahead, and then came a wonderful, even if a bit packed, almost 3 weeks in Japan. And now we’re left with lots of good memories, some mishaps that we’re going to laugh about in a few months’ time, and quite a few thousand pictures to process and filter, so that they can be viewed in a single session.

Oh, and I’m also left with a nice bottle of plum wine, thanks to inflight shopping. I was planning to buy one at the airport but didn’t manage to, as Haneda International departures, after the security check, is a bit small. But in 15 hours of flying there was enough time to implement 2 tiny Corydalis features and browse the shopping catalog. I only learned on the flight that some items need to be preordered, a lesson for next time…

Thanks to the wonders of inflight internet, I can write and publish this, but since it’s not Starlink, Visual Studio Code managed to download an update for the UI, yet the remote server package is too big? too slow? and can’t be downloaded. It started the download 5 times, and aborted at about 80% each time. Thankfully my blog is lightweight and I can write it in vi and push it. And pushing the above-mentioned features to GitHub was also possible.

A proper blog post will follow, once I can select some pictures and manage to condense three weeks in an overall summary… And in the meantime, back to the real world!

01 August, 2025 12:00AM

July 31, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

Secure boot certificate rollover is real but probably won't hurt you

LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.
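
The trust decision described here is simple at heart; the following is a toy sketch of it (plain Python, with certificates reduced to name strings and none of the real X.509/Authenticode machinery):

```python
# Toy model of the UEFI secure boot trust decision: a binary is trusted
# iff its signing chain ends at a root present in db, and neither the
# binary's hash nor any certificate in the chain appears in dbx.
# Certificates are plain strings here purely for illustration.

def is_trusted(binary_hash, cert_chain, db, dbx_hashes, dbx_certs):
    """cert_chain is ordered leaf -> root."""
    if binary_hash in dbx_hashes:
        return False                      # binary revoked by hash
    if any(cert in dbx_certs for cert in cert_chain):
        return False                      # intermediate (or root) revoked
    return cert_chain[-1] in db           # chain must end at a trusted root

db = {"Microsoft Corporation UEFI CA 2011",
      "Microsoft Windows Production PCA 2011"}
chain = ["Microsoft Windows UEFI Driver Publisher",
         "Microsoft Corporation UEFI CA 2011"]

print(is_trusted("shim-hash", chain, db, set(), set()))          # True
print(is_trusted("shim-hash", chain, db, {"shim-hash"}, set()))  # False
```

Note that, matching the text, this model doesn't care how many intermediates sit between the leaf and the root, only that the chain terminates in db.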

That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.

What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.

The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual; intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting with certificates that have expired, and yet things keep working. Why?

Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.

The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.

So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.

How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)

[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs

comment count unavailable comments

31 July, 2025 04:12PM

Simon Josefsson

Independently Reproducible Git Bundles

The gnulib project publish a git bundle as a stable archival copy of the gnulib git repository once in a while.

Why? We don’t know exactly what this may be useful for, but I’m promoting this to see if we can establish some good use-case.

A git bundle may help to establish provenance in case of an attack on the Savannah hosting platform that compromises the gnulib git repository.

Another use is in the Debian gnulib package: that gnulib bundle is git cloned when building some Debian packages, to get to exactly the gnulib commit used by each upstream project – see my talk on gnulib at Debconf24 – and this approach reduces the amount of vendored code that is part of Debian’s source code, which is relevant to mitigate XZ-style attacks.

The first time we published the bundle, I wanted it to be possible to re-create it bit-by-bit identically by others.

At the time I discovered a well-written blog post by Paul Beacher on reproducible git bundles and thought he had solved the problem for me. Essentially it boils down to disabling threading during compression when producing the bundle, and his final example shows this results in predictable, bit-by-bit identical output:

$ for i in $(seq 1 100); do \
> git -c 'pack.threads=1' bundle create -q /tmp/bundle-$i --all; \
> done
$ md5sum /tmp/bundle-* | cut -f 1 -d ' ' | uniq -c
    100 4898971d4d3b8ddd59022d28c467ffca

So what remains to be said about this? It seems reproducibility goes deeper than that. One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine.

It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.

I thought the reason had to do with other sources of unpredictable data, and I explored several ways to work around this but eventually gave up. I settled for the following sequence of commands:

REV=ac9dd0041307b1d3a68d26bf73567aa61222df54 # master branch commit to package
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input
# inspect that the new tree matches a trusted copy
git checkout -B master $REV # put $REV at master
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any commits after $REV
git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle

At the time it felt more important to publish something than to reach for perfection, so we did so using the above snippet. Afterwards I reached out to the git community on this and there were good discussion about my challenge.

At the end of that thread you see that I was finally able to reproduce bit-by-bit identical bundles from two different clones, by using an intermediate git -c pack.threads=1 repack -adF step. I now assume that the unpredictable data I got earlier was introduced during the ‘git clone’ steps, compressing the pack differently each time due to threaded compression. The outcome could also depend on what content the server provided, so if someone ran git gc or git repack on the server side, things would change for the user, even if the user forced threading to 1 during cloning. More experiments on what kind of server-side alterations result in client-side differences would be good research.

A couple of months passed and it is now time to publish another gnulib bundle – somewhat paired to the bi-yearly stable gnulib branches – so let’s walk through the commands and explain what they do. First clone the repository:

REV=225973a89f50c2b494ad947399425182dd42618c   # master branch commit to package
S1REV=475dd38289d33270d0080085084bf687ad77c74d # stable-202501 branch commit
S2REV=e8cc0791e6bb0814cf4e88395c06d5e06655d8b5 # stable-202507 branch commit
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input

I believe the git fsck will validate that the chain of SHA1 commits is linked together, preventing someone from smuggling in unrelated commits earlier in the history without performing a SHA1 collision. SHA1 collisions are economically feasible today, so this isn't much of a guarantee of anything though.

git checkout -B master $REV # put $REV at master
# Add all stable-* branches locally:
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git checkout -B stable-202501 $S1REV
git checkout -B stable-202507 $S2REV
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any unrelated commits, not clear this helps

This establishes a set of branches pinned to particular commits. The older stable-* branches are no longer updated, so they shouldn't be moving targets. In case they are modified in the future, the particular commit we used will be found in the official git bundle.

time git -c pack.threads=1 repack -adF

That’s the new magic command to repack and recompress things in a hopefully more predictable way. This leads to a 72MB git pack under .git/objects/pack/ and a 62MB git bundle. The runtime on my laptop is around 5 minutes.

I experimented with -c pack.compression=1 and -c pack.compression=9 but the size was roughly the same; 76MB and 66MB for level 1 and 72MB and 62MB for level 9. Runtime still around 5 minutes.

Git uses zlib by default, which isn’t the most optimal compression around. I tried -c pack.compression=0 and got a 163MB git pack and a 153MB git bundle. The runtime is still around 5 minutes, indicating that compression is not the bottleneck for the git repack command.

That 153MB uncompressed git bundle compresses to 48MB with gzip default settings and 46MB with gzip -9; to 39MB with zst defaults and 34MB with zst -9; and to 28MB with xz defaults and 26MB with xz -9.

Still, the inconvenience of having to uncompress a 30-40MB archive into the much larger 153MB is probably not worth the savings compared to shipping and using the (still relatively modest) 62MB git bundle.
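
The zlib-versus-xz gap is easy to reproduce in miniature from Python's standard library (the sample input below is an arbitrary repetitive stand-in; real ratios depend entirely on the data):

```python
import lzma
import zlib

# A crude stand-in for pack data: highly repetitive bytes compress well
# under both algorithms, but xz (LZMA) has a much larger window and a
# stronger entropy coder than git's default zlib.
data = b"Reproducible git bundles for gnulib. " * 50_000

z = zlib.compress(data, 9)  # roughly what gzip -9 achieves
x = lzma.compress(data)     # roughly what xz achieves

print(len(data), len(z), len(x))  # the lzma output is the smallest
```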

Now finally prepare the bundle and ship it:

git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle

Yay! Another gnulib git bundle snapshot is available from
https://ftp.gnu.org/gnu/gnulib/.

The essential part of the git repack command is the -F parameter. In the thread -f was suggested, which translates into the git pack-objects --no-reuse-delta parameter:

--no-reuse-delta

When creating a packed archive in a repository that has existing packs, the command reuses existing deltas. This sometimes results in a slightly suboptimal pack. This flag tells the command not to reuse existing deltas but compute them from scratch.

When reading the man page, I thought that using -F, which translates into --no-reuse-object, would be slightly stronger:

--no-reuse-object

This flag tells the command not to reuse existing object data at all, including non deltified object, forcing recompression of everything. This implies --no-reuse-delta. Useful only in the obscure case where wholesale enforcement of a different compression level on the packed data is desired.

On the surface, without --no-reuse-object, some amount of earlier compression could taint the final result. Still, I was able to get bit-by-bit identical bundles by using -f, so possibly reaching for -F is not necessary.

All the commands were done using git 2.51.0 as packaged by Guix. I fear the result may be different with other git versions and/or zlib libraries. I was able to reproduce the same bundle on a Trisquel 12 aramo (derived from Ubuntu 22.04) machine, which uses git 2.34.1. This suggests there is some chance of this being reproducible in 20 years’ time. Time will tell.

I also fear these commands may be insufficient if something is moving on the server side of the git repository of gnulib (even something as simple as a new commit). I tried to make some experiments with this, but let’s aim for incremental progress here. At least I have now been able to reproduce the same bundle on different machines, which wasn’t the case last time.

Happy Reproducible Git Bundle Hacking!

31 July, 2025 02:50PM by simon

July 30, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (May and June 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Cordell Bloor (cgmb)
  • Enkelena Haxhija (enkelenah)

The following contributors were added as Debian Maintainers in the last two months:

  • Karsten Schöke
  • Lorenzo Puliti
  • Nick Rosbrook
  • Nicolas Peugnet
  • Yifei Zhan
  • Glenn Strauss
  • Fab Stz
  • Matheus Polkorny
  • Manuel Elias Guerra Figueroa

Congratulations!

30 July, 2025 12:00PM by Jean-Pierre Giraud

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Superimposed codes, take three

After I wrote last week that OEIS A286874 would stop at a(12) and that computing (verifying) a(13) would take about 4–5000 CPU years, the changes have finally been approved, and… the sequence includes a(13) = 26. What happened?

Well, first of all, I am indeed not a mathematical genius (the last post even forgot the “not”); I had a stupid conversion error in the estimation, causing a factor 25 or so. But the rest came from actual speedups.

First of all, I improved one of the existing symmetry detectors a bit (the one described last in the previous post was not fully rejecting the possible symmetries when multiple new bits were introduced in one value). But I also made a more universal symmetry detector; if switching the order of certain neighboring bits and re-sorting the sequence made it lexicographically smaller, then we can abort the search. This is pretty expensive and only rejects ~5% of candidates, so it's only worth it at higher levels, but it's much cheaper than checking all n! arbitrary permutations and catches maybe 90% of a full rejection. (Also, if you can reject 5% at multiple levels, those percentages tend to add up. We're down from hundreds of thousands of duplicate solutions, to only a bit over 100, so the amount of speedup available from reducing symmetries is rapidly dwindling.)

Also, surprisingly to me, before going on to run the next level, doing a population count to check if there were too few bits to ever be a solution was seemingly a large win (e.g. we have three values so far, but only 21 bits left; we can never generate a sequence larger than 24 even if all the stars align, and can abort immediately). You would think that this counting, which takes very real CPU time even with vectorization, wouldn't be worth it compared to just running through the base layers of the recursion very quickly, but evidently, it is by a large margin. I guess it's a very common case to have many more than 1 bit left but less than 26-n, and it also means you can just stop iterating a bit before you get to the end.
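
The pruning rule can be sketched in a few lines (a simplification: the function name is invented, and the real code presumably popcounts wide SIMD vectors rather than Python integers):

```python
def can_reach_target(chosen, remaining_bits, target):
    """Abort a branch early when even taking every remaining candidate
    value could not reach the target sequence length."""
    return chosen + bin(remaining_bits).count("1") >= target

# The example from the text: three values chosen, 21 candidate bits left,
# so the best case is 3 + 21 = 24 < 26 and the branch can be abandoned.
remaining = (1 << 21) - 1                  # 21 set bits
print(can_reach_target(3, remaining, 26))  # False: abort immediately
print(can_reach_target(3, remaining, 24))  # True: still worth exploring
```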

But perhaps the most impactful optimization was a microoptimization. Recall that we spent most of our time ANDing 8192-bit vectors (which would be 16384-bit vectors for a(13)) with each other. Some looking at performance metrics suggested that the RAM bandwidth was completely maxed out, with ~80% of theoretical bandwidth in use; only faster RAM or more memory channels would have made a reasonable dent in the performance of this kind of architecture.

But pretty early, most of those bits will be zero. If you've already decided on the first five values in a sequence, you will not have 8187 options left; in most cases, you'll have more like 3–400. And since the bit sets only ever shrink, we can simply compress away all those known zeros. For most of our purposes, it doesn't really matter what each bit signifies (an important exception is the point where we have a valid solution and need to print it out, but it's not hard to store the mapping), as we mostly use the values for looking up pregenerated vectors to AND together. This means that when we start a new sub-job, we can find which future values are possible, and then map those into new numbers 0 through 511 (or whatever). This means we can use 512-bit vectors instead of 8192-bit vectors, with all the obvious advantages; less ALU work, less memory traffic, and better cache locality. (It's interesting that we started by being extremely ALU-bound, then moved to being very RAM-bound, and then ended up in fairly normal territory.)
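
The zero-compression step boils down to remapping the surviving bit positions into a dense index space. A small sketch (Python with made-up names and a 16-bit toy width instead of 8192 bits):

```python
def compress_bitset(valid_mask, width):
    """Map the set-bit positions of valid_mask to dense indices 0..n-1,
    returning (old->new dict, new->old list); later ANDs can then run on
    n-bit values instead of width-bit ones."""
    new_to_old = [i for i in range(width) if (valid_mask >> i) & 1]
    old_to_new = {old: new for new, old in enumerate(new_to_old)}
    return old_to_new, new_to_old

def remap(bits, old_to_new):
    """Rewrite a bitset over the old positions into the dense space."""
    out = 0
    for old, new in old_to_new.items():
        if (bits >> old) & 1:
            out |= 1 << new
    return out

# Only 4 of 16 positions remain valid after fixing the first few values.
valid = (1 << 3) | (1 << 7) | (1 << 8) | (1 << 15)
old_to_new, new_to_old = compress_bitset(valid, 16)
print(new_to_old)                               # [3, 7, 8, 15]: the stored mapping
print(remap((1 << 3) | (1 << 15), old_to_new))  # 9: dense bits 0 and 3 set
```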

Of course, there are situations where you could have more than 512 valid values. In that case, you can either recompile with larger bit sets (typically a multiple of 128, to get good use of SIMD), or you can split into smaller sub-jobs; find all valid ways of extending the sequence by one element (trivial; we already did that to make the bit sets), and then make one job for each. This splitting is also good for variance; no longer do you have some sub-jobs that finish in milliseconds and some that require days.

There are some downsides too, of course. In particular, we can no longer pregenerate one universal 8192*8192*8192 bit LUT (well, 8192*8191/2*8192); every sub-job needs to make its own set of LUTs before starting. But since this is O(n³) and we just cut n from 8192 to 512, it's not really a blocker (although of course far from zero); and importantly, it cuts our total RAM usage. For n=8192, we already needed a bit over 32 GB (though sharable between all jobs), and each next element in the sequence (a(13), a(14), etc.) is a factor 8 extra, so it starts becoming a real problem fast. But on the flipside, I think this extra precalc makes the algorithm much less amenable to a theoretical GPU implementation (~8 MB private data per instance, as opposed to one large shared static pool of constants and then just 1 kB of state per instance), which would otherwise be nontrivial but probably possible (the problem itself is so parallel). Interestingly enough, it's possible to use bitslicing to speed up this precalc, which is a technique I cannot remember when I last used.

All in all, it took only about 114 CPU-days (or, well, thread-days, as hyperthreading now makes sense again) to calculate a(13), which was eminently possible; and many of the optimizations came late in the process, so a rerun would be faster than that. So, could we get to a(14)? Well, maybe. I'm less convinced that it would be impossible than I was with a(13) earlier. :-) But I started looking at it, and it turns out there are literally trillions (possibly more!) of sub-jobs if you want to split deeply enough to get each down into the 512-bit range. And even at ~8 ms per core per job (ignoring the cost of splitting and just looking at the cost of processing the jobs themselves), it just becomes too unwieldy for me, especially since Postgres isn't really that great at storing billions of rows efficiently. But impossible? Definitely not.

30 July, 2025 08:32AM

July 28, 2025

Dimitri John Ledkov

Achieving actually full disk encryption of UEFI ESP at rest with TCG OPAL, FIPS, LUKS

Achieving full disk encryption using FIPS, TCG OPAL and LUKS to encrypt UEFI ESP on bare-metal and in VMs

Many security standards such as CIS and STIG require protecting information at rest. For example, NIST SP 800-53r5 SC-28 advocates using cryptographic protection, offline storage and TPMs to enhance protection of information confidentiality and/or integrity.

Traditionally, to satisfy such controls on portable devices such as laptops, one would utilize software-based full disk encryption: Mac OS X FileVault, Windows BitLocker, Linux cryptsetup LUKS2. In cases where FIPS cryptography is required, an additional burden would be placed onto these systems to operate their kernels in FIPS mode.

The Trusted Computing Group works on establishing many industry standards and specifications, which are widely adopted to improve the safety and security of computing whilst keeping it easy to use. One of their most famous specifications is TCG TPM 2.0 (Trusted Platform Module). TPMs are now widely available on most devices and help to protect secret keys and attest systems. For example, most software full disk encryption solutions can utilise a TCG TPM to store full disk encryption keys, providing passwordless, biometric or PIN-based ways to unlock the drives, as well as attesting that the system has not been modified or compromised whilst offline.

TCG Storage Security Subsystem Class: Opal Specification is a set of specifications for features of data storage devices. The authors of and contributors to Opal are leading and well-trusted storage manufacturers such as Samsung, Western Digital, Seagate Technologies, Dell, Google, Lenovo, IBM and Kioxia, among others. One of the features the Opal Specification enables is self-encrypting drives, which become very powerful when combined with pre-boot authentication. Out of the box, such drives always and transparently encrypt all disk data using hardware acceleration. To protect the data one can enter the UEFI firmware setup (BIOS) and set an NVMe single user password (or user + administrator/recovery passwords) to encrypt the disk encryption key. If one's firmware didn't come with such features, one can also use SEDutil to inspect and configure all of this. The latest releases of major Linux distributions already package SEDutil.

Once a password is set, pre-boot authentication will prompt for it on startup, prior to booting any operating system. This means the full disk is actually encrypted, including the UEFI ESP and all operating systems installed in dual or multi-boot setups. This also prevents tampering with the ESP, UEFI bootloaders and kernels, which with traditional software-based encryption often remain unencrypted and accessible. It also means one doesn't have to do special OS-level repartitioning or installation steps to ensure all data is encrypted at rest.

What about FIPS compliance? Well, the good news is that the majority of OPAL-compliant hard drives and/or security sub-chips have FIPS 140-3 certification, meaning they have been tested by independent laboratories to ensure they do in fact encrypt data. On the CMVP website one can search for the module name terms "OPAL" or "NVMe", or the name of a hardware vendor, to locate FIPS certificates.

Are such drives widely available? Yes. For example, a common ThinkPad X1 gen 11 ships with Opal NVMe drives as standard, and they have FIPS certification too. It is therefore likely that such drives are already widespread in your hardware fleet. Use sedutil to check whether the MediaEncrypt and LockingSupported features are available.
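For instance (the device path is illustrative; sedutil addresses the NVMe controller, e.g. /dev/nvme0, rather than a namespace), a quick capability check might look like:

```shell
# List attached drives and whether they advertise Opal support.
sudo sedutil-cli --scan

# Query a specific drive and pull out the relevant capability flags;
# "LockingSupported = Y" and "MediaEncrypt = Y" indicate a usable
# self-encrypting drive.
sudo sedutil-cli --query /dev/nvme0 | grep -E 'LockingSupported|MediaEncrypt'
```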

Well, this is great for laptops and physical servers, but you may ask: what about public or private cloud? Actually, more or less the same is already in place in both. On the CMVP website, all major clouds have their disk encryption hardware certified, and all of them always encrypt all virtual machines with FIPS-certified cryptography, without an ability to opt out. One is, however, in full control of how the encryption keys are managed: cloud-provider-managed or self-managed (either with a cloud HSM or KMS, or bring-your-own/external). See the relevant encryption options and key management docs for GCP, Azure, and AWS. The key takeaway: without doing anything, VMs in the public cloud are always encrypted at rest and satisfy NIST SP 800-53 controls.

What about private cloud? Most Linux-based private clouds ultimately use QEMU, typically with qcow2 virtual disk images. QEMU supports userspace encryption of qcow2 disks; see this manpage. Such encryption covers the full virtual machine disk, including the bootloader and ESP, and it is handled entirely outside of the VM on the host, meaning the VM never has access to the disk encryption keys. QEMU implements this encryption entirely in userspace using GnuTLS, nettle, or libgcrypt, depending on how it was compiled. This also means one can satisfy FIPS requirements entirely in userspace, without a Linux kernel in FIPS mode. Higher-level APIs built on top of QEMU also support qcow2 disk encryption, as in projects such as libvirt and OpenStack Cinder.

If you carefully read the docs, you may notice that agent support is sometimes explicitly called out as not supported, or is not mentioned at all. Quite often, agents running inside the OS do not have enough observability to assess whether there is external encryption. This means that monitoring the above encryption options requires different approaches: for example, monitoring your cloud configuration using tools such as Wiz and Orca, rather than using agents inside individual VMs. For laptop/endpoint security agents, I do wish they would start gaining the capability to report Opal SED availability and whether it is active.

What about using software encryption nonetheless, on top of the above solutions? This is commonly referred to as double or multiple encryption. There will be an additional performance impact, but it can be worthwhile. It really depends on what you define as data at rest for yourself and which controls you need. If one has a dual-boot laptop and wants to keep one OS encrypted whilst booted into the other, it can be perfectly reasonable to encrypt the two using separate software encryption keys, in addition to the Opal encryption of the ESP. For more targeted per-file/per-folder encryption, one can look into gocryptfs, which is the best successor to the once popular but now deprecated eCryptfs (an amazing tool, but one that has fallen behind in development and can lead to data loss).
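A minimal gocryptfs session (directory names here are arbitrary) looks like the following; note that gocryptfs stores its ciphertext in an ordinary directory, so the encrypted backing store can live on any filesystem, including one already protected by Opal or LUKS:

```shell
# One-time initialisation of the encrypted backing directory
# (prompts for a password; -extpass/-passfile exist for scripting).
mkdir -p ~/vault.enc ~/vault
gocryptfs -init ~/vault.enc

# Mount: a decrypted view of ~/vault.enc appears at ~/vault via FUSE.
gocryptfs ~/vault.enc ~/vault

# ...work with plaintext files under ~/vault...

# Unmount when done; only ciphertext remains on disk.
fusermount -u ~/vault
```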

All of the above mostly concerns cryptographic encryption, which provides only confidentiality, not data integrity. To protect integrity, one needs to choose how to maintain it. dm-verity is a good choice for read-only and rigid installations. For read-write workloads, it may be easier to deploy ZFS or Btrfs instead. If one is using a filesystem without built-in integrity support, such as XFS or ext4, one can retrofit an integrity layer using dm-integrity (either standalone, or via cryptsetup's --integrity option).
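As a sketch (the device node is a placeholder, and formatting it is destructive), the two dm-integrity retrofit options look roughly like this with the standard tooling:

```shell
# Standalone dm-integrity: detects silent data corruption on an
# otherwise unencrypted device.
sudo integritysetup format /dev/sdX
sudo integritysetup open /dev/sdX integ0
# A filesystem (e.g. ext4 or XFS) then goes on /dev/mapper/integ0.

# Or combined with LUKS2 via cryptsetup, giving sectors that are both
# encrypted and integrity-protected:
sudo cryptsetup luksFormat --type luks2 \
  --cipher aes-xts-plain64 --integrity hmac-sha256 /dev/sdX
```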

If one has a large estate and a lot of encryption keys to keep track of, a key management solution is likely needed. The most popular is probably the one from Thales Group, marketed as the CipherTrust Data Security Platform (previously Vormetric), but there are many others, including OEM/vendor/hardware/cloud-specific and agnostic solutions.

I hope this crash-course guide piques your interest to learn about and discover modern confidentiality and integrity solutions, and to re-affirm or change your existing controls with respect to data protection at rest.

Full disk encryption, including the UEFI ESP (/boot/efi), is now widely achievable by default on both bare-metal machines and in VMs, including with FIPS certification. To discuss more, let's connect on LinkedIn.

28 July, 2025 11:13AM by Dimitri John Ledkov (noreply@blogger.com)

Russ Allbery

Review: Cyteen

Review: Cyteen, by C.J. Cherryh

Series: Cyteen #1
Publisher: Warner Aspect
Copyright: 1988
Printing: September 1995
ISBN: 0-446-67127-4
Format: Trade paperback
Pages: 680

The main text below is an edited version of my original review of Cyteen written on 2012-01-03. Additional comments from my re-read are after the original review.

I've reviewed several other C.J. Cherryh books somewhat negatively, which might give the impression I'm not a fan. That is an artifact of when I started reviewing. I first discovered Cherryh with Cyteen some 20 years ago, and it remains one of my favorite SF novels of all time. After finishing my reading for 2011, I was casting about for what to start next, saw Cyteen on my parents' shelves, and decided it was past time for my third reading, particularly given the recent release of a sequel, Regenesis.

Cyteen is set in Cherryh's Alliance-Union universe following the Company Wars. It references several other books in that universe, most notably Forty Thousand in Gehenna but also Downbelow Station and others. It also has mentions of the Compact Space series (The Pride of Chanur and sequels). More generally, almost all of Cherryh's writing is loosely tied together by an overarching future history. One does not need to read any of those other books before reading Cyteen; this book will fill you in on all of the politics and history you need to know. I read Cyteen first and have never felt the lack.

Cyteen was at one time split into three books for publishing reasons: The Betrayal, The Rebirth, and The Vindication. This is an awful way to think of the book. There are no internal pauses or reasonable volume breaks; Cyteen is a single coherent novel, and Cherryh has requested that it never be broken up that way again. If you happen to find all three portions as your reading copy, they contain all the same words and are serviceable if you remember it's a single novel under three covers, but I recommend against reading the portions in isolation.

Human colonization of the galaxy started with slower-than-light travel sponsored by the private Sol Corporation. The inhabitants of the far-flung stations and the crews of the merchant ships that supplied them have formed their own separate cultures, but initially remained attached to Earth. That changed with the discovery of FTL travel and a botched attempt by Earth to reassert its authority. At the time of Cyteen, there are three human powers: distant Earth (which plays little role in this book), the merchanter Alliance, and Union.

The planet Cyteen is one of only a few Earth-like worlds discovered by human expansion, and is the seat of government and the most powerful force in Union. This is primarily because of Reseune: the Cyteen lab that produces the azi.

If Cyteen is about any one thing, it's about azi: genetically engineered human clones who are programmed via intensive psychological conditioning starting before birth. The conditioning uses a combination of drugs to make them receptive and "tape," specific patterns of instruction and sensory stimulation. They are designed for specific jobs or roles, they're conditioned to be obedient to regular humans, and they're not citizens. They are, in short, slaves.

In a lot of books, that's as deep as the analysis would go. Azi are slaves, and slavery is certainly bad, so there would probably be a plot around azi overthrowing their conditioning, or around the protagonists trying to free them from servitude. But Cyteen is not any SF novel, and azi are considerably more complex and difficult than that analysis. We learn over the course of the book that the immensely powerful head of Reseune Labs, Ariane Emory, has a specific broader purpose in mind for the azi. One of the reasons why Reseune fought for and gained the role of legal protector of all azi in Union, regardless of where they were birthed, is so that Reseune could act to break any permanent dependence on azi as labor. And yet, they are slaves; one of the protagonists of Cyteen is an experimental azi, which makes him the permanent property of Reseune and puts him in constant jeopardy of being used as a political prisoner and lever of manipulation against those who care about him.

Cyteen is a book about manipulation, about programming people, about what it means to have power over someone else's thoughts, and what one can do with that power. But it's also a book about connection and identity, about what makes up a personality, about what constitutes identity and how people construct the moral codes and values that they hold at their core. It's also a book about certainty. Azi are absolutely certain, and are capable of absolute trust, because that's part of their conditioning. Naturally-raised humans are not. This means humans can do things that azi can't, but the reverse is also true. The azi are not mindless slaves, nor are they mindlessly programmed, and several of the characters, both human and azi, find a lot of appeal in the core of certainty and deep self-knowledge of their own psychological rules that azis can have. Cyteen is a book about emotions, and logic, and where they come from and how to balance them. About whether emotional pain and uncertainty is beneficial or damaging, and about how one's experiences make up and alter one's identity.

This is also a book about politics, both institutional and personal. It opens with Ariane Emory, Councilor for Science for five decades and the head of the ruling Union Expansionist party. She's powerful, brilliant, dangerously good at reading people, and dangerously willing to manipulate and control people for her own ends. What she wants, at the start of the book, is to completely clone a Special (the legal status given to the most brilliant minds of Union). This was attempted before and failed, but Ariane believes it's now possible, with a combination of tape, genetic engineering, and environmental control, to reproduce the brilliance of the original mind. To give Union another lifespan of work by their most brilliant thinkers.

Jordan Warrick, another scientist at Reseune, has had a long-standing professional and personal feud with Ariane Emory. As the book opens, he is fighting to be transferred out from under her to the new research station that would be part of the Special cloning project, and he wants to bring his son Justin and Justin's companion azi Grant with them. Justin is a PR, a parental replicate, meaning he shares Jordan's genetic makeup but was not an attempt to reproduce the conditions of Jordan's rearing. Grant was raised as his brother. And both have, for reasons that are initially unclear, attracted the attention of Ariane, who may be using them as pawns.

This is just the initial setup, and along with this should come a warning: the first 150 pages set up a very complex and dangerous political situation and build the tension that will carry the rest of the book, and they do this by, largely, torturing Justin and Grant. The viewpoint jumps around, but Justin and Grant are the primary protagonists for this first section of the book. While one feels sympathy for both of them, I have never, in my multiple readings of the book, particularly liked them. They're hard to like, as opposed to pity, during this setup; they have very little agency, are in way over their heads, are constantly making mistakes, and are essentially having their lives destroyed.

Don't let this turn you off on the rest of the book. Cyteen takes a dramatic shift about 150 pages in. A new set of protagonists are introduced who are some of the most interesting, complex, and delightful protagonists in any SF novel I have read, and who are very much worth waiting for. While Justin has his moments later on (his life is so hard that his courage can be profoundly moving), it's not necessary to like him to love this book. That's one of the reasons why I so strongly dislike breaking it into three sections; that first section, which is mostly Justin and Grant, is not representative of the book.

I can't talk too much more about the plot without risking spoiling it, but it's a beautiful, taut, and complex story that is full of my favorite things in both settings and protagonists. Cyteen is a book about brilliant people who think on their feet. Cherryh succeeds at showing this through what they do, which is rarely done as well as it is here. It's a book about remembering one's friends and remembering one's enemies, and waiting for the most effective moment to act, but it also achieves some remarkable transformations. About 150 pages in, you are likely to loathe almost everyone in Reseune; by the end of the book, you find yourself liking, or at least understanding, nearly everyone. This is extremely hard, and Cherryh pulls it off in most cases without even giving the people she's redeeming their own viewpoint sections. Other than perhaps George R.R. Martin I've not seen another author do this as well.

And, more than anything else, Cyteen is a book with the most wonderful feeling of catharsis. I think this is one of the reasons why I adore this book and have difficulties with some of Cherryh's other works. She's always good at ramping up the tension and putting her characters in awful, untenable positions. Less frequently does she provide the emotional payoff of turning the tables, where you get to watch a protagonist do everything you've been wanting them to do for hundreds of pages, except even better and more delightfully than you would have come up with. Cyteen is one of the most emotionally satisfying books I've ever read.

I could go on and on; there is just so much here that I love. Deep questions of ethics and self-control, presented in a way that one can see the consequences of both bad decisions and good ones and contrast them. Some of the best political negotiations in fiction. A wonderful look at friendship and loyalty from several directions. Two of the best semi-human protagonists I've seen, who one can see simultaneously as both wonderful friends and utterly non-human and who put nearly all of the androids in fiction to shame by being something trickier and more complex. A wonderful unfolding sense of power. A computer that can somewhat anticipate problems and somewhat can't, and that encapsulates much of what I love about semi-intelligent bases in science fiction. Cyteen has that rarest of properties of SF novels: Both the characters and the technology meld in a wonderful combination where neither could exist without the other, where the character issues are illuminated by the technology and the technology supports the characters.

I have, for this book, two warnings. The first, as previously mentioned, is that the first 150 pages of setup is necessary but painful to read, and I never fully warmed to Justin and Grant throughout. I would not be surprised to hear that someone started this book but gave up on it after 50 or 100 pages. I do think it's worth sticking out the rocky beginning, though. Justin and Grant continue to be a little annoying, but there's so much other good stuff going on that it doesn't matter.

The other warning is that part of the setup of the story involves the rape of an underage character. This is mostly off-camera, but the emotional consequences are significant (as they should be) and are frequently discussed throughout the book. There is also rather frank discussion of adolescent sexuality later in the book. I think both of these are relevant to the story and handled in a way that isn't gratuitous, but they made me uncomfortable and I don't have any past history with those topics.

Those warnings notwithstanding, this is simply one of the best SF novels ever written. It uses technology to pose deep questions about human emotions, identity, and interactions, and it uses complex and interesting characters to take a close look at the impact of technology on lives. And it does this with a wonderfully taut, complicated plot that sustains its tension through all 680 pages, and with characters whom I absolutely love. I have no doubt that I'll be reading it for a fourth and fifth time some years down the road.

Followed by Regenesis, although Cyteen stands well entirely on its own and there's no pressing need to read the sequel.

Rating: 10 out of 10


Some additional thoughts after re-reading Cyteen in 2025:

I touched on this briefly in my original review, but I was really struck during this re-read how much the azi are a commentary on and a complication of the role of androids in earlier science fiction. Asimov's Three Laws of Robotics were an attempt to control the risks of robots, but can also be read as turning robots into slaves. Azis make the slavery more explicit and disturbing by running the programming on a human biological platform, but they're more explicitly programmed and artificial than a lot of science fiction androids.

Artificial beings and their relationship to humans have been a recurring theme of SF since Frankenstein, but I can't remember a novel that makes the comparison to humans this ambiguous and conflicted. The azi not only like being azi, they can describe why they prefer it. It's clear that Union made azi for many of the same reasons that humans enslave other humans, and that Ariane Emory is using them as machinery in a larger (and highly ethically questionable) plan, but Cherryh gets deeper into the emergent social complications and societal impact than most SF novels manage. Azi are apparently closer to humans than the famous SF examples such as Commander Data, but the deep differences are both more subtle and more profound.

I've seen some reviewers who are disturbed by the lack of a clear moral stance by the protagonists against the creation of azi. I'm not sure what to think about that. It's clear the characters mostly like the society they've created, and the groups attempting to "free" azi from their "captivity" are portrayed as idiots who have no understanding of azi psychology. Emory says she doesn't want azi to be a permanent aspect of society but clearly has no intention of ending production any time soon. The book does seem oddly unaware that the production of azi is unethical per se and, unlike androids, has an obvious exit ramp: Continue cloning gene lines as needed to maintain a sufficient population for a growing industrial civilization, but raise the children as children rather than using azi programming. If Cherryh included some reason why that was infeasible, I didn't see it, and I don't think the characters directly confronted it.

I don't think societies in books need to be ethical, or that Cherryh intended to defend this one. There are a lot of nasty moral traps that civilizations can fall into that make for interesting stories. But the lack of acknowledgment of the problem within the novel did seem odd this time around.

The other part of this novel that was harder to read past in this re-read is the sexual ethics. There's a lot of adolescent sexuality in this book, and even apart from the rape scene — which was more on-the-page than I had remembered and which is quite (intentionally) disturbing — there is a whole lot of somewhat dubious consent. Maybe I've gotten older or just more discriminating, but it felt weirdly voyeuristic to know this much about the sex lives of characters who are, at several critical points in the story, just a bunch of kids.

All that being said, and with the repeated warning that the first 150 pages of this novel are just not very good, there is still something magic about the last two-thirds of this book. It has competence porn featuring a precociously brilliant teenager who I really like, it has one of the more interesting non-AI programmed computer systems that I've read in SF, it has satisfying politics that feel like modern politics (media strategy and relationships and negotiated alliances, rather than brute force and ideology), and it has a truly excellent feeling of catharsis. The plot resolution is a bit too abrupt and a bit insufficiently explained (there's more in Regenesis), but even though this was my fourth time through this book, the pacing grabbed me again and I could barely put down the last part of the story.

Ethics aside (and I realize that's quite the way to start a sentence), I find the azi stuff fascinating. I know the psychology in this book is not real and is hopelessly simplified compared to real psychology, but there's something in the discussions of value sets and flux and self-knowledge that grabs my interest and makes me want to ponder. I think it's the illusion of simplicity and control, the what-if premise of thought where core motivations and moral rules could be knowable instead of endlessly fluid the way they are in us humans. Cherryh's azi are some of the most intriguing androids in science fiction to me precisely because they don't start with computers and add the humanity in, but instead start with humanity and overlay a computer-like certainty of purpose that's fully self-aware. The result is more subtle and interesting than anything Star Trek managed.

I was not quite as enamored with this book this time around, but it's still excellent once the story gets properly started. I still would recommend it, but I might add more warnings about the disturbing parts.

Re-read rating: 9 out of 10

28 July, 2025 03:53AM

July 27, 2025

Review: The Dragon's Banker

Review: The Dragon's Banker, by Scott Warren

Publisher: Scott Warren
Copyright: September 2019
ISBN: 0-578-55292-2
Format: Kindle
Pages: 263

The Dragon's Banker is a self-published stand-alone fantasy novel, set in a secondary world with roughly Renaissance levels of technology and primarily alchemical magic. The version I read includes an unrelated novelette, "Forego Quest." I have the vague impression that this novel shares a world with other fantasy novels by the same author, but I have not read them and never felt like I was missing something important.

Sailor Kelstern is a merchant banker. He earns his livelihood by financing caravans and sea voyages and taking a cut of the profits. He is not part of the primary banking houses of the city; instead, he has a small, personal business with a loyal staff that looks for opportunities the larger houses may have overlooked. As the story opens, he has fallen on hard times due in part to a spectacular falling-out with a previous client and is in desperate need of new opportunities. The jewel-bedecked Lady Arkelai and her quest for private banking services for her father, Lord Alkazarian, may be exactly what he needs. Or it may be a dangerous trap; Sailor has had disastrous past experience with nobles attempting to strong-arm him into their service.

Unbeknownst to Sailor, Lord Alkazarian is even more dangerous than he first appears. He is sitting on a vast hoard of traditional riches whose value is endangered by the rise of new-fangled paper money. He is not at all happy about this development. He is also a dragon.

I, and probably many other people who read this book, picked it up because it was recommended by Matt Levine as a fantasy about finance instead of the normal magical adventuring. I knew it was self-published going in, so I wasn't expecting polished writing. My hope was for interesting finance problems in a fantasy context, similar to the kind of things Matt Levine's newsletter is about: schemes for financing risky voyages, complications around competing ideas of money, macroeconomic risks from dragon hoards, complex derivatives, principal-agent problems, or something similar that goes beyond the (annoyingly superficial) treatment of finance in most fantasy novels.

Unfortunately, what I got was a rather standard fantasy setting and a plot that revolves mostly around creative uses for magical devices, some conventional political skulduggery, and a lot of energetic but rather superficial business hustling. The protagonist is indeed a merchant banker who is in no way a conventional fantasy hero (one of the most taxing parts of Sailor's occasional visits to the dragon is the long hike down to the hoard, or rather the long climb back out), but the most complex financial instrument that appears in this book is straightforward short-selling. Alas. I was looking forward to the book that I hoped this was.

Given my expectations, this was a disappointment. I kept waiting for the finances to get more complicated and interesting, and that kept not happening. Without that expectation, this is... okay, I guess. The writing is adequate but kind of stilted, presumably in an effort to make it sound slightly archaic, and has a strong self-published feel. Sailor is not a bad protagonist, but neither is he all that memorable. I did like some of the world-building, which has an attention to creative uses of bits of magic that readers who like gadget fantasy may appreciate. There are a lot of plot conveniences and coincidences, though, and very little of this is going to feel original to a long-time fantasy reader.

Putting some of the complexity of real Renaissance banking and finance systems into a fantasy world is a great idea, but I've yet to read one that lived up to the potential of the premise. (Neal Stephenson's Baroque Cycle comes the closest; unfortunately, the non-economic parts of that over-long series are full of Stephenson's worst writing habits.) Part of the problem is doubtless that I am reasonably well-read in economics, so my standards are high. Maybe the average reader would be content with a few bits on the perils of investment, a simple treatment of trust in currency, and a mention or two of short-selling, which is what you get in this book.

I am not altogether sorry that I read this, but I wouldn't recommend it. I encourage Matt Levine to read more genre fiction and find some novels with more interesting financial problems!

"Forego Quest": This included novelette, on the other hand, was surprisingly good and raised my overall rating for the book by a full point.

Arturus Kingson is the Chosen One. He is not the Chosen One of a single prophecy or set of prophecies; no, he's the Chosen One of, apparently, all of them, no matter how contradictory, and he wants absolutely nothing to do with any of them. Magical swords litter his path. He has so many scars and birthmarks that they look like a skin condition. Beautiful women approach him in bars. Mysterious cloaked strangers die dramatically in front of him. Owls try to get into his bedroom window. It's all very exhausting, since the universe absolutely refuses to take no for an answer.

There isn't much more to the story than this, but Warren writes it in the first person with just the right tone of exasperated annoyance and gives Arturus a real problem to solve and enough of a plot to provide some structure. I'm usually not a fan of parody stories because too many of them feel like juvenile slapstick. This one is sarcastic instead, which is much more to my taste.

"Forego Quest" goes on perhaps a bit too long, and the ending was not as successful as the rest of the book, but this was a lot of fun and made me laugh. (7)

Rating: 6 out of 10

27 July, 2025 03:47AM

July 26, 2025

hackergotchi for Bits from Debian

Bits from Debian

DebConf25 closes in Brest and DebConf26 announced

DebConf25 group photo

On Saturday 19 July 2025, the annual Debian Developers and Contributors Conference came to a close.

Over 443 attendees representing 50 countries from around the world came together for a combined 169 events (including some which took place during DebCamp): more than 50 talks, 39 short talks, 5 discussions, 59 Birds of a Feather sessions ("BoFs", informal meetings between developers and users), 10 workshops, and activities in support of furthering our distribution and free software, learning from our mentors and peers, building our community, and having a bit of fun.

The conference was preceded by the annual DebCamp hacking session held 7 through 13 July where Debian Developers and Contributors convened to focus on their individual Debian-related projects or work in team sprints geared toward in-person collaboration in developing Debian.

This year, a session was dedicated to preparing the BoF "Dealing with Dormant Packages: Ensuring Debian's High Standards"; another, at the initiative of our DPL, to preparing suggestions for the BoF "Package Acceptance in Debian: Challenges and Opportunities"; and an afternoon was devoted to Salsa-CI.

As has been the case for several years, a special effort was made to welcome newcomers and help them become familiar with Debian and DebConf by organizing a "New Contributors Onboarding" sprint every day of DebCamp, followed more informally by mentorship during DebConf.

The actual Debian Developers Conference started on Monday 14 July 2025.

In addition to the traditional "Bits from the DPL" talk, the continuous key-signing party, lightning talks, and the announcement of next year's DebConf26, there were several update sessions shared by internal projects and teams.

Many of the hosted discussion sessions were presented by our technical core teams, with the usual and useful "Meet the Technical Committee", the "What's New in the Linux Kernel" session, and a set of BoFs about Debian packaging policy and Debian infrastructure. More than a quarter of the discussions dealt with this theme, including talks about our tools and Debian's archive processes. Internationalization and localization were the subject of several talks. The Python, Perl, Ruby, Go, and Rust programming language teams also shared updates on their work and efforts. Several talks covered Debian Blends and Debian-derived distributions, and others addressed the issue of Debian and AI.

More than 17 BoFs and talks about community, diversity, and local outreach highlighted the work of various teams involved in not just the technical but also the social aspect of our community; four women who have made contributions to Debian through their artwork in recent years presented their work.

The one-day session "DebConf 2025 Academic Track!", organized in collaboration with the IRISA laboratory, was the first session welcoming fellow academics at DebConf, bringing together around ten presentations.

The schedule was updated each day with planned and ad hoc activities introduced by attendees over the course of the conference. Several traditional activities took place: a job fair, a poetry performance, the traditional Cheese and Wine party (this year with cider as well), the Group Photos, and the Day Trips.

For those who were not able to attend, most of the talks and sessions were broadcast live and recorded; the videos are currently made available through this link.

Almost all of the sessions facilitated remote participation via IRC and Matrix messaging apps or online collaborative text documents which allowed remote attendees to "be in the room" to ask questions or share comments with the speaker or assembled audience.

DebConf25 saw over 441 T-shirts, 3 day trips, and up to 315 meals planned per day.

All of these events, activities, conversations, and streams coupled with our love, interest, and participation in Debian and F/OSS certainly made this conference an overall success both here in Brest, France and online around the world.

The DebConf25 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf26 will be held in Santa Fe, Argentina, likely in July. As is tradition, before the next DebConf the local organizers in Argentina will start the conference activities with DebCamp, with a particular focus on individual and team work towards improving the distribution.

DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf25 website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf25, particularly our Platinum Sponsors: AMD, EDF, Infomaniak, Proxmox, and Viridien.

We also wish to thank our Video and Infrastructure teams, the DebConf25 and DebConf committees, our host nation of France, and each and every person who helped contribute to this event and to Debian overall.

Thank you all for your work in helping Debian continue to be "The Universal Operating System".

See you next year!

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, and Korea. More information about DebConf is available from https://debconf.org/.

About AMD

The AMD ROCm platform includes programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs. Debian is an officially supported platform for AMD ROCm and a growing number of components are now included directly in the Debian distribution. For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. AMD is deeply committed to supporting and contributing to open-source projects, foundations, and open-standards organizations, taking pride in fostering innovation and collaboration within the open-source community.

About EDF

EDF is a leading global utility company focused on low-carbon power generation. The group uses advanced engineering and scientific computing tools to drive innovation and efficiency in its operations, especially in nuclear power plant design and safety assessment. Since 2003, the EDF Group has been using Debian as its main scientific computing environment. Debian's focus on stability and reproducibility ensures that EDF's calculations and simulations produce consistent and accurate results.

About Infomaniak

Infomaniak is Switzerland's leading developer of Web technologies. With operations all over Europe and based exclusively in Switzerland, the company designs and manages its own data centers powered by 100% renewable energy, and develops all its solutions locally, without outsourcing. With millions of users and the trust of public and private organizations across Europe - such as RTBF, the United Nations, central banks, over 3,000 radio and TV stations, as well as numerous cities and security bodies - Infomaniak stands for sovereign, sustainable and independent digital technology. The company offers a complete suite of collaborative tools, cloud hosting, streaming, marketing and events solutions, while being owned by its employees and self-financed exclusively by its customers.

About Proxmox

Proxmox develops powerful, yet easy-to-use Open Source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are built on Debian; we are happy that they give back to the community by sponsoring DebConf25.

About Viridien

Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future. Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

Contact Information

For further information, please visit the DebConf25 web page at https://debconf25.debconf.org/ or send mail to press@debian.org.

26 July, 2025 09:50PM by Publicity team

David Bremner

Hibernate on the pocket reform 8/n

Context

Sidequest: Fix patches continued

  • 1001-pci_dw_rockchip_enable_l0s_capability.patch doesn't apply cleanly either

  • b4 am 1744594051-209255-1-git-send-email-shawn.lin@rock-chips.com

  • This one has a usable blob index 21dc99c

  • git log --raw --all --find-object=21dc99c finds the patch already applied as 198e69cc4150aba1e7af740a2111ace6a267779e

  • 1002-v2-media_verisilicon_fix_av1_decoder_clock_frequency.patch applies cleanly

Build kernel with backported patches

Back following the upstream bisect instructions from reform-debian-packages/README.md

$ apt-get install git gpg gpgv build-essential bc rsync kmod cpio bison flex libelf-dev libssl-dev debhelper libdw-dev
$ cp /boot/config-6.15.4-mnt-reform-arm64 .config
$ make olddefconfig
$ yes '' | make localmodconfig
$ make KBUILD_IMAGE=arch/arm64/boot/Image bindeb-pkg -j$(nproc)

One thing not documented there is that you need the pocket-reform dtb as well. Copy that file from reform-debian-packages, and update the relevant Makefile.

diff --git a/arch/arm64/boot/dts/rockchip/Makefile b/arch/arm64/boot/dts/rockchip/Makefile
index 26533be1dd86..83ef850cd113 100644
--- a/arch/arm64/boot/dts/rockchip/Makefile
+++ b/arch/arm64/boot/dts/rockchip/Makefile
@@ -163,6 +163,7 @@ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-h96-max-v58.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-jaguar.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-jaguar-pre-ict-tester.dtbo
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-mnt-reform2.dtb
+dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-mnt-pocket-reform.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-nanopc-t6.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-nanopc-t6-lts.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-ok3588-c.dtb
diff --git a/arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts b/arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts
new file mode 100644
index 000000000000..81533cedc200
  • With these changes I can boot into 6.16~rc6 and log in on the serial console, but the LCD display seems blank (but with some backlight power). That is probably related to the following warnings from device tree compilation:
DTC     arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts:1020.3-13: Warning (reg_format): /dsi@fde30000/panel:reg: property has invalid length (4 bytes) (#address-cells == 2, #size-cells == 1)
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (pci_device_reg): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (pci_device_bus_num): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (i2c_bus_reg): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (spi_bus_reg): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts:1018.8-1033.4: Warning (avoid_default_addr_size): /dsi@fde30000/panel: Relying on default #address-cells value
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts:1018.8-1033.4: Warning (avoid_default_addr_size): /dsi@fde30000/panel: Relying on default #size-cells value
  • The current source is on

    https://salsa.debian.org/bremner/collabora-rockchip-3588

    The branch "reform-patches" is subject to rebase (and may make your computer explode).

  • For now I'm blocked on the panel; I suspect the dts file needs an update.

previous episode|next episode

26 July, 2025 05:15PM

Birger Schacht

My DebConf 25 review

DebConf 25 happened between 14th July and 19th July and I was there. It was my first DebConf (the big one; I was at a Mini DebConf in Hamburg a couple of years ago) and it was interesting. DebConf 25 took place on a university campus on the outskirts of Brest and I was rather reluctant to go at first (EuroPython 25 was happening at the same time in Prague), but I decided to use the chance of DebConf happening in Europe, reachable by train from Vienna. We took the night train to Paris, then found our way through the maze that is the Paris underground system and then got to Brest with the TGV. On our way to the conference site we made a detour to a supermarket, which wasn’t that easy because it was a national holiday in France and most of the shops were closed. But we weren’t sure about the food situation at DebConf and we also wanted to get some beer.

At the conference we were greeted by very friendly people at the badge station and the front desk and got our badges, swag and most important the keys to pretty nice rooms on the campus. Our rooms had a small private bathroom with a toilet and a shower and between the two rooms was a shared kitchen with a refrigerator and a microwave. All in all, the accommodation was simple but provided everything we needed and especially a space to have some privacy.

During the next days I watched a lot of talks, met new people, caught up with old friends and also had a nice time with my travel buddies. There was a beach near the campus which I used nearly every day. It was mostly sunny except for the last day of the conference, which apparently was not common for the Brest area, so we got lucky regarding the weather.

Landscape view of the sea at Dellec beach

Given that we only arrived in the evening of the first day of DebConf, I missed the talk When Free Software Communities Unite: Tails, Tor, and the Fight for Privacy (recording), but I watched it on the way home and it was also covered by LWN.

On Tuesday I started the day by visiting a talk about tag2upload (recording). The same day there was also an academic track and I watched the talk titled Integrating Knowledge Graphs into the Debian Ecosystem (recording) which presented a property graph showing relationships between various entities like packages, maintainers or bugs (there is a repository with parts of a paper, but not much other information). The speaker also mentioned the graphcast framework and the ontocast framework which sound interesting - we might have use for something like this at $dayjob.

In the afternoon there was a talk about the ArchWiki (recording) which gave a comprehensive insight into how the ArchWiki and the community behind it work. Right after that was a Debian Wiki BoF. There are various technical limitations with the current wiki software and there are not enough helping hands to maintain the service and do content curation. But the BoF had some nice results: there is now a new debian-wiki mailinglist and an IRC channel, a MediaWiki installation was set up during DebConf, there are efforts to migrate the data and, most importantly, there is a handful of people who want to maintain the service and organize the content of the wiki. I think the input from the ArchWiki folks gave some ideas for how that team could operate.

Tag at the wall at Dellec beach

Wednesday was the day of the daytrip. I did not sign up for any of the trips and used the time to try out tag2upload, uploaded the latest labwc release to experimental and spent the rest of the day at the beach.

Other noteworthy sessions I attended were the Don’t fear the TPM talk (recording), which showed me a lot of stuff to try out; the session about lintian-ng (no recording), an experimental approach to making lintian faster; the review of the first year of wcurl’s existence (no recording yet); and the summary of Rust packaging in Debian (no recording yet). In between the sessions I started working on packaging wlr-sunclock (#1109230).

What did not work

Vegan food.

I might be spoiled by other conferences. Both at EuroPython last year (definitely bigger, a lot more commercial) and at PyCon CZ 23 (similar in size, a lot more DIY) there was catering with explicitly vegan options.

As I’ve mentioned in the beginning, we went to a supermarket before we went to the conference and we had to go there one more time during the conference. I think there was a mixture of a total lack of awareness and a LOT of miscommunication. The breakfasts at the conference consisted of pastries and baguettes - I asked on the first day what the vegan options were and the answer was “I don’t know, maybe the baguette?” and we were asked to only take as much baguette as the people who also got pastries.

The lunch was prepared by the “Restaurant associatif de Kernévent”, a canteen on the university campus. When we asked if there was vegan food, the people there said there was only a vegetarian option, so we only ate salad. Only later did we hear via word of mouth that one has to explicitly ask for a vegan meal, which was apparently prepared separately, and you had to find the right person who knew about it (I think that’s very Debian-like 😉). But even then a person once got a vegetarian option offered as vegan food.

Another problem was the missing or confusing labeling of the food. At the conference dinner there was apparently vegan food, but it was mixed in with all the other food. There were some labels, but with hundreds of hungry people around and caterers removing empty plates and dropping off plates with other stuff, everything got mixed up. In the end we ate bread soaked in olive oil, until the olive oil was taken away by the catering people literally while we were dipping the bread in it.

And when these issues were raised, some of the reactions can be summarized as “You’re holding it wrong” which was really frustrating.

The dinners at the conference hall were similar. At some point I had the impression that “vegan” and “vegetarian” were simply seen as the same thing.

Dinner menu at the conference

If the menus were written like a debian/copyright file, they would probably have looked like this:

Food: *
Diet: Vegan or Vegetarian

But the thing is that vegan and vegetarian cannot be mixed. It’s similar to incompatible licenses: once you mix vegan food with vegetarian food, it’s not vegan anymore.

Don’t get me wrong, I know it’s hard to organize food for hundreds of people. But if you don’t know what it means to provide a vegan option, just communicate that fact so people can look for alternatives in advance. During the week some of the vegan people shared food, which was really nice, and there were also a lot of non-vegan people who tried to help, organized extra food or simply listened to the hangry rants. Thanks for that!

Paris

Saturday was the last day of DebConf and it was a rainy day. On Sunday morning we took the TGV back to Paris and then stayed there for one night because the next night train back to Vienna was on Monday. Luckily the weather was better in Paris. The first thing we did was to look up a vegan burger place. In the evening we strolled along the Seine and had a couple of beers at the Jardins du Trocadéro. Monday the rain also arrived in Paris and we mostly went from one cafe to the next, but also managed to visit Notre Dame.

Conclusio

The next DebConf will be in Argentina and I think it’s likely that DebConf 27 will also not happen anywhere within trainvelling distance. But even so, I think the Mini DebConfs are more my style of happening (there is one planned in Hamburg next spring, and a couple of days ago I learned that there will be a Back to the Future musical show in Hamburg during that time). Nonetheless I had a nice time and I stumbled over some projects I might get more involved in. Thanks also to my travel buddies who put up with me 😋

26 July, 2025 05:28AM

Matthew Palmer

Object deserialization attacks using Ruby's Oj JSON parser

tl;dr: there is an attack in the wild which is triggering dangerous-but-seemingly-intended behaviour in the Oj JSON parser when used in the default and recommended manner, which can lead to everyone’s favourite kind of security problem: object deserialization bugs! If you have the oj gem anywhere in your Gemfile.lock, the quickest mitigation is to make sure you have Oj.default_options = { mode: :strict } somewhere, and that no library is overwriting that setting to something else.

Prologue

As a sensible sysadmin, all the sites I run send me a notification if any unhandled exception gets raised. Mostly, what I get sent is error-handling corner cases I missed, but now and then… things get more interesting.

In this case, it was a PG::UndefinedColumn exception, which looked something like this:

PG::UndefinedColumn: ERROR:  column "xyzzydeadbeef" does not exist

This is weird on two fronts: firstly, this application has been running for a while, and if there was a schema problem, I’d expect it to have made itself apparent long before now. And secondly, while I don’t profess to perfection in my programming, I’m usually better at naming my database columns than that.

Something is definitely hinky here, so let’s jump into the mystery mobile!

The column name is coming from outside the building!

The exception notifications I get sent include a whole lot of information about the request that caused the exception, including the request body. In this case, the request body was JSON, and looked like this:

{"name":":xyzzydeadbeef", ...}

The leading colon looks an awful lot like the syntax for a Ruby symbol, but it’s in a JSON string. Surely there’s no way a JSON parser would be turning that into a symbol, right? Right?!?

Immediately, I thought that that possibly was what was happening, because I use Sequel for my SQL database access needs, and Sequel treats symbols as database column names. It seemed like too much of a coincidence that a vaguely symbol-shaped string was being sent in, and the exact same name was showing up as a column name.

But how the flying fudgepickles was a JSON string being turned into a Ruby symbol, anyway? Enter… Oj.

Oj? I barely know… aj

A long, long time ago, the “standard” Ruby JSON library had a reputation for being slow. Thus did many competitors flourish, claiming more features and better performance. Strong amongst the contenders was oj (for “Optimized JSON”), touted as “The fastest JSON parser and object serializer”. Given the history, it’s not surprising that people who wanted the best possible performance turned to Oj, leading to it being found in a great many projects, often as a sub-dependency of a dependency of a dependency (which is how it ended up in my project).

You might have noticed in Oj’s description that, in addition to claiming “fastest”, it also describes itself as an “object serializer”. Anyone who has kept an eye on the security bug landscape will recall that “object deserialization” is a rich vein of vulnerabilities to mine. Libraries that do object deserialization, especially ones with a history that goes back to before the vulnerability class was well-understood, are likely to be trouble magnets.

And thus, it turns out to be with Oj.

By default, Oj will happily turn any string that starts with a colon into a symbol:


>> require "oj"
>> Oj.load('{"name":":xyzzydeadbeef","username":"bob","answer":42}')
=> {"name"=>:xyzzydeadbeef, "username"=>"bob", "answer"=>42}

How that gets exploited is only limited by the creativity of an attacker. Which I’ll talk about more shortly – but first, a word from my rant cortex.

Insecure By Default is a Cancer

While the object of my ire today is Oj and its fast-and-loose approach to deserialization, it is just one example of a pervasive problem in software: insecurity by default. Whether it’s a database listening on 0.0.0.0 with no password as soon as its installed, or a library whose default behaviour is to permit arbitrary code execution, it all contributes to a software ecosystem that is an appalling security nightmare.

When a user (in this case, a developer who wants to parse JSON) comes across a new piece of software, they have – by definition – no idea what they’re doing with that software. They’re going to use the defaults, and follow the most easily-available documentation, to achieve their goal. It is unrealistic to assume that a new user of a piece of software is going to do things “the right way”, unless that right way is the only way, or at least the by-far-the-easiest way.

Conversely, the developer(s) of the software is/are the domain experts. They have knowledge of the problem domain, through their exploration while building the software, and unrivalled expertise in the codebase.

Given this disparity in knowledge, it is tantamount to malpractice for the experts – the developer(s) – to off-load the responsibility for the safe and secure use of the software to the party that has the least knowledge of how to do that (the new user).

To apply this general principle to the specific case, take the “Using” section of the Oj README. The example code there calls Oj.load, with no indication that this code will, in fact, parse specially-crafted JSON documents into Ruby objects. The brand-new user of the library, no doubt being under pressure to Get Things Done, is almost certainly going to look at this “Using” example, get the apparent result they were after (a parsed JSON document), and call it a day.

It is unlikely that a brand-new user will, for instance, scroll down to the “Further Reading” section, find the second last (of ten) listed documents, “Security.md”, and carefully peruse it. If they do, they’ll find an oblique suggestion that parsing untrusted input is “never a good idea”. While that’s true, it’s also rather unhelpful, because I’d wager that by far the majority of JSON parsed in the world is “untrusted”, in one way or another, given the predominance of JSON as a format for serializing data passing over the Internet. This guidance is roughly akin to putting a label on a car’s airbags that “driving at speed can be hazardous to your health”: true, but unhelpful under the circumstances.

The solution is for default behaviours to be secure, and any deviation from that default that has the potential to degrade security must, at the very least, be clearly labelled as such. For example, the Oj.load function should be named Oj.unsafe_load, and the Oj.load function should behave as the Oj.safe_load function does presently. By naming the unsafe function as explicitly unsafe, developers (and reviewers) have at least a fighting chance of recognising they’re doing something risky. We put warning labels on just about everything in the real world; the same should be true of dangerous function calls.

OK, rant over. Back to the story.

But how is this exploitable?

So far, I’ve hopefully made it clear that Oj does some Weird Stuff with parsing certain JSON strings. It caused an unhandled exception in a web application I run, which isn’t cool, but apart from bombing me with exception notifications, what’s the harm?

For starters, let’s look at our original example: when presented with a symbol, Sequel will interpret that as a column name, rather than a string value. Thus, if our “save an update to the user” code looked like this:


# request_body has the JSON representation of the form being submitted
body = Oj.load(request_body)
DB[:users].where(id: user_id).update(name: body["name"])

In normal operation, this will issue an SQL query along the lines of UPDATE users SET name='Jaime' WHERE id=42. If the name given is “Jaime O’Dowd”, all is still good, because Sequel quotes string values, etc etc. All’s well so far.

But, imagine there is a column in the users table that normally users cannot read, perhaps admin_notes. Or perhaps an attacker has gotten temporary access to an account, and wants to dump the user’s password hash for offline cracking. So, they send an update claiming that their name is :admin_notes (or :password_hash).

In JSON, that’ll look like {"name":":admin_notes"}, and Oj.load will happily turn that into a Ruby object of {"name"=>:admin_notes}. When run through the above “update the user” code fragment, it’ll produce the SQL UPDATE users SET name=admin_notes WHERE id=42. In other words, it’ll copy the contents of the admin_notes column into the name column – which the attacker can then read out just by refreshing their profile page.
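One way to blunt this particular trick at the application layer (an illustrative sketch only, not something the post prescribes; the helper name is made up) is to check the type of each parsed value before it ever reaches the query builder:

```ruby
# Hypothetical helper: refuse anything that is not the String the handler
# expects, so a smuggled Symbol can never reach Sequel as a column reference.
def safe_string!(value)
  raise ArgumentError, "expected String, got #{value.class}" unless value.is_a?(String)
  value
end

# What Oj.load would have produced from {"name":":admin_notes"}
body = { "name" => :admin_notes }

begin
  safe_string!(body["name"])
rescue ArgumentError => e
  puts e.message
end
```

This only papers over the symptom, of course; the real fix is to stop the parser producing symbols in the first place.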

But Wait, There’s More!

That an attacker can read other fields in the same table isn’t great, but that’s barely scratching the surface.

Remember before I said that Oj does “object serialization”? That means that, in general, you can create arbitrary Ruby objects from JSON. Since objects contain code, it’s entirely possible to trigger arbitrary code execution by instantiating an appropriate Ruby object. I’m not going to go into details about how to do this, because it’s not really my area of expertise, and many others have covered it in detail. But rest assured, if an attacker can feed input of their choosing into a default call to Oj.load, they’ve been handed remote code execution on a platter.

Mitigations

As Oj’s object deserialization is intended and documented behaviour, don’t expect a future release to make any of this any safer. Instead, we need to mitigate the risks. Here are my recommended steps:

  1. Look in your Gemfile.lock (or SBOM, if that’s your thing) to see if the oj gem is anywhere in your codebase. Remember that even if you don’t use it directly, it’s popular enough that it is used in a lot of places. If you find it in your transitive dependency tree anywhere, there’s a chance you’re vulnerable, limited only by the ingenuity of attackers to feed crafted JSON into a deeply-hidden Oj.load call.
  2. If you depend on oj directly and use it in your project, consider not doing that. The json gem is acceptably fast, and JSON.parse won’t create arbitrary Ruby objects.
  3. If you really, really need to squeeze the last erg of performance out of your JSON parsing, and decide to use oj to do so, find all calls to Oj.load in your code and switch them to call Oj.safe_load.
  4. It is a really, really bad idea to ever use Oj to deserialize JSON into objects, as it lacks the safety features needed to mitigate the worst of the risks of doing so (for example, restricting which classes can be instantiated, as is provided by the permitted_classes argument to Psych.load). I’d make it a priority to move away from using Oj for that, and switch to something somewhat safer (such as the aforementioned Psych). At the very least, audit and comment heavily to minimise the risk of user-provided input sneaking into those calls somehow, and pass mode: :object as the second argument to Oj.load, to make it explicit that you are opting-in to this far more dangerous behaviour only when it’s absolutely necessary.
  5. To secure any unsafe uses of Oj.load in your dependencies, consider setting the default Oj parsing mode to :strict, by putting Oj.default_options = { mode: :strict } somewhere in your initialization code (and make sure no dependencies are setting it to something else later!). There is a small chance that this change of default might break something, if a dependency is using Oj to deliberately create Ruby objects from JSON, but the overwhelming likelihood is that Oj’s just being used to parse “ordinary” JSON, and these calls are just RCE vulnerabilities waiting to give you a bad time.
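For step 2, a quick sketch with only the standard library: `JSON.parse` leaves the colon-prefixed string from the earlier example alone, so the payload stays inert.

```ruby
require "json"

# The payload from the earlier example: a string shaped like a Ruby symbol.
payload = '{"name":":admin_notes","username":"bob","answer":42}'

parsed = JSON.parse(payload)
# The stdlib parser returns a plain String, never a Symbol, so a query
# builder like Sequel would quote it as a data value rather than treat
# it as a column reference.
puts parsed["name"].class  # String
```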

Is Your Bacon Saved?

If I’ve helped you identify and fix potential RCE vulnerabilities in your software, or even just opened your eyes to the risks of object deserialization, please help me out by buying me a refreshing beverage. I would really appreciate any support you can give. Alternately, if you’d like my help in fixing these (and many other) sorts of problems, I’m looking for work, so email me.

26 July, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

July 23, 2025

Dirk Eddelbuettel

qlcal 0.0.16 on CRAN: Regular Update

The sixteenth release of the qlcal package arrived at CRAN today, once again following the QuantLib 1.39 release this morning.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, on the package page, and of course at the CRAN package page.

This release mainly synchronizes qlcal with the QuantLib 1.39 release.

Changes in version 0.0.16 (2025-07-23)

  • Synchronized with QuantLib 1.39 released today

  • Calendar updates for Israel, minor utility functions update

  • Minor package maintenance updates

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

23 July, 2025 08:45PM

Abhijith PA

Removing spams from your local maildir

I have been using Disroot as my primary email ever since openmailbox.org stopped. I am very grateful for Disroot’s service and I occasionally donate to them.

Recently, my Disroot inbox has been flooded with spam. On an average day, around 90% of my incoming mail was spam. The situation has improved since then. I contacted the Disroot team, and they informed me that they are aware of the situation and planning to migrate from SpamAssassin to Rspamd.

I don’t know whether they have deployed Rspamd yet; even if they have, it will only process incoming mail. I was looking for a way to identify and purge the spam that had already entered my IMAP folders.

Later I found the script nh2/rspamd-move[1], which seemed to fit my need.

I made a couple of trivial changes to the script for my use case. I wasn’t sure about running it directly on my Mail/ dir, so I cloned my entire local mail directory to another directory and made it available to a podman container where the script and an rspamd instance live. I trained rspamd from the /Spam folder and later manually moved a couple more mails there. I asked friends in the #debian-in channel to share their spam folders, but that didn’t happen :P

$ podman run -it --mount \
    type=bind,source=/home/abhijith/$MAILS/,target=/container-mail-clone \
    id:latest
$ script.py

(It took some time since I have around 10000+ emails)

Wow, it was quite a successful attempt: it caught most of the spam and moved it to spam/, with a couple of false positives landing in a different folder. Now I wanted to do the same in the actual maildir, but was very skeptical. While going through the cloned folder with mutt -f, I remembered that the mails are already indexed by notmuch.

So all I needed to do was tag and delete with notmuch, and the changes would be synced back to the original maildir. Ta-da, I cleaned up my inbox.
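The sync-back works because Maildir state lives in plain files: "moving a mail to the spam folder" is just relocating the message file, which any IMAP sync tool then mirrors to the server. A minimal sketch of that mechanism (folder layout and message filename invented for illustration, using a throwaway temp directory):

```ruby
require "fileutils"
require "tmpdir"

root = Dir.mktmpdir
inbox_cur = File.join(root, "INBOX", "cur")
spam_cur  = File.join(root, "Spam", "cur")
FileUtils.mkdir_p([inbox_cur, spam_cur])

# A made-up Maildir message file, flagged seen (":2,S").
msg_name = "1234567890.M1P1.host:2,S"
File.write(File.join(inbox_cur, msg_name), "Subject: totally not spam\n\nbody\n")

# Moving the file between folders' cur/ directories is the whole operation;
# a sync tool (offlineimap, mbsync, ...) then propagates it to the server.
FileUtils.mv(File.join(inbox_cur, msg_name), spam_cur)
puts Dir.children(spam_cur)
```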

[1] - https://github.com/nh2/rspamd-move

23 July, 2025 08:26AM

July 22, 2025

Iustin Pop

Watching website scanning bots

Ever since I put up http://demo.corydalis.io and set up logcheck, I’ve been inadvertently keeping up with recent exploits in common CMS frameworks, or maybe even normal web framework issues, by seeing what 404s I get from the logs.

Now, I didn’t intend to do this per se; I just wanted to make sure I don’t have any 500s, and at one point I did actually catch a bug by seeing seemingly valid URLs, with my own pages as the referrer, leading to 404s. But besides that, it’s mainly that a couple of times per week a bot finds the site and then tries, in fast succession, something like this (real log entries, with the source IP address removed):

[21/Jul/2025:09:27:09 +0200] "GET /pms?module=logging&file_name=../../../../../../~/.aws/credentials&number_of_lines=10000 HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:11 +0200] "GET /admin/config?cmd=cat+/root/.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:11 +0200] "GET /.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:13 +0200] "GET /.env.local HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:13 +0200] "GET /.env.production HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:16 +0200] "GET /.env.dev HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:17 +0200] "GET /.env.development HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:19 +0200] "GET /.env.prod HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:19 +0200] "GET /.env.stage HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:22 +0200] "GET /.env.test HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:23 +0200] "GET /.env.example HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:25 +0200] "GET /.env.bak HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:26 +0200] "GET /.env.old HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:28 +0200] "GET /.envs/.production/.django HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:28 +0200] "GET /blog.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:31 +0200] "GET /wp-content/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:32 +0200] "GET /application/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:34 +0200] "GET /app/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:35 +0200] "GET /apps/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:37 +0200] "GET /config/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:38 +0200] "GET /config/config.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:40 +0200] "GET /config/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:41 +0200] "GET /api/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:43 +0200] "GET /vendor/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:44 +0200] "GET /backend/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:46 +0200] "GET /server/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:46 +0200] "GET /home/user/.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:49 +0200] "GET /aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:50 +0200] "GET /.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:52 +0200] "GET /.aws/config HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:52 +0200] "GET /config/aws.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:55 +0200] "GET /config/aws.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:55 +0200] "GET /.env.production HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:58 +0200] "GET /config.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:59 +0200] "GET /config/config.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:01 +0200] "GET /config/settings.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:02 +0200] "GET /config/secrets.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:04 +0200] "GET /config.yaml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:04 +0200] "GET /config.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:07 +0200] "GET /config.py HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:08 +0200] "GET /secrets.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:10 +0200] "GET /secrets.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:11 +0200] "GET /credentials.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:13 +0200] "GET /.git-credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:14 +0200] "GET /.git/config HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:16 +0200] "GET /.gitignore HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:18 +0200] "GET /.gitlab-ci.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:19 +0200] "GET /.github/workflows HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:22 +0200] "GET /.idea/workspace.xml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:22 +0200] "GET /.vscode/settings.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:25 +0200] "GET /docker-compose.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:25 +0200] "GET /docker-compose.override.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:28 +0200] "GET /docker-compose.prod.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:28 +0200] "GET /docker-compose.dev.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:32 +0200] "GET /phpinfo HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:32 +0200] "GET /_profiler/phpinfo HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:34 +0200] "GET /phpinfo.php HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:34 +0200] "GET /info.php HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:37 +0200] "GET /storage/logs/laravel.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:37 +0200] "GET /storage/logs/error.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:40 +0200] "GET /logs/debug.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:40 +0200] "GET /logs/app.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:49 +0200] "GET /debug.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:51 +0200] "GET /error.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:53 +0200] "GET /.DS_Store HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:55 +0200] "GET /backup.zip HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:58 +0200] "GET /.backup HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:00 +0200] "GET /db.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:03 +0200] "GET /dump.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:06 +0200] "GET /database.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:09 +0200] "GET /backup.tar.gz HTTP/1.1" 404 - "" "Mozilla/5.0"

Now, this example is actually trying to catch a bit more, but many times a bot is focused on one specific thing, or two. Here we have Docker, macOS .DS_Store (I’m not sure how that’s useful; to find more filenames?), VSCode settings, various secrets, GitHub workflows, log output, database dumps, AWS credentials, and (I guess from the wp filename) still WordPress settings. The first few years were full of WordPress scanners; now it seems to have quieted down, and I haven’t seen a bot scanning 200 potential WP filenames in ages. And this bot even bothers to put in “Mozilla/5.0” as browser identification 😅.

Side-note: I don’t think the filename path in the first log entry, i.e. ../../../../../../~/, ever properly resolves to the home directory of any user. So I’m not sure that particular scanner ever works, but who knows? Maybe some framework does bad tilde expansion, but at least bash will not expand ~ inside a path, it seems: the path is passed as-is to an invoked command (strace confirms it).
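The tilde behaviour is easy to check (bash shown; other POSIX shells behave the same for a mid-path tilde):

```shell
# Tilde expansion only happens at the start of a word, so a ~ buried
# inside a relative path is passed through to the command literally.
printf '%s\n' ~/x        # expands to "$HOME/x"
printf '%s\n' ../../~/x  # prints "../../~/x" unchanged
```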

What’s surprising here is that these are usually plain dumb scanners: everything from the same IP address, no concern for throttling, no attempt to hide, just two minutes of brute-forcing a random list of known “treasures”, then moving on. For this to be worthwhile, it means there are still victims to be found with this method, sadly. Well, sometimes I get a single, one-off "GET /wp-login.php HTTP/1.1", which is strange enough that it might not even be a bot, who knows. But in general, periods of this type of activity come and go, probably aligned with new CVEs.

And another surprising thing is that for this type of scanning to work (and I’ve seen many variants over the years), the website framework/configuration must allow arbitrary file downloads. Corydalis itself is written in Haskell, using Yesod, and it has a hardcoded (built at compile time) list of static resources it will serve. I haven’t made the switch to fully embedding them in the binary, but at that point it won’t need to read from the filesystem at all. Right now it will serve a few CSS and JS files, plus fonts, but that’s it: no arbitrary filesystem traversal. Strange that some frameworks allow it.

This is not productively spent time, but it is fun, especially seeing how this changes over time. And probably the most use anyone gets out of http://demo.corydalis.io 😄.

22 July, 2025 10:47PM