Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

May 23, 2025

Yves-Alexis Perez

strongSwan autopkgtests

For a while, the strongSwan Debian package has had an autopkgtest. The initial version was proposed by Christian Ehrhardt in 2016 (presumably especially for downstream use in Ubuntu) and updated in 2019, but since then not much has happened, at least in Debian.

With the metapackage dependencies update in 6.0.0-1 I had to tune the test dependencies a bit so they wouldn't totally fail, and I noticed the amd64 tests had been failing basically since the beginning (the other architectures would pass, but only because the tests wouldn't actually run at all, since they rely on the isolation-machine restriction which is not available there).

So I tried to fix them, and it actually took me quite a while because I wasn't able to run the tests locally easily and the salsa CI doesn't have the isolation-machine restriction either. And some tests would pass while others wouldn't.

With some nice help from #debci, and using my newly received X13G5, I set up an autopkgtest VM and started experimenting. The 6.0.0-4 autopkgtests were failing 19 times out of 20 runs, but passing once. So it looked like a race condition, which we narrowed down to the fact that starting the daemons (using invoke-rc.d, which calls systemctl) is asynchronous. So depending on the load and maybe the machine, the tests would usually fail but sometimes pass.

There's no easy way to make the call synchronous, so as a stopgap I added a small sleep 1 command and it fixed things for now. Tada! strongSwan now has passing autopkgtests in unstable (and testing) on amd64. It's not entirely satisfying, but still.
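
For illustration, here is a minimal sketch of what such a stopgap looks like in a test script. The service name and surrounding steps are assumptions for the example, not the actual contents of the package's test suite:

# invoke-rc.d hands off to systemctl, which returns before the daemon
# is actually ready to accept connections.
invoke-rc.d strongswan-starter start

# Stopgap: give the asynchronous start a moment to settle before the
# test starts talking to the daemon.
sleep 1

# ... the actual test steps (e.g. bringing up a test tunnel) follow here.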

Next steps would be to add tests for the new daemon using the swanctl interface, but that'll be for Forky (Trixie+1).

23 May, 2025 02:49PM by Yves-Alexis (corsac@debian.org)

Sven Hoexter

pflogsumm 1.1.6

Mainly relevant for the few who still run their own mail server and use Postfix + pflogsumm.

A few weeks back Jim contacted me to say he's going to pick up work on pflogsumm again, and as a first step wanted to release 1.1.6 to incorporate patches from the Debian package. That version is now released. Since we're already in the Trixie freeze the package is in experimental, but as usual it should be fine to install manually.

Heads Up - Move to /usr/bin

I took that as an opportunity to move pflogsumm from /usr/sbin to /usr/bin! There was never really a good reason to have it in sbin: it's neither a system binary, nor statically linked (like in the very old days), nor something that only makes sense to be used as root. Some out there likely have custom scripts which call it by its full /usr/sbin path rather than relying on an adjusted PATH variable; those scripts will require an update.
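
As a hedged example of the kind of adjustment that might be needed (the log path and options are only illustrative):

# Before: a cron job or report script calling the old location
# /usr/sbin/pflogsumm -d yesterday /var/log/mail.log

# After the move, call the new location (or simply rely on PATH):
/usr/bin/pflogsumm -d yesterday /var/log/mail.log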

23 May, 2025 11:52AM

Gunnar Wolf

No further discussion -- I am staying with a Thinkpad keyboard.

I have been a very happy user of my two SK-8845 keyboards (one at my office, one at home) since I bought them, in 2018 and 2021 respectively. What are they, mind you?

SK-8845 keyboard

The beautiful keyboard every Thinkpad owner knows and loves. And although I no longer use my X230 laptop that was my workhorse for several years, my fingers are spoiled.

So, both shift keys on my home keyboard have been getting flaky, and I am fairly sure it’s a failure in the controller, as it does not feel like a physical one. It’s time to revisit that seven-year-old post where I found the SK-8845.

This time, I decided to try my luck with something different. As Emacs users, we all know we ought to be happy with more and more keys. In fact, I suppose many international people are now familiar with El Eternauta, true? We Emacs users would be the natural ambassadors to deal with the hand species:

Emacs users from outer space!

So… it kind-of sort-of made sense, when I saw a Toshiba-IBM keyboard being sold for quite cheap (MX$400, just over US$20), to try my luck with it:

A truly POS keyboard

This is quite an odd piece of hardware, built in 2013 according to its label. At first I was unsure whether to buy it because of the weird interface it had, but the vendor replied they would ship a (very long!) USB cable with it, so…

A weird port inside the keyboard

And a matching weird connector

As expected, connecting it to Linux led to a swift, errorless recognition:

Nothing too odd here

Within minutes of receiving the hardware, I had it hooked up and started looking at the events it generated. However… the romance soon started to wane. Some of the reasons:

  • We cannot forget it is a Piece of Shit Point Of Sale keyboard. It is not intended to be a creative interface. So, the keys are ~75% the size of regular keys. My fingers have a hard time getting used to it, and I keep hitting the wrong keys. I know “I am holding it wrong” and my muscle memory can be retrained (and I was very happy when I had the tiny 9" Acer Aspire One)… but still, it is not pleasant.
  • I exclusively use keyboards with a trackpad (as those found in most laptops) because I found that constantly moving my hand to the mouse and back gave me backache. Within an hour of typing on this keyboard, the old backache I was happy never to have again came back to me.
  • The pointer device has a left and a right button, but neither middle nor scroll buttons. I could generate middle clicks by enabling middle emulation (a sketch of how to do that follows this list), but the buttons are far apart, so it requires clicking with both thumbs, which is inelegant, to say the least.

    I remapped some of the spare keys to be mouse buttons 1–5, and it worked for middle click, but not for scroll events. Maybe I could tweak it a bit more… but I didn’t in the end.
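
For reference, here is a hedged sketch of enabling libinput's middle-button emulation under X; the device name is made up, and whether this particular keyboard exposes the property is an assumption:

# Find the pointer device's name or id
xinput list

# Enable middle-button emulation: pressing both buttons produces a middle click
xinput set-prop "Toshiba POS Keyboard Mouse" "libinput Middle Emulation Enabled" 1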

Anyway… I’m returning it. I found an SK-8845 for sale in China for just MX$1814 (~US$90), and jumped at it… They are getting scarce! Nowadays it’s getting more common (and cheaper) to find the newer style Thinkpad keyboards, but without a trackpad. I don’t think I should stockpile keyboards, but… no, I’m not doing that 😉

Anyway, so I’m sticking to a Thinkpad keyboard, third in a row.

23 May, 2025 05:39AM

May 22, 2025

Dirk Eddelbuettel

RcppArmadillo 14.4.3-1 on CRAN: Small Upstream Bug Fix

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1251 other packages on CRAN, downloaded 39.8 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 628 times according to Google Scholar.

Conrad released a minor bugfix version yesterday which addresses corner cases with non-finite values in sparse matrices. And despite conference traveling, I managed to wrap this up and ship it to CRAN where it appeared yesterday. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.4.3-1 (2025-05-21)

  • Upgraded to Armadillo release 14.4.3 (Filtered Espresso)

    • Fix for several corner cases involving handling of non-finite elements by sparse matrices

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

22 May, 2025 01:19PM

May 21, 2025

Russell Coker

Digital Sovereignty and Email

Running Your Own Email Server

I run my own mail server. I have run it since about 1995, initially on a 28k8 modem connection but the connection improved as technology became cheaper and now I’m running it on a VM on a Hetzner server which is also running domains for some small businesses. I make a small amount of money running mail services for those companies but generally not enough to make it profitable. From a strictly financial basis I might be better off just using a big service, but I like having control over my own email. If email doesn’t arrive I can read the logs to find out why.

I repeatedly have issues with big services not accepting mail. The most recent is the MS services claiming that my IP has a bad ratio of good mail to spam and blocking me, so I had to tunnel that mail through a different IP address. It seems that the way things are going is that if you run a small server, companies like MS can block you even though your amount of spam is low, but if you run a large-scale service that is horrible for sending spam then you don’t get blocked.

Most users just use one of the major email services (Gmail or Microsoft) and find that no-one blocks them, because those providers are too big to block, and things mostly work. Until of course the company decides to cancel their account.

The Latest News

The latest news is that MS is shutting down services for the International Criminal Court after a panel of ICC judges issued arrest warrants against Israeli Prime Minister Benjamin Netanyahu [1]. This is now making politicians realise the issues with email accounts hosted outside their jurisdiction.

What we need is for each independent jurisdiction to have its own email infrastructure; that means controlling the DNS servers for their domains, running commercial and government mail services on those domains, and running the servers for those services on hardware located in the jurisdiction, operated by people based in that jurisdiction who are citizens of it. I say independent jurisdiction because there are groups like the EU which have sufficient harmony of laws to not require different services. With the current EU arrangements I don’t think it’s possible for the German government to block French people from accessing email or vice versa.

While Australia and New Zealand have a long history of cooperation there’s still the possibility of a lying asshole like Scott Morrison trying something on so New Zealanders shouldn’t feel safe using services run in Australia. Note that Scott Morrison misled his own parliamentary colleagues about what he was doing and got himself assigned as a secret minister [2] demonstrating that even conservatives can’t trust someone like him. With the ongoing human rights abuses by the Morrison government it’s easy to imagine New Zealand based organisations that protect human rights being treated by the Australian government in the way that the ICC was treated by the US government.

The Problem with Partial Solutions

Now it would be very easy for the ICC to host their own mail servers and they probably will do just that in the near future. I’m sure that there are many companies offering to set them up accounts in a hurry to deal with this (probably including some of the Dutch companies I’ve worked for). Let’s imagine for the sake of discussion that the ICC has their own private server, the US government could compel Google and MS to block the IP addresses of that server and then at least 1/3 of the EU population won’t get mail from them. If the ICC used email addresses hosted on someone else’s server then Google and MS could be compelled to block the addresses in question for the same result. The ICC could have changing email addresses to get around block lists and there could be a game of cat and mouse between the ICC and the US government but that would just be annoying for everyone.

The EU needs to have services hosted and run in its jurisdiction that are used by the vast majority of the people there. The more people who are using services outside the control of hostile governments, the smaller the impact of bad IT policies by those governments.

One possible model to consider is the Postbank model. Postbank is a bank run in the Netherlands from post offices which provides services to people deemed unprofitable for the big banks. If the post offices were associated with a mail service, it could be government subsidised, providing free service for citizens and using government ID if the user forgets their password. You could also have it provide a cheap service for non-citizen residents.

Other Problems

What will the US government do next? Will they demand that Apple and Google do a remote wipe on all phones run by ICC employees? Are they currently tracking all ICC employees via Android and iPhone services?

Huawei’s decision to develop their own phone OS was a reasonable one, but there’s no need to go that far. Other governments could set up their own equivalent to Google Play services for Android and have their own localised Android build. Even a small country like Australia could get this going for services like calendaring etc. But the app store needs a bigger market. There’s no reason why Android has to tie the app store to the services for calendaring etc. So you could have a per-country system for calendaring and a per-region system for selling apps.

The invasion of Amazon services such as Alexa is also a major problem for digital sovereignty. We need government controls on this sort of thing, maybe high tariffs on the import of all hardware that can only work with a single cloud service. Have 100+% tariffs on every phone, home automation system, or networked device that is either tied to a single cloud service or which can’t work in a usable manner with other cloud services.

21 May, 2025 08:11AM by etbe

Bits from Debian

EDF Platinum Sponsor of DebConf25

edf-logo

We are pleased to announce that EDF has committed to sponsor DebConf25 as a Platinum Sponsor.

EDF is a leading global utility company focused on low-carbon power generation. The group uses advanced engineering and scientific computing tools to drive innovation and efficiency in its operations, especially in nuclear power plant design and safety assessment.

Since 2003, the EDF Group has been using Debian as its main scientific computing environment. Debian's focus on stability and reproducibility ensures that EDF's calculations and simulations produce consistent and accurate results.

With this commitment as Platinum Sponsor, EDF is contributing to the annual Debian Developers' Conference, directly supporting the progress of Debian and Free Software. EDF contributes to strengthening the worldwide community that collaborates on Debian projects year-round.

Thank you very much, EDF, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from July 14th to 19th 2025 in Brest, France, and will be preceded by DebCamp, from July 7th to 13th 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

21 May, 2025 12:50AM by Sahil Dhiman

May 20, 2025

Arturo Borrero González

Wikimedia Cloud VPS: IPv6 support

Cape Town (ZA), Sea Point, Nachtansicht

Dietmar Rabich, Cape Town (ZA), Sea Point, Nachtansicht — 2024 — 1867-70 – 2, CC BY-SA 4.0

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

Wikimedia Cloud VPS is a service offered by the Wikimedia Foundation, built using OpenStack and managed by the Wikimedia Cloud Services team. It provides cloud computing resources for projects related to the Wikimedia movement, including virtual machines, databases, storage, Kubernetes, and DNS.

A few weeks ago, in April 2025, we were finally able to introduce IPv6 to the cloud virtual network, enhancing the platform’s scalability, security, and future-readiness. This is a major milestone, many years in the making, and serves as an excellent point to take a moment to reflect on the road that got us here. There were definitely a number of challenges that needed to be addressed before we could get into IPv6. This post covers the journey to this implementation.

The Wikimedia Foundation was an early adopter of the OpenStack technology, and the original OpenStack deployment in the organization dates back to 2011. At that time, IPv6 support was still nascent and had limited implementation across various OpenStack components. In 2012, the Wikimedia cloud users formally requested IPv6 support.

When Cloud VPS was originally deployed, we had set up the network following some of the upstream-recommended patterns:

  • nova-networks as the engine in charge of the software-defined virtual network
  • using a flat network topology – all virtual machines would share the same network
  • using a physical VLAN in the datacenter
  • using Linux bridges to make this physical datacenter VLAN available to virtual machines
  • using a single virtual router as the edge network gateway, also executing a global egress NAT – barring some exceptions, using what was called the “dmz_cidr” mechanism

In order for us to be able to implement IPv6 in a way that aligned with our architectural goals and operational requirements, pretty much all the elements in this list would need to change. First of all, we needed to migrate from nova-networks into Neutron, a migration effort that started in 2017. Neutron was the more modern component to implement software-defined networks in OpenStack. To facilitate this transition, we made the strategic decision to backport certain functionalities from nova-networks into Neutron, specifically the “dmz_cidr” mechanism and some egress NAT capabilities.

Once in Neutron, we started to think about IPv6. In 2018 there was an initial attempt to decide on the network CIDR allocations that Wikimedia Cloud Services would have. This initiative encountered unforeseen challenges and was subsequently put on hold. We focused on removing the previously backported nova-networks patches from Neutron.

Between 2020 and 2021, we initiated another significant network refresh. We were able to introduce the cloudgw project, as part of a larger effort to rework the Cloud VPS edge network. The new edge routers allowed us to drop all the custom backported patches we had in Neutron from the nova-networks era, unblocking further progress. It is worth mentioning that the cloudgw router uses nftables as its firewalling and NAT engine.

A pivotal decision in 2022 was to expose the OpenStack APIs to the internet, which crucially enabled infrastructure management via OpenTofu. This was key in the IPv6 rollout as will be explained later. Before this, management was limited to Horizon – the OpenStack graphical interface – or the command-line interface accessible only from internal control servers.

Later, in 2023, following the OpenStack project’s announcement of the deprecation of the neutron-linuxbridge-agent, we began to seriously consider migrating to the neutron-openvswitch-agent. This transition would, in turn, simplify the enablement of “tenant networks” – a feature allowing each OpenStack project to define its own isolated network, rather than all virtual machines sharing a single flat network.

Once we replaced neutron-linuxbridge-agent with neutron-openvswitch-agent, we were ready to migrate virtual machines to VXLAN. Demonstrating perseverance, we decided to execute the VXLAN migration in conjunction with the IPv6 rollout.

We prepared and tested several things, including the rework of the edge routing to be based on BGP/OSPF instead of static routing. In 2024 we were ready for the initial attempt to deploy IPv6, which failed for unknown reasons. There was a full network outage and we immediately reverted the changes. This quick rollback was feasible due to our adoption of OpenTofu: deploying IPv6 had been reduced to a single code change within our repository.

We started an investigation, corrected a few issues, and increased our network functional testing coverage before trying again. One of the problems we discovered was that Neutron would enable the “enable_snat” configuration flag for our main router when adding the new external IPv6 address.
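
As a hedged illustration (the router and network names here are placeholders, not our actual configuration), the flag can be inspected and turned off again with the OpenStack client along these lines:

# Show the router's external gateway information, including enable_snat
openstack router show cloud-main-router -c external_gateway_info

# Re-set the external gateway with SNAT explicitly disabled
openstack router set --external-gateway wan-transport --disable-snat cloud-main-router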

Finally, in April 2025, after many years in the making, IPv6 was successfully deployed.

Compared to the network from 2011, we would have:

  • Neutron as the engine in charge of the software-defined virtual network
  • Ready to use tenant-networks
  • Using a VXLAN-based overlay network
  • Using neutron-openvswitch-agent to provide networking to virtual machines
  • A modern and robust edge network setup

Over time, the WMCS team has skillfully navigated numerous challenges to ensure our service offerings consistently meet high standards of quality and operational efficiency. Often engaging in multi-year planning strategies, we have enabled ourselves to set and achieve significant milestones.

The successful IPv6 deployment stands as further testament to the team’s dedication and hard work over the years. I believe we can confidently say that the 2025 Cloud VPS represents its most advanced and capable iteration to date.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

20 May, 2025 01:00PM

May 17, 2025

Andrew Cater

Debian 12.11 - testing completed, images being signed and we'll be back for the next point release on ???

 All finished and wrapping up. The bug I thought was fixed has been identified on two distinct sets of hardware. There are workarounds: the most sensible is *not* to use i386 without a modeset parameter but to just use amd64 instead. amd64 works on the identical problematic hardware in question - just use 64 bit.

17 May, 2025 06:00PM by Andrew Cater (noreply@blogger.com)

Debian 12.11 testing - and we're nearly there

As of 15:29 UTC, we've almost finished the testing we're going to do. It's all been good - we've found that at least one of the major bug reports from 12.10 is not reproducible now. All good - and many thanks to all testers: Sledge, rattusrattus, egw, smcv (and me).

17 May, 2025 03:31PM by Andrew Cater (noreply@blogger.com)

Russell Coker

DDR4 RAM Size

I’ve been looking at computer hardware on AliExpress a lot recently and I saw an advert for a motherboard which can take 256G DDR4 RDIMMs (presumably LRDIMMs). Most web pages about DDR4 state that 128G is the largest possible. The Wikipedia page for DDR4 doesn’t state that 128G is the maximum but does have 128G as the largest size mentioned on the page.

Recently I’ve been buying 32G DDR4 RDIMMs for between $25 and $30 each. A friend can get me 64G modules for about $70 at the lowest price. If I hadn’t already bought a heap of 32G modules I’d buy some 64G modules right now at that price as it’s worth paying 40% extra to allow better options for future expansion.

Apparently the going rate for 128G modules is $300 each, which is within the range for a hobbyist who has a real need for RAM. 256G modules are around $1200 each, which is starting to get a bit expensive. But at that price I could buy 2TB of RAM for $9600 and the computer containing it still wouldn’t be the most expensive computer I’ve bought – the laptop that cost $5800 in 1998 takes that honour when inflation is taken into account.

DDR5 RDIMMs are currently around $10/GB compared to DDR4 for $1/GB for 32G modules and DDR3 for $0.50/GB. DDR6 is supposed to be released late this year or early next year so hopefully enterprise grade systems with DDR5 RAM and DDR5 RDIMMs will be getting cheaper on ebay by the end of next year.

17 May, 2025 03:29PM by etbe

Andrew Cater

Debian 12.11 images testing - progress

We're now well under way: we've been joined by Simon McVittie (smcv) and we're almost through testing most of the standard images. Live image testing is being worked through. All good so far, without identifying problems other than mistyping :)

17 May, 2025 01:10PM by Andrew Cater (noreply@blogger.com)

John Goerzen

How to Use SSH with FIDO2/U2F Security Keys

For many years now, I’ve been using an old YubiKey along with the free tier of Duo Security to add a second factor to my SSH logins. This is clunky, and has a number of drawbacks (dependency on a cloud service and Internet access among them).

I decided it was time to upgrade, so I recently bought a couple of YubiKey 5 series security keys. These support FIDO2/U2F, which make it so much easier to integrate with ssh.

But in researching how to do this, I found a lot of pages online with poor instructions. Either they didn’t explain what was going on very well, or suggested what I came to learn were insecure practices, or — most often — both.

It turns out this whole process is quite easy. But I wanted to understand how it worked.

So, I figured it out, set it up myself, and then put up a new, comprehensive page on my website: https://www.complete.org/easily-using-ssh-with-fido2-u2f-hardware-security-keys/. I hope it helps!
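
The core of it is just a few commands; this is only a minimal sketch, and the page linked above covers the details and caveats:

# Generate a key pair whose private part is bound to the FIDO2 token;
# the token must be plugged in and touched when prompted.
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

# Install the public half on the server as usual
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@server

# Logging in now requires both the key file and a touch on the token
ssh -i ~/.ssh/id_ed25519_sk user@server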

17 May, 2025 12:53PM by John Goerzen

Andrew Cater

20250517 - Debian point release - Bookworm 12.11 today

In Cottenham with Andy and the usual suspects. The point release update files are already on the servers - anyone can do an "apt-get update ; apt-get dist-upgrade" and update any running machine. This machine has just been upgraded and "just worked".

Here to do release testing for the images that we will end up publishing later in the day.

Expecting one more of us to turn up a bit later. Team will be working on IRC on #debian-cd

17 May, 2025 11:51AM by Andrew Cater (noreply@blogger.com)

May 16, 2025

Michael Prokop

Grml 2025.05 – codename Nudlaug

Debian hard freeze on 2025-05-15? We bring you a new Grml release on top of that! 2025.05 🚀 – codename Nudlaug.

There’s plenty of new stuff; check out our official release announcement for all the details. But I’d like to highlight one feature that I particularly like: SSH service announcement with Avahi. The grml-full flavor ships Avahi, and when you enable SSH, it automatically announces the SSH service on your local network. So when e.g. booting Grml with boot option `ssh=debian`, you should be able to log in to your Grml live system with `ssh grml@grml.local` and password 'debian':

% insecssh grml@grml.local
Warning: Permanently added 'grml.local' (ED25519) to the list of known hosts.
grml@grml.local's password: 
Linux grml 6.12.27-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.27-1 (2025-05-06) x86_64
Grml - Linux for geeks

grml@grml ~ %

Hint: grml-zshrc provides that useful shell alias `insecssh`, which is aliased to `ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"`. Using those options, you aren’t storing the SSH host key of the (temporary) Grml live system (permanently) in your UserKnownHostsFile.
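
Spelled out (modulo quoting details in grml-zshrc itself), the alias is simply:

alias insecssh='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'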

BTW, you can run `avahi-browse -d local _ssh._tcp --resolve -t` to discover the SSH services on your local network. 🤓

Happy Grml-ing!

16 May, 2025 04:42PM by mika

May 15, 2025

Yves-Alexis Perez

New laptop: Lenovo Thinkpad X13 Gen 5

After more than ten years on my trusted X250, and with a lot of financial help from Debian (for which I'm really thankful, more on that later), I finally jumped on a new ThinkPad, an X13 Gen 5.

The migration path was really easy: I'm doing daily backups with borg of the whole filesystem on an encrypted USB drive, so I just had to boot a live USB key on the new laptop, plug in the USB drive, create the partitioning (encryption, LVM etc.) and then run borg extract. Since I'm using LABEL in the various fstab entries I didn't have much to change.
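
Roughly, the restore boils down to something like this (repository location, archive name and mount point are placeholders):

# From the live system, with the new root filesystem mounted at /mnt
cd /mnt
borg extract --progress /media/backup/borg-repo::laptop-2025-05-14

# With LABEL= entries in fstab, little or nothing needs adjusting before
# chrooting in to reinstall the bootloader.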

I actually had a small hiccup because my daily backup scripts used ProtectKernelModules, which besides preventing loading modules into the running kernel also prevents access to /usr/lib/modules. So when restoring I didn't have any modules for the installed kernels. No big deal, I reinstalled the kernel package from the chroot and it worked just fine.
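
The effect is easy to reproduce with systemd-run (a quick, hedged demo; run as root):

# Without the sandboxing option the module tree is visible
systemd-run --wait --pipe ls /usr/lib/modules

# With ProtectKernelModules=yes, /usr/lib/modules is made inaccessible,
# so a backup job running with this option silently misses the modules.
systemd-run --wait --pipe -p ProtectKernelModules=yes ls /usr/lib/modules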

All in all it was pretty smooth.

I've started a similar page to the X250 one for the X13G5, but honestly I don't think I'll have to document a lot of stuff because everything basically works out of the box. It's not really a surprise because we've come a long way since 2015, and Linux kernels are really tested on a lot of hardware these days, including laptops, and Intel laptops are the most standard stuff you can find. I guess it's still rocky for ARM64 laptops (and especially Apple hardware), but the point was less to do porting work for Debian and rather to be more efficient with the current stuff I maintain (and sometimes struggle with).

As said above, the laptop has been funded by Debian and I really thank the DPL and the Debian France treasurer for authorizing it and being really fast with the reimbursement.

I had already posted a long time ago about hardware funding for Debian developers. It took me quite a while but I finally managed to ask for help because I couldn't afford the hardware at this point and it was becoming problematic. This is not something which should be done lightly (Debian wouldn't have the funds), but it is definitely something which should be done if needed. Don't hesitate to ask your fellow Debian developers for advice on this.

15 May, 2025 08:19PM by Yves-Alexis (corsac@debian.org)

May 14, 2025

Jonathan McDowell

Local Voice Assistant Step 3: A Detour into Tensorflow

To build our local voice satellite on a Debian system rather than using the ATOM Echo device we need something that can handle the wake word component; the piece that means we only send audio to the Home Assistant server for processing by whisper.cpp when we’ve detected someone is trying to talk to us.

openWakeWord seems to be one of the better ways to do this, and is well supported. However. It relies on TensorFlow Lite (now LiteRT) which is a complicated mess of machine learning code. tflite-runtime is available from PyPI, but that’s prebuilt and we’re trying to avoid that.

Despite it looking, on initial impressions, quite complicated to deal with building TensorFlow - Bazel is an immediate warning sign - it turns out to be incredibly simple to build your own .deb:

$ wget -O tensorflow-v2.15.1.tar.gz https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.15.1.tar.gz
…
$ tar -axf tensorflow-v2.15.1.tar.gz
$ cd tensorflow-2.15.1/
$ BUILD_NUM_JOBS=$(nproc) BUILD_DEB=y tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
…
$ find . -name *.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime-dbgsym_2.15.1-1_amd64.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb

This is hiding an awful lot of complexity, however. In particular the number of 3rd party projects that are being downloaded in the background (and compiled, to be fair, rather than using binary artefacts).

We can build the main C++ wrapper .so directly with cmake, allowing us to investigate a bit more:

mkdir tf-build
cd tf-build/
cmake \
    -DCMAKE_C_FLAGS="-I/usr/include/python3.11" \
    -DCMAKE_CXX_FLAGS="-I/usr/include/python3.11" \
    ../tensorflow-2.15.1/tensorflow/lite/
cmake --build . -t _pywrap_tensorflow_interpreter_wrapper
…
[100%] Built target _pywrap_tensorflow_interpreter_wrapper
$ ldd _pywrap_tensorflow_interpreter_wrapper.so
    linux-vdso.so.1 (0x00007ffec9588000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f22d00d0000)
    libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f22cf600000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f22d00b0000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22cf81f000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f22d01d1000)

Looking at the output we can see that pthreadpool, FXdiv, FP16 + PSimd are all downloaded, and seem to have ways to point to a local copy. That seems positive.

However, there are even more hidden dependencies, which we can see if we look in the _deps/ subdirectory of the build tree. These don’t appear to be as easy to override, and not all of them have packages already in Debian.

First, the ones that seem to be available: abseil-cpp, cpuinfo, eigen, farmhash, flatbuffers, gemmlowp, ruy + xnnpack

(lots of credit to the Debian Deep Learning Team for these, and in particular Mo Zhou)

Dependencies I couldn’t see existing packages for are: OouraFFT, ml_dtypes & neon2sse.

At this point I just used the package I built with the initial steps above. I live in hope someone will eventually package this properly for Debian, or that I’ll find the time to try and help out, but that’s not going to be today.

I wish upstream developers made it easier to use system copies of their library dependencies. I wish library developers made it easier to build and install system copies of their work. pkgconf is not new tech these days (pkg-config appears to date back to 2000), and has decent support in CMake. I get that there can be issues with incompatibilities even in minor releases, or awkwardness in doing builds of multiple connected projects, but at least give me the option to do so.
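
For the dependencies that do ship pkg-config metadata, pointing a build at the system copy can be as simple as this (assuming the corresponding -dev package installs a .pc file, which not all of the above do):

# Query compile and link flags for a system-installed dependency
pkgconf --cflags --libs flatbuffers

# CMake can consume the same metadata via its FindPkgConfig module, e.g.
#   find_package(PkgConfig REQUIRED)
#   pkg_check_modules(FLATBUFFERS REQUIRED flatbuffers)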

14 May, 2025 05:39PM

Sven Hoexter

Disable Firefox DRM Plugin Infobar

... or how I spent my lunch break today.

An increasing number of news outlets (hello heise.de) have started to embed bullshit which requires DRM playback. Since I keep that disabled I now get an infobar that tells me that I need to enable it for this page. Pretty useless and a pain in the back because it takes up screen space. Here's the quick way to get rid of it:

  1. Go to about:config and turn on toolkit.legacyUserProfileCustomizations.stylesheets.
  2. Go to your Firefox profile folder (e.g. ~/.mozilla/firefox/<random-value>.default/) and mkdir chrome && touch chrome/userChrome.css.
  3. Add the following to your userChrome.css file:

     .infobar[value="drmContentDisabled"] {
       display: none !important;
     }
    
  4. Restart Firefox and read news again with full screen space.

14 May, 2025 12:06PM

Jonathan Dowland

Orbital

Orbital at NX, Newcastle in 2023

I'm on a bit of an Orbital kick at the moment. Last year they re-issued their 1991 debut album with 43 extra tracks. Later this month they're doing the same for their 1993 sophomore album.

I thought I'd try to narrow down some tracks to recommend. I seem to have settled on roughly 5 in previous posts (for Underworld, The Cure, Coil and Gazelle Twin). This time I've done 6 (I borrowed one from Underworld).

As always it's a hard choice. I've tried to select some tracks I really enjoy that don't often come up on best-of compilation albums. For a more conventional choice of best-of tracks, I recommend the recent-ish 30 something "compilation" (of sorts, previously written about).


  1. The Naked and the Dead (1992)

    The Naked and the Dead by Orbital

    From an early EP Radiccio, which is being re-issued this month. Digital versions of the re-issue will feature a new recording, "Deepest", featuring Tilda Swinton. Sadly this isn't making it onto the pressed version. She performed with them live at Glastonbury 2024. That entire performance was a real pick-me-up during my convalescence, and is recommended.

    Anyway I've now written more about a song I haven't recommended than the one I did…

  2. Remind (1993)

    Remind by Orbital

    From the Brown Album, I first heard this as the Encore from their "final show", for John Peel, when they split up in 2004. "Remind" wasn't broadcast, but an audience recording was circulated on fan site Loopz. Remarkably, 21 years on, it's still there.

    In writing this I discovered that it's a re-working of a remix Orbital did for Meat Beat Manifesto: MindStream (Mind The Bend The Mind)

  3. You Lot (2004)

    From the unfairly-maligned "final" Blue album. Featuring a sample of pre-Doctor Who Christopher Eccleston, from another Russell T Davies production, Second Coming.

  4. Beached (2000)

    Beached (Long version) by Orbital, Angelo Badalamenti

    Co-written by Angelo Badalamenti, it's built around a sample of Badalamenti's score for the movie "The Beach". Orbital's re-work adds some grit to the orchestral instrumentation and opens with a monologue, delivered by Leonardo Di Caprio, sampled from the movie.

  5. Spare Parts Express (1999)

    Spare Parts Express by Orbital

    Critics had started to be quite unfair to Orbital by this point. The band themselves said that they'd run out of ideas (pointing at album closer "Style", built around a Stylophone melody, as proof). Their malaise continued right up to the Blue Album, at which point they split up; ostensibly for good, before regrouping 8 years later.

    Spare Parts Express is a hatchet job of various bits that they didn't develop into full songs on their own. Despite this I think it works. I love long-form electronica, and this clocks in at 10:07. My favourite segment (06:37) is adjacent to a reference (05:05) to John Baker's theme for the BBC children's program Newsround (sadly they aren't using it today. Here's a rundown of Newsround themes over time)

  6. Attached (1994)

    Attached by Orbital

    This originally debuted on a Peel session before appearing on the subsequent album Snivilisation a few months later. An album closer, and a good come-down song to close this list.

14 May, 2025 10:41AM

Evgeni Golov

running modified containers with podman

Everybody (who runs containers) knows this situation: you've been running happycontainer:stable for a while and it's been great but now something external changed and you need to adjust the code while there is still no release with the patch.

I've encountered exactly this when our Home-Assistant stopped showing the presence of our cat correctly, but we've also been discussing this at work recently.

Now the most obvious (to me?) solution would be to build a new container, based on the original one, and perform the modifications at build time. Something like this:

FROM happycontainer:stable
RUN curl … | patch -p1

But that's not interactive, and if you don't have a patch readily available, that's not what you want. (And I'll save you the idea of RUNing sed and friends to alter files!)

You could run vim inside the container, but that requires vim to be installed there in the first place. And a reasonable configuration. And…

Well, turns out podman can mount the root fs of a running container.

[root@sai ~]# podman mount homeassistant
/var/lib/containers/storage/overlay/f3ac502d97b5681989dff

And if you're running as non-root, you'll get an error:

[container@sai ~]$ podman mount homeassistant
Error: cannot run command "podman mount" in rootless mode, must execute `podman unshare` first

Luckily the solution is in the error message - use podman unshare

[container@sai ~]$ podman unshare
[root@sai ~]# podman mount homeassistant
/home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged

So in both cases (root and rootless) we get a path, which is the mounted root fs and we can edit things in there as we like.

[root@sai ~]# vi /home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged/usr/src/homeassistant/homeassistant/components/surepetcare/binary_sensor.py

Once done, the container can be unmounted again, and the namespace left

[root@sai ~]# podman umount homeassistant
homeassistant
[root@sai ~]# exit
[container@sai ~]$

At this point we have modified the code inside the container, but the running process is still using the old code. If we restart the container now to restart the process, our changes will be lost.

Instead, we can commit the changes as a new layer and tag the result.

[container@sai ~]$ podman commit homeassistant docker.io/homeassistant/home-assistant:stable

And now, when we restart the container, it will use the new code with our changes 🎉

[container@sai ~]$ systemctl --user restart homeassistant

Is this the best workflow you can get? Probably not. Does it work? Hell yeah!

14 May, 2025 08:54AM by evgeni

May 13, 2025

Ben Hutchings

Report for Debian BSP near Leuven in April 2025

On 26th and 27th April we held a Debian bug-squashing party near Leuven, Belgium. Several longstanding and new Debian contributors gathered to work through some of the highest priority bugs affecting the upcoming release of Debian 13 “trixie”.

We were hosted by the Familia community centre in Tildonk. As this venue currently does not have an Internet connection, we brought a mobile hotspot and a local Debian mirror.

In attendance were:

  • Debian Developers: Ben Hutchings, Nattie Mayer-Hutchings, Kurt Roeckx, and Geert Stappers
  • New contributors: Yüce Kürüm, Louis Renuart, Arnout Vandecappelle

The new contributors were variously using Arch, Fedora, and Ubuntu, and the DDs spent some time setting them up with Debian development environments.

The bugs we worked on included:

13 May, 2025 08:19PM by Ben Hutchings

May 12, 2025

Reproducible Builds

Reproducible Builds in April 2025

Welcome to our fourth report from the Reproducible Builds project in 2025. These monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. Lastly, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. reproduce.debian.net
  2. Fifty Years of Open Source Software Supply Chain Security
  3. 4th CHAINS Software Supply Chain Workshop
  4. Mailing list updates
  5. Canonicalization for Unreproducible Builds in Java
  6. OSS Rebuild adds new TUI features
  7. Distribution roundup
  8. diffoscope & strip-nondeterminism
  9. Website updates
  10. Reproducibility testing framework
  11. Upstream patches

reproduce.debian.net

The last few months have seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, we are pleased to announce that reproduce.debian.net now tests all the Debian trixie architectures except s390x and mips64el.

The ppc64el architecture was added through the generous support of Oregon State University Open Source Laboratory (OSUOSL), and we can support the armel architecture thanks to CodeThink.


Fifty Years of Open Source Software Supply Chain Security

Russ Cox has published a must-read article in ACM Queue on Fifty Years of Open Source Software Supply Chain Security. Subtitled, “For decades, software reuse was only a lofty goal. Now it’s very real.”, Russ’ article goes on to outline the history and original goals of software supply-chain security in the US military in the early 1970s, all the way to the XZ Utils backdoor of 2024. Through that lens, Russ explores the problem and how it has changed, and hasn’t changed, over time.

He concludes as follows:

We are all struggling with a massive shift that has happened in the past 10 or 20 years in the software industry. For decades, software reuse was only a lofty goal. Now it’s very real. Modern programming environments such as Go, Node and Rust have made it trivial to reuse work by others, but our instincts about responsible behaviors have not yet adapted to this new reality.

We all have more work to do.


4th CHAINS Software Supply Chain Workshop

Convened as part of the CHAINS research project at the KTH Royal Institute of Technology in Stockholm, Sweden, the 4th CHAINS Software Supply Chain Workshop occurred during April. There were a number of relevant talks during the workshop, including:

The full listing of the agenda is available on the workshop’s website.


Mailing list updates

On our mailing list this month:

  • Luca DiMaio of Chainguard posted to the list reporting that they had successfully implemented reproducible filesystem images with both ext4 and an EFI system partition. They go on to list the various methods, and the thread generated at least fifteen replies.

  • David Wheeler announced that the OpenSSF is building a “glossary” of sorts in order that they “consistently use the same meaning for the same term” and, moreover, that they have drafted a definition for ‘reproducible build’. The thread generated a significant number of replies on the definition, leading to a potential update to the Reproducible Builds project’s own definition.

  • Lastly, kpcyrd posted to the list with a timely reminder and update on their “repro-env” tool. As first reported in our July 2023 report, kpcyrd mentions that:

    My initial interest in reproducible builds was “how do I distribute pre-compiled binaries on GitHub without people raising security concerns about them”. I’ve cycled back to this original problem about 5 years later and built a tool that is meant to address this. []


Canonicalization for Unreproducible Builds in Java

Aman Sharma, Benoit Baudry and Martin Monperrus have published a new scholarly study related to reproducible builds within Java. Titled Canonicalization for Unreproducible Builds in Java, the article’s abstract is as follows:

[…] Achieving reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central and we develop a novel taxonomy of six root causes of unreproducibility. We study actionable mitigations: artifact and bytecode canonicalization using OSS-Rebuild and jNorm respectively. Finally, we present Chains-Rebuild, a tool that raises reproducibility success from 9.48% to 26.89% on 12,283 unreproducible artifacts. To sum up, our contributions are the first large-scale taxonomy of build unreproducibility causes in Java, a publicly available dataset of unreproducible builds, and Chains-Rebuild, a canonicalization tool for mitigating unreproducible builds in Java.

A full PDF of their article is available from arXiv.


OSS Rebuild adds new TUI features

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io and npm registries) and publish signed attestations and build definitions for public use.

OSS Rebuild ships a text-based user interface (TUI) for viewing, launching, and debugging rebuilds. While previously requiring ownership of a full instance of OSS Rebuild’s hosted infrastructure, the TUI now supports a fully local mode of build execution and artifact storage. Thanks to Giacomo Benedetti for his usage feedback and work to extend the local-only development toolkit.

Another feature added to the TUI was an experimental chatbot integration that provides interactive feedback on rebuild failure root causes and suggests fixes.


Distribution roundup

In Debian this month:

  • Roland Clobus posted another status report on reproducible ISO images on our mailing list this month, with the summary that “all live images build reproducibly from the online Debian archive”.

  • Debian developer Simon Josefsson published another two reproducibility-related blog posts this month, the first on the topic of Verified Reproducible Tarballs. Simon sardonically challenges the reader as follows: “Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days?” After that, they also published a blog post on Building Debian in a GitLab Pipeline using their multi-stage rebuild approach.

  • Roland also posted to our mailing list to highlight that “there is now another tool in Debian that generates reproducible output, equivs”. This is a tool to create trivial Debian packages that might Depend on other packages. As Roland writes, “building the [equivs] package has been reproducible for a while, [but] now the output of the [tool] has become reproducible as well”.

  • Lastly, 9 reviews of Debian packages were added, 10 were updated and 10 were removed this month adding to our extensive knowledge about identified issues.

The IzzyOnDroid Android APK repository made more progress in April. Thanks to funding by NLnet and Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in “less than 5 minutes”. This currently supports Debian-based systems, but support for RPM-based systems is incoming.

  • The rbuilder_setup tool can now setup the entire framework within less than five minutes. The process is configurable, too, so everything from “just the basics to verify builds” up to a fully-fledged RB environment is also possible.

  • This tool works on Debian, RedHat and Arch Linux, as well as their derivatives. The project has received successful reports from Debian, Ubuntu, Fedora and some Arch Linux derivatives so far.

  • Documentation on how to work with reproducible builds (making apps reproducible, debugging unreproducible packages, etc) is available in the project’s wiki page.

  • Future work is also in the pipeline, including documentation, guidelines and helpers for debugging.

NixOS defined an Outreachy project for improving build reproducibility. In the application phase, NixOS saw some strong candidates providing contributions, both on the NixOS side and upstream: guider-le-ecit analyzed a libpinyin issue. Tessy James fixed an issue in arandr and helped analyze one in libvlc that led to a proposed upstream fix. Finally, 3pleX fixed an issue which was accepted in upstream kitty, one in upstream maturin, one in upstream python-sip and one in the Nix packaging of python-libbytesize. Sadly, the funding for this internship fell through, so NixOS were forced to abandon their search.

Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading a number of versions to Debian:

  • Use the --walk argument over the potentially dangerous alternative --scan when calling out to zipdetails(1). []
  • Correct a longstanding issue where many >-based version tests used in conditional fixtures were broken. This was used to ensure that specific tests were only run when the version on the system was newer than a particular number. Thanks to Colin Watson for the report (Debian bug #1102658) []
  • Address a long-hidden issue in the test_versions testsuite as well, where we weren’t actually testing the greater-than comparisons mentioned above, as it was masked by the tests for equality. []
  • Update copyright years. []

In strip-nondeterminism, however, Holger Levsen updated the Continuous Integration (CI) configuration in order to use the standard Debian pipelines via debian/salsa-ci.yml instead of using .gitlab-ci.yml. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In April, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add armel.reproduce.debian.net to support the armel architecture. [][]
    • Add a new ARM node, codethink05. [][]
    • Add ppc64el.reproduce.debian.net to support testing of the ppc64el architecture. [][][]
    • Improve the reproduce.debian.net front page. [][]
    • Make various changes to the ppc64el nodes. [][][][]
    • Make various changes to the arm64 and armhf nodes. [][][][]
    • Various changes related to the rebuilderd-worker entry point. [][][]
    • Create and deploy a pkgsync script. [][][][][][][][]
    • Fix the monitoring of the riscv64 architecture. [][]
    • Make a number of changes related to starting the rebuilderd service. [][][][]
  • Backup-related:

    • Backup the rebuilder databases every week. [][][][]
    • Improve the node health checks. [][]
  • Misc:

    • Re-use existing connections to the SSH proxy node on the riscv64 nodes. [][]
    • Node maintenance. [][][]

In addition:

  • Jochen Sprickerhof fixed the riscv64 host names [] and requested access to all the rebuilderd nodes [].

  • Mattia Rizzolo updated the self-serve rebuild scheduling tool, replacing the deprecated “SSO”-style authentication with OpenIDC which authenticates against salsa.debian.org. [][][]

  • Roland Clobus updated the configuration for the osuosl3 node to designate 4 workers for bigger builds. []


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

12 May, 2025 07:00PM

May 11, 2025

Sergio Durigan Junior

Debian Bug Squashing Party Brazil 2025

With the trixie release approaching, I had the idea back in April to organize a bug squashing party with the Debian Brasil community. I believe the outcome was very positive, and we were able to tackle and fix quite a number of release-critical bugs. This is a brief report of what we did.

A remote BSP

It’s not the first time I organize a BSP: back in 2019, I helped throw another similar party in Toronto. The difference this time is that, because Brazil is a big country and (perhaps most importantly) because I’m not currently living there, the BSP had to be done online.

I’m a fan of social interactions (especially with the Brazilian community), and in my experience we usually can achieve much more when we get together in a physical place, but hey, you gotta do what you gotta do…

Most (if not all) of the folks interested in participating had busy weekdays, so it was decided that we would meet during the weekends and try to work on a few bugs over Jitsi. Nothing stopped people from working on bugs during the week as well, of course.

A tag to rule them all

We used the bsp-2025-04-brazil usertag to mark those bugs that were touched by us. You can see the full list of bugs here, although the current list (as of 2025-05-11) is smaller than the one we had by the end of April. I don’t know what happened; maybe it’s some glitch with the BTS, or maybe someone removed the usertag by mistake.
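
For reference, applying such a usertag with the bts tool from devscripts looks roughly like this (the user address and bug number are placeholders):

# Set the usertag namespace, then tag the bug
bts user debian-brasil@example.org , usertags 1234567 + bsp-2025-04-brazil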

Stats

In total, we had:

  • 7 participants
  • 37 bugs handled. Of those,
  • 35 bugs fixed

The BSP officially started on 04 April 2025, and ended on 30 April 2025. I was able to attend meetings during two weekends; other people participated more sporadically.

Outcome

As I said above, the Debian Brasil community is great and very engaged in the project. Speaking more specifically about the Debian Brasil Devel group, I can say that we have contributors with strong technical skills, and I really love that we have this inclusive, extremely technical culture where debugging and understanding things is really core to pretty much all our discussions.

We already meet weekly on Thursdays to talk shop and help newcomers, so having a remote BSP with this group seemed like a logical thing to do. I’m really glad to see our results and even happier to hear positive feedback from the community during the last MiniDebConf in Maceió.

There’s some interest in organizing another BSP, this time face-to-face and during the next DebConf. I’m all for it, as I love fixing bugs and having a great time with friends. If you’re interested in attending, let me know.

Thanks, and until next time.

11 May, 2025 10:00PM

Bits from Debian

Bits from the DPL

Dear Debian community,

This is bits from the DPL for April.

End of 10

I am sure I was speaking in the interest of the whole project when joining the "End of 10" campaign. Here is what I wrote to the initiators:

Hi Joseph and all drivers of the "End of 10" campaign,

On behalf of the entire Debian project, I would like to say that we proudly join your great campaign. We stand with you in promoting Free Software, defending users' freedoms, and protecting our planet by avoiding unnecessary hardware waste. Thank you for leading this important initiative.

Andreas Tille
Debian Project Leader

I have some goals I would like to share with you for my second term.

Ftpmaster delegation

This splits up into tasks that can be done before and after Trixie release.

Before Trixie:

1. Reducing Barriers to DFSG Compliance Checks

Back in 2002, Debian established a way to distribute cryptographic software in the main archive, whereas such software had previously been restricted to the non-US archive. One result of this arrangement which influences our workflow is that all packages uploaded to the NEW queue must remain on the server that hosts it. This requirement means that members of the ftpmaster team must log in to that specific machine, where they are limited to a restricted set of tools for reviewing uploaded code.

This setup may act as a barrier to participation--particularly for contributors who might otherwise assist with reviewing packages for DFSG compliance. I believe it is time to reassess this limitation and work toward removing such hurdles.

In October last year, we had some initial contact with SPI's legal counsel, who noted that US regulations around cryptography have been relaxed somewhat in recent years (as of 2021). This suggests it may now be possible to revisit and potentially revise the conditions under which we manage cryptographic software in the NEW queue.

I plan to investigate this further. If you have expertise in software or export control law and are interested in helping with this topic, please get in touch with me.

The ultimate goal is to make it easier for more people to contribute to ensuring that code in the NEW queue complies with the DFSG.

2. Discussing Alternatives

My chances to reach out to other distributions remained limited. However, regarding the processing of new software, I learned that OpenSUSE uses a Git-based workflow that requires five "LGTM" approvals from a group of trusted developers. As far as I know, Fedora follows a similar approach.

Inspired by this, a recent community initiative--the Gateway to NEW project--enables peer review of new packages for DFSG compliance before they enter the NEW queue. This effort allows anyone to contribute by reviewing packages and flagging potential issues in advance via Git. I particularly appreciate that the DFSG review is coupled with CI, allowing for both license and technical evaluation.

While this process currently results in some duplication of work--since final reviews are still performed by the ftpmaster team--it offers a valuable opportunity to catch issues early and improve the overall quality of uploads. If the community sees long-term value in this approach, it could serve as a basis for evolving our workflows. Integrating it more closely into DAK could streamline the process, and we've recently seen that merge requests reflecting community suggestions can be accepted promptly.

For now, I would like to gather opinions about how such initiatives could best complement the current NEW processing, and whether greater consensus on trusted peer review could help reduce the burden on the team doing DFSG compliance checks. Submitting packages for review and automated testing before uploading can improve quality and encourage broader participation in safeguarding Debian's Free Software principles.

My explicit thanks go out to the Gateway to NEW team for their valuable and forward-looking contribution to Debian.

3. Documenting Critical Workflows

Past ftpmaster trainees have told me that understanding the full set of ftpmaster workflows can be quite difficult. While there is some useful documentation − thanks in particular to Sean Whitton for his work on documenting NEW processing rules – many other important tasks carried out by the ftpmaster team remain undocumented or only partially so.

Comprehensive and accessible documentation would greatly benefit current and future team members, especially those onboarding or assisting in specific workflows. It would also help ensure continuity and transparency in how critical parts of the archive are managed.

If such documentation already exists and I have simply overlooked it, I would be happy to be corrected. Otherwise, I believe this is an area where we need to improve significantly. Volunteers with a talent for writing technical documentation are warmly invited to contact me--I'd be happy to help establish connections with ftpmaster team members who are willing to share their knowledge so that it can be written down and preserved.

Once Trixie is released (hopefully before DebConf):

4. Split of the Ftpmaster Team into DFSG and Archive Teams

As discussed during the "Meet the ftpteam" BoF at DebConf24, I would like to propose a structural refinement of the current Ftpmaster team by introducing two different delegated teams:

  1. DFSG Team
  2. Archive Team (responsible for DAK maintenance and process tooling, including releases)

(Alternative name suggestions are, of course, welcome.) The primary task of the DFSG team would be the processing of the NEW queue and ensuring that packages comply with the DFSG. The Archive team would focus on maintaining DAK and handling the technical aspects of archive management.

I am aware that, in the recent past, the ftpmaster team has decided not to actively seek new members. While I respect the autonomy of each team, the resulting lack of a recruitment pipeline has led to some friction and concern within the wider community, including myself. As Debian Project Leader, it is my responsibility to ensure the long-term sustainability and resilience of our project, which includes fostering an environment where new contributors can join and existing teams remain effective and well-supported. Therefore, even if the current team does not prioritize recruitment, I will actively seek and encourage new contributors for both teams, with the aim of supporting openness and collaboration.

This proposal is not intended as criticism of the current team's dedication or achievements--on the contrary, I am grateful for the hard work and commitment shown, often under challenging circumstances. My intention is to help address the structural issues that have made onboarding and specialization difficult and to ensure that both teams are well-supported for the future.

I also believe that both teams should regularly inform the Debian community about the policies and procedures they apply. I welcome any suggestions for a more detailed description of the tasks involved, as well as feedback on how best to implement this change in a way that supports collaboration and transparency.

My intention with this proposal is to foster a more open and effective working environment, and I am committed to working with all involved to ensure that any changes are made collaboratively and with respect for the important work already being done.

I'm aware that the ideas outlined above touch on core parts of how Debian operates and involve responsibilities across multiple teams. These are not small changes, and implementing them will require thoughtful discussion and collaboration.

To move this forward, I've registered a dedicated BoF for DebConf. To make the most of that opportunity, I'm looking for volunteers who feel committed to improving our workflows and processes. With your help, we can prepare concrete and sensible proposals in advance--so the limited time of the BoF can be used effectively for decision-making and consensus-building.

In short: I need your help to bring these changes to life. From my experience in my last term, I know that when it truly matters, the Debian community comes together--and I trust that spirit will guide us again.

Please also note: we had a "Call for volunteers" five years ago, and much of what was written there still holds true today. I've been told that the response back then was overwhelming--but that training such a large number of volunteers didn't scale well. This time, I hope we can find a more sustainable approach: training a few dedicated people first, and then enabling them to pass on their knowledge. This will also be a topic at the DebCamp sprint.

Dealing with Dormant Packages

Debian was founded on the principle that each piece of software should be maintained by someone with expertise in it--typically a single, responsible maintainer. This model formed the historical foundation of Debian's packaging system and helped establish high standards of quality and accountability. However, as the project has grown and the number of packages has expanded, this model no longer scales well in all areas. Team maintenance has since emerged as a practical complement, allowing multiple contributors to share responsibility and reduce bottlenecks--depending on each team's internal policy.

While working on the Bug of the Day initiative, I observed a significant number of packages that have not been updated in a long time. In the case of team-maintained packages, addressing this is often straightforward: team uploads can be made, or the team can be asked whether the package should be removed. We've also identified many packages that would fit well under the umbrella of active teams, such as language teams like Debian Perl and Debian Python, or blends like Debian Games and Debian Multimedia. Often, no one has taken action--not because of disagreement, but simply due to inattention or a lack of initiative.

In addition, we've found several packages that probably should be removed entirely. In those cases, we've filed bugs with pre-removal warnings, which can later be escalated to removal requests.

When a package is still formally maintained by an individual, but shows signs of neglect (e.g., no uploads for years, unfixed RC bugs, failing autopkgtests), we currently have three main tools:

  1. The MIA process, which handles inactive or unreachable maintainers.
  2. Package Salvaging, which allows contributors to take over maintenance if conditions are met.
  3. Non-Maintainer Uploads (NMUs), which are limited to specific, well-defined fixes (which do not include things like migration to Salsa).

These mechanisms are important and valuable, but they don't always allow us to react swiftly or comprehensively enough. Our tools for identifying packages that are effectively unmaintained are relatively weak, and the thresholds for taking action are often high.

The Package Salvage team is currently trialing a process we've provisionally called "Intend to NMU" (ITN). The name is admittedly questionable--some have suggested alternatives like "Intent to Orphan"--and discussion about this is ongoing on debian-devel. The mechanism is intended for situations where packages appear inactive but aren't yet formally orphaned, introducing a clear 21-day notice period before NMUs, similar in spirit to the existing ITS process. The discussion has sparked suggestions for expanding NMU rules.

While it is crucial not to undermine the autonomy of maintainers who remain actively involved, we also must not allow a strict interpretation of this autonomy to block needed improvements to obviously neglected packages.

To be clear: I do not propose to change the rights of maintainers who are clearly active and invested in their packages. That model has served us well. However, we must also be honest that, in some cases, maintainers stop contributing--quietly and without transition plans. In those situations, we need more agile and scalable procedures to uphold Debian's high standards.

To that end, I've registered a BoF session for DebConf25 to discuss potential improvements in how we handle dormant packages. These discussions will be prepared during a sprint at DebCamp, where I hope to work with others on concrete ideas.

Among the topics I want to revisit is my proposal from last November on debian-devel, titled "Barriers between packages and other people". While the thread prompted substantial discussion, it understandably didn't lead to consensus. I intend to ensure the various viewpoints are fairly summarised--ideally by someone with a more neutral stance than myself--and, if possible, work toward a formal proposal during the DebCamp sprint to present at the DebConf BoF.

My hope is that we can agree on mechanisms that allow us to act more effectively in situations where formerly very active volunteers have, for whatever reason, moved on. That way, we can protect both Debian's quality and its collaborative spirit.

Building Sustainable Funding for Debian

Debian incurs ongoing expenses to support its infrastructure--particularly hardware maintenance and upgrades--as well as to fund in-person meetings like sprints and mini-DebConfs. These investments are essential to our continued success: they enable productive collaboration and ensure the robustness of the operating system we provide to users and derivative distributions around the world.

While DebConf benefits from generous sponsorship, and we regularly receive donated hardware, there is still considerable room to grow our financial base--especially to support less visible but equally critical activities. One key goal is to establish a more constant and predictable stream of income, helping Debian plan ahead and respond more flexibly to emerging needs.

This presents an excellent opportunity for contributors who may not be involved in packaging or technical development. Many of us in Debian are engineers first--and fundraising is not something we've been trained to do. But just like technical work, building sustainable funding requires expertise and long-term engagement.

If you're someone who's passionate about Free Software and has experience with fundraising, donor outreach, sponsorship acquisition, or nonprofit development strategy, we would deeply value your help. Supporting Debian doesn't have to mean writing code. Helping us build a steady and reliable financial foundation is just as important--and could make a lasting impact.

Kind regards Andreas.

PS: In April I also planted my 5000th tree and while this is off-topic here I'm proud to share this information with my fellow Debian friends.

11 May, 2025 10:00PM by Andreas Tille

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSMC 0.2.8 on CRAN: Maintenance

Release 0.2.8 of our RcppSMC package arrived at CRAN yesterday. RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now also features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.

This release is somewhat procedural and contains solely maintenance, either for items now highlighted by the R and CRAN package checks, or to package internals. We had made those changes at the GitHub repo over time since the last release two years ago, and it seemed like a good time to get them to CRAN now.

The release is summarized below.

Changes in RcppSMC version 0.2.8 (2025-05-10)

  • Updated continuous integration script

  • Updated package metadata now using Authors@R

  • Corrected use of itemized list in one manual page

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More information is on the RcppSMC page and the repo. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 May, 2025 04:25PM

May 09, 2025

Uwe Kleine-König

The Linux kernel's PGP Web of Trust

The Linux kernel's development process makes use of PGP. The most relevant part here is that subsystem maintainers are supposed to use signed tags in their pull requests to Linus Torvalds. As the concept of keyservers is considered broken, Konstantin Ryabitsev maintains a collection of relevant keys in a git repository.

As of today (at commit a0bc65fb27f5033beddf9d1ad97d67c353849be2) there are 602 valid keys tracked in that repository. The requirement for a key to be added there is that there must be at least one trust path from Linus Torvalds' key to this key of length at most 5 within that keyring.

Occasionally it happens that a key loses its trust paths because someone in these paths replaced their key, or keys expired. Currently this affects 2 keys.

However, there is a problem on the horizon: GnuPG 2.4.x started to reject third-party key signatures using the SHA-1 hash algorithm. In general that's good; SHA-1 hasn't been considered secure for more than 20 years. This doesn't directly affect the kernel-pgpkeys repo, because the trust path checking doesn't rely on GnuPG trusting the signatures; there is a dedicated tool that parses the keyring contents and currently accepts signatures using SHA-1. Also, signatures are usually not thrown away, but there are exceptions: recently Theodore Ts'o asked to update his certificate. When Konstantin imported the updated certificate, GnuPG's "cleaning" was applied, which dropped all SHA-1 signatures. So Theodore Ts'o's key lost 168 signatures, among them one by Linus Torvalds on his primary UID.

That made me wonder what would be the effect on the web of trust if all SHA-1 signatures were dropped. Here are the facts:

  • There are 7976 signatures tracked in the korg-pgpkeys repo that are considered valid, 6045 of them use SHA-1.

  • Considering only the primary UID, Linus Torvalds directly signed 40 public keys, 38 of these using SHA-1. One of the two keys that is still "properly" signed doesn't sign any other key. So nearly all trust paths go through a single key.

  • When not considering SHA-1 signatures there are 485 public keys without a trust path from Linus Torvalds of length 5 or less. So today these 485 public keys would not qualify to be added to the pgpkeys git repository. Among the people being dropped are Andrew Morton, Greg Kroah-Hartman, H. Peter Anvin, Ingo Molnar, Junio C Hamano, Konstantin Ryabitsev, Peter Zijlstra, Stephen Rothwell and Thomas Gleixner.

  • The size of the kernel strong set is reduced from 358 to 94.

If you attend Embedded Recipes 2025 next week, there is an opportunity to improve the situation: Together with Ahmad Fatoum I'm organizing a keysigning session. If you want to participate, send your public key to er2025-keysigning@baylibre.com before 2025-05-12 08:00 UTC.
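
If you want to check which certifications on a given key still use SHA-1, gpg can show the digest algorithm of each signature packet; a rough sketch (the key ID is a placeholder, and in OpenPGP "digest algo 2" means SHA-1, while 8 and 10 are SHA-256 and SHA-512):

gpg --export 0xDEADBEEFDEADBEEF | gpg --list-packets | grep 'digest algo'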

09 May, 2025 07:29PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.22 on CRAN: New Upstream

Version 0.0.22 of RcppSpdlog arrived on CRAN today and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.3 of spdlog, which was released this morning, and includes version 11.2.0 of fmt.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.22 (2025-05-09)

  • Upgraded to upstream release spdlog 1.15.3 (including fmt 11.2.0)

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

09 May, 2025 06:55PM

Abhijith PA

Bug squashing party, Kochi

Last weekend, 4 people (3 DDs and 1 soon-to-be DD, hopefully in the coming months) sat together for a Bug Squashing Party in Kochi. We fixed a lot of things, including my broken autopkgtest setup.


It all began with a discussion in #debian-in about never having had any BSPs in India, which then twisted into me hosting one. I fixed the dates to 3rd & 4th May so that packages fixed with NMUs could migrate naturally to testing before the hard freeze on 15th May.

Finding a venue was a huge challenge. Unlike other places, we have very limited options for hackerspaces. We could also have used some company spaces (if we asked), but we might have had to follow their office timings, and finding accommodation nearby was also a challenge.

Later we decided to go with a rental apartment where we could hack all night and sleep. We booked a very bare-minimum apartment for 3 nights and 3 days. I updated the wiki page and sent an announcement.

The apartment didn't even have Wi-Fi, so we set up everything ourselves (DebConf style :p ). I shortlisted some newbie bugs, just in case newcomers joined the party. But in the end it was only the 4 of us, plus Kathara who joined remotely.

We started on the night of May 2nd, stocked our cabin with snacks, instant noodles and drinks, arranged beds and tables, and started hacking and having discussions. My autopkgtest-lxc setup was broken; I think it's related to #1017753, which got fixed magically, and I have now started using autopkgtest-podman.


I learned

  • the reportbug tool can use its own SMTP server by default
  • autoremovals can be extended if we ping the bug report.

On the last day, we went to a nice restaurant and had a good meal. There was a church festival nearby, so we were able to watch a wonderful procession and fireworks at night.


All in all we managed to touch 46 bugs, of which 35 are now fixed/done and 11 are open; some of these will get marked done when the fixes reach testing. It was a fun and productive weekend. More importantly, we had fun.

09 May, 2025 04:46PM

May 08, 2025

Thorsten Alteholz

My Debian Activities in April 2025

Debian LTS

This was my hundred-thirtieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4145-1] expat security update to fix one CVE related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [DLA 4146-1] libxml2 security update to fix two CVEs related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Bookworm.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Unstable.

This month I did a week of FD duties. I also started to work on libxmltok. Adrian suggested also checking the CVEs that might affect the embedded version of expat. Unfortunately there are a bunch of CVEs to check, and the month ended before the upload. I hope to finish this in May. Last but not least, I continued to work on the second batch of fixes for suricata CVEs.

Debian ELTS

This month was the eighty-first ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1411-1] expat security update to fix one CVE in Stretch and Buster related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [ELA-1412-1] libxml2 security update to fix two CVEs in Jessie, Stretch and Buster related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.

This month I did a week of FD duties.
I also started to work on libxmltok. Normally I work on machines running Bullseye or Bookworm. As the Stretch version of libxmltok needs debhelper version 5, which is no longer supported on Bullseye, I had to create a separate Buster VM. Yes, Stretch is becoming old. As with LTS, I also need to check the CVEs that might affect the embedded version of expat.
Last but not least I started to work on the second batch of fixes for suricata CVEs.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

misc

This month I uploaded new packages or new upstream or bugfix versions of:

bottlerocket was my first upload via debusine. It is a really cool tool and I can only recommend that everybody give it at least a try.
I finally filed an RM bug for siggen. I don’t think that fixing all the gcc-14 issues is really worth the hassle.


FTP master

This month I accepted 307 and rejected 55 packages. The overall number of packages that got accepted was 308.

08 May, 2025 12:05PM by alteholz

May 07, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

procmail versus exim filters

I’ve been using Procmail to filter mail for a long time. Reading Antoine’s blog post procmail considered harmful, I felt motivated (and shamed) into migrating to something else. Luckily, Enrico's shared a detailed roadmap for moving to Sieve, in particular Dovecot's Sieve implementation (which provides "pipe" and "filter" extensions).

My MTA is Exim, and for my first foray into this, I didn't want to change that1. Exim provides two filtering languages for users: an implementation of Sieve, and its own filter language.

Requirements

A good first step is to look at what I'm using Procmail for:

  1. I invoke external mail filters: processes which read the mail and emit a possibly altered mail (headers added, etc.). In particular, crm114 (which has worked remarkably well for me) to classify mail as spam or not, and dsafilter, to mark up Debian Security Advisories

  2. I file messages into different folders depending on the outcome of the above filters

  3. I drop mail ("killfile") some sender addresses (persistent pests on mailing lists); and mails containing certain hosts in the References header (as an imperfect way of dropping mailing list threads which are replies to someone I've killfiled); and mail encoded in a character set for a language I can't read (Russian, Korean, etc.), and several other simple static rules

  4. I move mailing list mail into folders, semi-automatically (see list filtering)

  5. I strip "tagged" subjects for some mailing lists: i.e., incoming mail has subjects like "[cs-historic-committee] help moving several tons of IBM360", and I don't want the "[cs-historic-committee]" bit.

  6. I file a copy of some messages, the name of which is partly derived from the current calendar year

Exim Filters

I want to continue to do (1), which rules out Exim's implementation of Sieve, which does not support invoking external programs. Exim's own filter language has a pipe function that might do what I need, so let's look at how to achieve the above with Exim Filters.

autolists

Here's an autolist recipe for Debian's mailing lists, in Exim filter language. Contrast with the Procmail in list filtering:

if $header_list-id matches "(debian.*)\.lists\.debian\.org"
then
  save Maildir/l/$1/
  finish
endif

Hands down, the exim filter is nicer (although some of the rules on escape characters in exim filters, not demonstrated here, are byzantine).

killfile

An ideal chunk of configuration for kill-filing a list of addresses is light on boilerplate and easy to add more addresses to in the future. This is the best I could come up with:

if foranyaddress "someone@example.org,\
                  another@example.net,\
                  especially-bad.example.com,\
                 "
   ($reply_address contains $thisaddress
    or $header_references contains $thisaddress)
then finish endif

I won't bother sharing the equivalent Procmail but it's pretty comparable: the exim filter is no great improvement.

It would be lovely if the list of addresses could be stored elsewhere, such as a simple text file, one line per address, or even a database. Exim's own configuration language (distinct from this filter language) has some nice mechanisms for reading lists of things like addresses from files or databases. Sadly it seems the filter language lacks anything similar.

external filters

With Procmail, I pass the mail to an external program, and then read the output of that program back, as the new content of the mail, which continues to be filtered: subsequent filter rules inspect the headers to see what the outcome of the filter was (is it spam?) and to decide what to do accordingly. Crucially, we also check the return status of the filter, to handle the case when it fails.

With Exim filters, we can use pipe to invoke an external program:

pipe "$home/mail/mailreaver.crm -u $home/mail/"

However, this is not a filter: the mail is sent to the external program, and the exim filter's job is complete. We can't write further filter rules to continue to process the mail: the external program would have to do that; and we have no way of handling errors.

Here's Exim's documentation on what happens when the external command fails:

Most non-zero codes are treated by Exim as indicating a failure of the pipe. This is treated as a delivery failure, causing the message to be returned to its sender.

That is definitely not what I want: if the filter broke (even temporarily), Exim would seemingly generate a bounce to the sender address, which could be anything, and I wouldn't have a copy of the message.

The documentation goes on to say that some shell return codes (defaulting to 73 and 75) cause Exim to treat it as a temporary error, spool the mail and retry later on. That's a much better behaviour for my use-case. Having said that, on the rare occasions I've broken the filter, the thing which made me notice most quickly was spam hitting my inbox, which my Procmail recipe achieves.
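
A possible mitigation, assuming the default temporary-error codes of 73 and 75 are kept, is to call the external program via a small wrapper that maps any failure to exit code 75, so Exim defers rather than bounces. A sketch (the wrapper path and name are hypothetical):

#!/bin/sh
# hypothetical ~/mail/classify-wrapper: feed the message (on stdin) to the
# classifier; if it fails for any reason, exit 75 (EX_TEMPFAIL) so Exim
# queues the message and retries later instead of bouncing it.
$HOME/mail/mailreaver.crm -u $HOME/mail/ || exit 75

The filter would then do pipe "$home/mail/classify-wrapper", though this still doesn't let later filter rules see the classifier's output.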

removing subject tagging

Here, Exim's filter language gets unstuck. There is no way to add or alter headers for a message in a user filter. Exim uses the same filter language for system-wide message filtering, and in that context, it has some extra functions: headers add <string>, headers remove <string>, but (for reasons I don't know) these are not available for user filters.

copy mail to archive folder

I can't see a way to derive a folder name from the calendar year.
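
For comparison, the Procmail side of this is roughly a recipe along the following lines (a simplified sketch, not my exact rule), which copies the message to an mbox named after the current year:

YEAR=`date +%Y`

:0 c
archive.$YEAR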

next steps

Exim's Sieve implementation and its filter language are ruled out as Procmail replacements because they can't do at least two of the things I need to do.

However, based on Enrico's write-up, it looks like Dovecot's Sieve implementation probably can. I was also recommended maildrop, which I might look at if Dovecot Sieve doesn't pan out.


  1. I should revisit this requirement because I could probably reconfigure exim to run my spam classifier at the system level, obviating the need to do it in a user filter, and also raising the opportunity to do smtp-time rejection based on the outcome

07 May, 2025 10:16AM

May 06, 2025

Enrico Zini

Python-like abspath for c++

Python's os.path.abspath or Path.absolute are great: you give them a path, which might not exist, and you get a path you can use regardless of the current directory. os.path.abspath will also normalize it, while Path will not by default because with Paths a normal form is less needed.

This is great to normalize input, regardless of if it's an existing file you're needing to open, or a new file you're needing to create.

In C++17, there is a filesystem library with methods with enticingly similar names, but which are almost, but not quite, totally unlike Python's abspath.

Because in my C++ code I need to normalize input, regardless of if it's an existing file I'm needing to open or a new file I'm needing to create, here's an apparently working Python-like abspath for C++ implemented on top of the std::filesystem library:

std::filesystem::path abspath(const std::filesystem::path& path)
{
    // weakly_canonical is defined as "the result of calling canonical() with a
    // path argument composed of the leading elements of p that exist (as
    // determined by status(p) or status(p, ec)), if any, followed by the
    // elements of p that do not exist."
    //
    // This means that if no lead components of the path exist then the
    // resulting path is not made absolute, and we need to work around that.
    if (!path.is_absolute())
        return abspath(std::filesystem::current_path() / path);

    // This is further and needlessly complicated because we need to work
    // around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    unsigned retry = 0;
    while (true)
    {
        std::error_code code;
        auto result = std::filesystem::weakly_canonical(path, code);
        if (!code)
        {
            // fprintf(stderr, "%s: ok in %u tries\n", path.c_str(), retry+1);
            return result;
        }

        if (code == std::errc::no_such_file_or_directory)
        {
            ++retry;
            if (retry > 50)
                throw std::system_error(code);
        }
        else
            throw std::system_error(code);
    }

    // Alternative implementation that however may not work on all platforms
    // since, formally, "[std::filesystem::absolute] Implementations are
    // encouraged to not consider p not existing to be an error", but they do
    // not mandate it, and if they did, they might still be affected by the
    // undefined behaviour outlined in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    //
    // return std::filesystem::absolute(path).lexically_normal();
}
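
A minimal usage sketch on POSIX, assuming C++17, the headers the function needs (<filesystem>, <system_error>) plus <cstdio>, and the abspath function above in scope:

int main(int argc, const char* argv[])
{
    // The argument may name a file that does not exist yet; the result is an
    // absolute, normalized path resolved against the current directory.
    auto resolved = abspath(argc > 1 ? argv[1] : "data/output.txt");
    std::printf("%s\n", resolved.c_str());
    return 0;
}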

I added it to my wobble code repository, which is the thin repository of components I use to ease my C++ systems programming.

06 May, 2025 09:51AM

May 04, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#47: r2u at its Third Birthday

Welcome to post 47 in the $R^4 series!

r2u provides Ubuntu binaries for all CRAN packages for the R system. It started three years ago, and offers Linux users on Ubuntu what Windows and macOS users already experience: fast, easy and reliable installation of binary packages. But by integrating with the system package manager (something that cannot be done on those other operating systems) we can fully integrate it with the underlying system. External libraries are resolved as shared libraries and handled by the system package manager. This offers fully automatic installation both at the initial installation and at all subsequent upgrades. R users just say, e.g., install.packages("sf") and the spatial libraries proj, gdal, geotiff (as well as several others) are automatically installed as dependencies in the correct versions. And they remain installed along with sf, as the system package manager now knows of the dependency.

Work on r2u began as a quick weekend experiment in March 2022, and by May 4 a first release was marked in the NEWS file after a few brave alpha testers kicked the tires quite happily. This makes today the third anniversary of that first release, and marks a good time to review where we are. This short post does this, and stresses three aspects: overall usage, current versions, and new developments.

Steadily Growing Usage at 42 Million Packages Shipped

r2u ships from two sites. Its main repository is at the University of Illinois campus providing ample and heavily redundant bandwidth. We remain very grateful for the sponsorship from Atlas. It also still ships from my own server, though that may be discontinued or could be spotty as it is on retail fiber connectivity. As we have access to both sets of server logs, we can tabulate and chart usage. As of yesterday, total downloads were north of 42 million with current weekly averages around 500 thousand. These are quite staggering numbers for what started as a small hobby project, and are quite humbling.

Usage is driven by deployment in continuous integration (as, for example, the use of Ubuntu at GitHub makes it both an easy and obvious choice), cloud computing (as it is easy to spin up Ubuntu instances, and just as easy to add r2u via four simple commands or one short script), explorative use (for example on Google Colab), or of course general laptop, desktop, or server settings.

Current Versions

Since r2u began, we have added two Ubuntu LTS releases, three annual R releases, as well as multiple BioConductor releases. BioConductor support is on a ‘best-efforts’ basis, motivated primarily by the need to support CRAN packages with BioConductor dependencies. It has grown to around 500 packages and includes the top 250 by usage.

Right now, current versions R 4.5.0 and BioConductor 3.21, both released last month, are supported.

New Development: arm64

A recent change is the support of the arm64 platform. As discussed in the introductory post, it is a popular and increasingly common CPU choice, seen anywhere from the Raspberry Pi 5 and its Cortex CPU to in-house cloud computing platforms (called, respectively, Graviton at AWS and Axion at GCP), general server use via Ampere CPUs, Cortex-based laptops that are starting to appear, and last but not least the popular M1 to M4-based macOS machines. (For macOS, one key appeal is ‘lighter-weight’ Docker use, as these M1 to M4 CPUs can run arm64-based containers without a translation layer, making it an attractive choice.)

This is currently supported only for the ‘noble’ aka 24.04 release. GitHub Actions, where we compile these packages, now also supports ‘jammy’ aka 22.04, but it may not be worth expanding there as the current ‘latest’ release is available. We have not yet added BioConductor support but may do so. Drop us a line (maybe via an issue) if this is of interest.

With the provision of arm64 binaries, we also started to make heavier use of GitHub Actions. The BioConductor 3.21 release binaries were also created there. This also makes the provision more transparent, as the configuration repo and the two builder repos (arm64, bioc) are public, as is of course the main r2u repo.

Summing Up

This short post summarised the current state of r2u along with some recent news. If you are curious, head over to the r2u site and try it, for example in a rocker/r2u container.
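
A quick way to kick the tires, assuming Docker is available (the package chosen here is arbitrary):

docker run --rm -ti rocker/r2u Rscript -e 'install.packages("sf"); library(sf)'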

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

04 May, 2025 09:04PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in April 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Request for OpenSSH debugging help

Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.

Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.
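
For anyone willing to have a go, a rough sketch of the upstream bisection follows; the tag names assume openssh-portable's usual V_X_Y_PZ convention, and the test script is something you would write yourself to build sshd and reproduce the crash, so adjust both as needed:

git clone https://github.com/openssh/openssh-portable.git
cd openssh-portable
git bisect start V_10_0_P1 V_9_9_P2   # bad release first, then good
git bisect run ./build-and-test-sshd.sh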

OpenSSH

I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they’re the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change.

I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.

In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.

I fixed a couple of packaging bugs:

I reviewed and merged several packaging contributions from others:

dput-ng

Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.

We also ran into dput-ng: --override doesn't override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.

man-db

I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.

debmirror

I fixed one security bug: debmirror prints credentials with --progress.

Python team

I upgraded these packages to new upstream versions:

In bookworm-backports, I updated these packages:

  • python-django to 3:4.2.20-1 (issuing BSA-123)
  • python-django-pgtrigger to 4.13.3

I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze).

I fixed or helped to fix various other build/test failures:

I packaged python-typing-inspection, needed for a new upstream version of pydantic.

I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files.

I fixed other odds and ends of bugs:

Science team

I fixed various build/test failures:

04 May, 2025 03:38PM by Colin Watson

Russ Allbery

Review: The Book That Held Her Heart

Review: The Book That Held Her Heart, by Mark Lawrence

Series: Library Trilogy #3
Publisher: ACE
Copyright: 2025
ISBN: 0-593-43799-3
Format: Kindle
Pages: 367

The Book That Held Her Heart is the third and final book of the Library fantasy trilogy and a direct sequel to The Book That Broke the World. Lawrence provides a much-needed summary of the previous volumes at the start of this book (thank you to every author who does this!), but I was still struggling a bit with the blizzard of character names. I recommend reading this series entry in relatively close proximity to the other two.

At the end of the previous book, and following some rather horrific violence, the cast split into four groups. Three of those are pursuing different resolutions to the moral problem of the Library's existence. The fourth group opens the book still stuck with the series villains, who were responsible for the over-the-top morality that undermined my enjoyment of The Book That Broke the World. Lawrence follows all four groups in interwoven chapters, maintaining that complex structure through most of this book. I thought this was a questionable structural decision that made this book feel choppy, disconnected, and unnecessarily confusing.

The larger problem, though, is that this is the payoff book, the book where we find out if Lawrence is equal to the tricky ethical questions he's raised and the world-building masterpiece that The Book That Wouldn't Burn kicked off. The answer, unfortunately, is "not really." This is not a total failure; there are some excellent set pieces and world-building twists, and the characters remain likable and enjoyable to read about (although the regrettable sidelining of Livira continues). But the grand finale is weirdly conservative and not particularly grand, and Lawrence's answer to the moral questions he raised is cliched and wholly unsatisfying.

I was really hoping Lawrence was going somewhere more interesting than "Nazis bad." I am entirely sympathetic to this moral position, but so is every other likely reader of this series, and we all know how that story goes. What a waste of a compelling setup.

Sadly, "Nazis bad" isn't even a metaphor for the black-and-white morality that Lawrence first introduced at the end of the previous book. It's a literal description of the main moral thrust of this book. Lawrence introduces yet another new character and timeline so that he can write about thinly-disguised Nazis persecuting even more thinly-disguised Jews, and this conflict is roughly half this book. It's also integral to the ending, which uses obvious, stock secular sainthood as a sort of trump card to resolve ideological conflicts at the heart of the series.

This is one of the things I was worried about after I read the short stories that Lawrence published between the volumes of this series. All of them were thuddingly trite, which did not make me optimistic that Lawrence would find a sufficiently interesting answer to his moral trilemma to satisfy the high expectations created by the build-up. That is, I am sad to report, precisely the failure mode of this book. The resolution of the moral question of the series is arguably radical within the context of the prior world-building, but in a way that effectively reduces it to the boring, small-c conservative bromides of everyday reality. This is precisely the opposite of why I read fantasy, and I did not find Lawrence's arguments for it at all convincing. Neither, I think, did Lawrence, given that the critical debate takes place off camera so that he could avoid having to present the argument.

This is, unfortunately, another series where the author's reach exceeded their grasp. The world-building of The Book That Wouldn't Burn is a masterpiece that created one of the most original and compelling settings that I have read in fantasy for a long time, but unfortunately Lawrence did not have an equally original plan for how to use the setting. This is a common problem and I'm not going to judge it too harshly; it's much harder to end a series than it is to start one. I thought the occasional flashes of brilliance was worth the journey, and they continue into this book with some elaborations on the Library's mythic structure that are going to stick in my mind.

You can sense the story slipping away from the hoped-for conclusion as you read, though. The story shifts more and more away from the setting and the world-building and towards character stories, and while Lawrence's characters are fine, they're not that novel. I am happy to read about Clovis and Arpix, but I can read variations of that story in a lot of places. Livira never recovers her dynamism and drive from the first book, and there is much less beneath Yute's thoughtful calm than I was hoping to find. I think Lawrence knows that the story was not entirely working because the narrative voice becomes more strident as the morality becomes less interesting. I know of only one fantasy author who can make this type of overbearing and freighted narrative style work, and Lawrence is sadly not Guy Gavriel Kay.

This is not a bad book. It is an enjoyable adventure story on its own terms, with some moments of real beauty and awe and a handful of memorable characters, somewhat undermined by a painfully obvious and unoriginal moral frame. It's only a disappointment in the context of what came before it, and it is far from the first series conclusion that doesn't quite live up to the earlier volumes. I'm glad that I read it, and the series as a whole, and I do appreciate that Lawrence brought the whole series to a firm and at least somewhat satisfying conclusion in the promised number of volumes. But I do wish the series as a whole had been as special as the first book.

Rating: 6 out of 10

04 May, 2025 04:48AM

May 03, 2025

Russell Coker

Silly Job Titles

Many years ago I was on a programming project porting code from OS/2 1.x to NT. When I was there they suddenly decided to make a database of all people and get job titles for everyone – apparently the position description used when advertising the jobs wasn't sufficient. When I was given a clipboard with a form to write my details, I looked at what everyone else had done. It was a heap of ridiculous propaganda, with everyone trying to put in synonyms for "senior" or "skillful" and listing things that they were allegedly in charge of. There were even some people trying to create impressive titles for their managers to try and suck up.

I chose the job title “coder” as the shortest and most accurate description of what I was doing. I had to confirm that yes I really did want to put a one word title and not a paragraph of frippery. Part of my intent was to mock the ridiculously long job titles used by others but I don’t think anyone realised that.

I was reminded of that company when watching a video of a Trump cabinet meeting where everyone had to tell Trump how great he is. I think that a programmer who wants to be known as a “Principal Solutions Architect of Advanced Algorithmic Systems and Digital Innovation Strategy” (suggested by ChatGPT because I can’t write such ridiculous things) is showing a Trump level of lack of self esteem.

When job titles are discussed there’s always someone who will say “what if my title isn’t impressive enough and I don’t get a pay rise”. If a company bases salaries on how impressive job titles are and not on whether people actually do good work then it’s a very dysfunctional workplace. But dysfunctional companies aren’t uncommon so it’s something you might reasonably have to do. In the company in question I could have described my work as “lead debugger” as I ended up doing most of the debugging on that project (as on many programming projects). The title “lead debugger” accurately described a significant part of my work and it’s work that is essential to project completion.

What do you think are the worst job titles?

03 May, 2025 07:40AM by etbe

Russ Allbery

Review: Paper Soldiers

Review: Paper Soldiers, by Saleha Mohsin

Publisher: Portfolio
Copyright: 2024
ISBN: 0-593-53912-5
Format: Kindle
Pages: 250

The subtitle of Paper Soldiers is "How the Weaponization of the Dollar Changed the World Order," which may give you the impression that this book is about US use of the dollar system for political purposes such as sanctions. Do not be fooled like I was; this subtitle is, at best, deceptive. Coverage of the weaponization of the dollar is superficial and limited to a few chapters. This book is, instead, a history of the strong dollar policy told via a collection of hagiographies of US Treasury Secretaries and written with all of the skeptical cynicism of a poleaxed fawn.

There is going to be some grumbling about the state of journalism in this review.

Per the author's note, Saleha Mohsin is the Bloomberg News beat reporter for the US Department of the Treasury. That is, sadly, exactly what this book reads like: routine beat reporting. Mohsin asked current and former Treasury officials what they were thinking at various points in history and then wrote down their answers without, so far as I can tell, considering any contradictory evidence or wondering whether they were telling the truth. Paper Soldiers does contain extensive notes (those plus the index fill about forty pages), so I guess you could do the cross-checking yourself, although apparently most of the interviews for this book were "on background" and are therefore unattributed. (Is this weird? I feel like this is weird.) Mohsin adds a bit of utterly conventional and uncritical economic framing and casts the whole project in the sort of slightly breathless and dramatized prose style that infests routine news stories in the US.

I find this style of book unbelievably frustrating because it represents such a wasted opportunity. To me, the point of book-length journalism is precisely to not write in this style. When you're trying to crank out two or three articles a week covering current events, I understand why there isn't always space or time to go deep into background, skepticism, and contrary opinions. But when you expand that material into a book, surely the whole point is to take the time to do some real reporting. Dig into what people told you, see if they're lying, talk to the people who disagree with them, question the conventional assumptions, and show your work on the page so that the reader is smarter after finishing your book than they were before they started. International political economics is not a sequence of objective facts. It's a set of decisions made in pursuit of economic and political theories that are disputed and arguable, and I think you owe the reader some sense of the argument and, ideally, some defensible position on the merits that is more than a transcription of your interviews.

This is... not that.

It's a power loop that the United States still enjoys today: trust in America's dollar (and its democratic government) allows for cheap debt financing, which buys health care built on the most advanced research and development and inventions like airplanes and the iPhone. All of this is propelled by free market innovation and the superpowered strength to keep the nation safe from foreign threats. That investment boosts the nation's economic, military, and technological prowess, making its economy (and the dollar) even more attractive.

Let me be precise about my criticism. I am not saying that every contention in the above excerpt is wrong. Some of them are probably correct; more of them are at least arguable. This book is strictly about the era after Bretton Woods, so using airplanes as an example invention is a bizarre choice, but sure, whatever, I get the point. My criticism is that paragraphs like this, as written in this book, are not introductions to deeper discussions that question or defend that model of economic and political power. They are simple assertions that stand entirely unsupported. Mohsin routinely writes paragraphs like the above as if they are self-evident, and then immediately moves on to the next anecdote about Treasury dollar policy.

Take, for example, the role of the US dollar as the world's reserve currency, which roughly means that most international transactions are conducted in dollars and numerous countries and organizations around the world hold large deposits in dollars instead of in their native currency. The conventional wisdom holds that this is a great boon to the US economy, but there are also substantive critiques and questions about that conventional wisdom. You would never know that from this book; Mohsin asserts the conventional wisdom about reserve currencies without so much as a hint that anyone might disagree.

For example, one common argument, repeated several times by Mohsin, is that the US can only get away with the amount of deficit spending and cheap borrowing that it does because the dollar is the world's reserve currency. Consider two other countries whose currencies are clearly not the international reserve currency: Japan and the United Kingdom. The current US debt to GDP ratio is about 125% and the current interest rate on US 10-year bonds is about 4.2%. The current Japanese debt to GDP ratio is about 260% and the current interest rate on Japanese 10-year bonds is about 1.2%. The current UK debt to GDP ratio is 160% and the current interest rate on UK 10-year bonds is 4.5%. Are you seeing the dramatic effects of the role of the dollar as reserve currency? Me either.

Again, I am not saying that this is a decisive counter-argument. I am not an economist; I'm just some random guy on the Internet who finds macroeconomics interesting and reads a few newsletters. I know the Japanese bond market is unusual in ways I'm not accounting for. There may well be compelling arguments for why reserve currency status matters immensely for US borrowing capacity. My point is not that Mohsin is wrong; my point is that you have to convince me and she doesn't even try.

Nowhere in this book is a serious effort to view conventional wisdom with skepticism or confront it with opposing arguments. Instead, this book is full of blithe assertions that happen to support the narrative the author was fed by a bunch of former Treasury officials and does not appear to question in any way. I want books like this to increase my understanding of the world. To do that, they need to show me multiple sides of debates and teach me how to evaluate evidence, not simply reinforce a superficial conventional wisdom.

It doesn't help that whatever fact-checking process this book went through left some glaring errors. For example, on the Plaza Accord:

With their central banks working in concert, enough dollars were purchased on the open market to weaken the currency, making American goods more affordable for foreign buyers.

I don't know what happened after the Plaza Accord (I read books like this to find out!), but clearly it wasn't that. This is utter nonsense. Buying dollars on the open market would increase the value of the dollar, not weaken it; this is basic supply and demand that you learn in the first week of a college economics class. This is the type of error that makes me question all the other claims in the book that I can't easily check.

Mohsin does offer a more credible explanation of the importance of a reserve currency late in the book, although it's not clear to me that she realizes it: The widespread use of the US dollar gives US government sanctions vast international reach, allowing the US to punish and coerce its enemies through the threat of denying them access to the international financial system. Now we're getting somewhere! This is a more believable argument than a small and possibly imaginary effect on government borrowing costs. It is clear why a bellicose US government, particularly one led by advocates of a unitary executive theory that elevates the US president to a status of near-emperor, wants to turn the dollar into a weapon of international control. It's much less obvious how comfortable the rest of the world should be with that concentration of power.

This would be a fascinating topic for a journalistic non-fiction book. Some reporter should dive deep into the mechanics of sanctions and ask serious questions about the moral, practical, and diplomatic consequences of this aggressive wielding of US power. One could give it a title like Paper Soldiers that reflected the use of banks and paper currency as foot soldiers enforcing imperious dictates on the rest of the world. Alas, apart from a brief section in which the US scared other countries away from questioning the dollar, Mohsin does not tug at this thread. Maybe someone should write that book someday.

As you will have gathered by now, I think this is a bad book and I do not recommend that you read it. Its worst flaw is one that it shares with far too much mainstream US print and TV journalism: the utter credulity of the author. I have the old-fashioned belief that a journalist should be more than a transcriptionist for powerful people. They should be skeptical, they should assume public figures may be lying, they should look for ulterior motives, and they should try to bring the reader closer to some objective truths about the world, wherever they may lie.

I have no solution for this degradation of journalism. I'm not even sure that it's a change. There were always reporters eager to transcribe the voice of power into the newspaper, and if we remember the history of journalism differently, that may be because we have elevated the rare exceptions and forgotten the average. But after watching too many journalists I once respected start parroting every piece of nonsense someone tells them, from NFTs to UFOs to the existential threat of AI, I've concluded that the least I can do as a reader is to stop rewarding reporters who cannot approach powerful subjects with skepticism, suspicion, and critical research.

I failed in this case, but perhaps I can serve as a warning to others.

Rating: 3 out of 10

03 May, 2025 03:56AM

May 02, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Korg Minilogue XD

I didn't buy the Arturia Microfreak or the Behringer Model-D; I bought a Korg Minilogue XD.

Korg Minilogue XD, and Zoom R8


I wanted an all-in-one unit which meant a built-in keyboard. I was keen on analogue oscillators, partly for the sound, but mostly to ensure that most of the controls were immediately accessible. The Minilogue-XD has two analogue oscillators and an analogue filter. It also has some useful, pure digital stuff: post-effects (chorus, flanger, echo, etc.); and a third, digital oscillator.

The digital oscillator is programmable. There's an SDK, shared between the Minilogue-XD and some other Korg synths (at least the Prologue and NTS-1). There's a cottage industry of independent musicians writing and selling digital patches, e.g. STRING User Oscillator. Here's an example of a drone programmed using the SDK for the NTS-1:

Eventually I expect to have fun exploring the SDK, but for now I'm keeping it firmly away from computers (hence the Zoom R8 multitrack recorder in the above image: more on that in a future blog post). The Korg has been gathering dust whilst I was writing up, but now I hope to find some time to play.

02 May, 2025 08:04PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in April 2025

I also co-organised a Debian BSP (Bug-Squashing Party) last weekend, for which I will post a separate report later.

02 May, 2025 04:06PM by Ben Hutchings

Russ Allbery

Review: Sixteen Ways to Defend a Walled City

Review: Sixteen Ways to Defend a Walled City, by K.J. Parker

Series: Siege #1
Publisher: Orbit
Copyright: April 2019
ISBN: 0-316-27080-6
Format: Kindle
Pages: 349

Sixteen Ways to Defend a Walled City is... hm, honestly, I'm not sure what the genre of this novel is. It is a story about medieval engineering and siege weapons in a Rome-inspired secondary world that so far as I can tell is not meant to match ours. There is not a hint of magic. It's not technically a fantasy, but it's marketed like a fantasy, and it's not historical fiction nor is it attempting to be alternate history. The most common description is a fantasy of logistics, so I guess I'll go with that, as long as you understand that the fantasy here is of the non-magical sort.

K.J. Parker is a pen name for Tom Holt.

Orhan is Colonel-in-Chief of the Engineers for the Robur empire, even though he's a milkface, not a blueskin like a proper Robur. (Both of those racial terms are quite offensive.) He started out as a slave, learned a trade, joined the navy as a shipwright, and worked his way up the ranks through luck and enemy action. He's canny, practical, highly respected by his men, happy to cheat and steal to get material for his projects and wages for his people, and just wants to build literal bridges. Nice, sturdy bridges that let people get from one place to another the short way.

When this book opens, Orhan is in Classis trying to requisition some rope. He is saved from discovery of his forged paperwork by pirates burning down the warehouse that held all of the rope, and then saved from the pirates by the sorts of coincidences that seem to happen to Orhan all the time. A few subsequent discoveries about what the pirates were after, and news of another unexpected attack on the empire, make Orhan nervous enough that he takes his men to do a job as far away from the City at the heart of the empire as possible. It's just his luck to return in time to find slaughtered troops and to have to sneak his men into a City already under siege.

Sixteen Ways to Defend a Walled City is told in the first person by Orhan, with an internal justification that the reader only discovers at the end of the book. That means your enjoyment of this book is going to depend a lot on how much you like Orhan's voice. This mostly worked for me; his voice is an odd combination of chatty, self-deprecating, and brusque, and it took a bit for me to get used to it, but I came around. This book is clearly competence porn — nearly all the fun of this book is seeing what desperate plan Orhan will come up with next — so it helps that Orhan does indeed come across as competent.

The part that did not work for me was the morality. You would think from the title that would be straightforward: The City is under siege, people want to capture it and kill everyone, Orhan is on the inside, and his job is to keep them out. That would have been the morality of simplistic military fiction, but most of the appeal was in watching the problem-solving anyway.

That's how the story starts, but then Parker started dropping hints of more complexity. Orhan is a disfavored minority and the Robur who run the empire are racist assholes, even though Orhan mostly gets along with the ones who work with him closely. Orhan says a few things that make the reader wonder whether the City warrants defending, and it becomes less clear whether Orhan's loyalties were as solid as they appeared to be. Parker then offers a few moral dilemmas and has Orhan not follow them in the expected directions, making me wonder where Parker was going with the morality of this story.

And then we find out that the answer is nowhere. Parker is going nowhere. None of that setup has a payoff, and the ending is deeply unsatisfying and arguably pointless.

I am not sure this is an objective analysis. This is one of those books where I would not be surprised to see someone else praise its realism. Orhan is in some ways a more likely figure than the typical hero of a book. He likes accomplishing things, he's a cheat and a liar when that serves his purposes, he's loyal to the people he considers friends in a way that often doesn't involve consulting them about what they want, and he makes decisions mostly on vibes and stubbornness. Both his cynicism and his idealism are different types of masks; beneath both, he's an incoherent muddle. You could argue that we're all that sort of muddle, deep down, and the consistent idealists are the unrealistic (and frightening) ones, and I think Parker may be attempting exactly that argument. I know some readers like this sort of fallibly human incoherence.

But wow did I ever loathe this ending because I was not reading this book for a realistic psychological profile of an average guy. I was here for the competence porn, for the fantasy of logistics, for the experience of watching someone have a plan and get shit done. Apparently that extends to needing him to be competent at morality as well, or at least think about it as hard as he thinks about siege weapons.

One of the reasons why I am primarily a genre reader is that I don't read books for depressing psychological profiles. There are enough of those in the news. I read books to spend some time in a world better than mine, where things work out the way that they are supposed to, or at least in a way that's satisfying.

The other place where this book interfered with my vibes is that it's about a war, and a lot of Orhan's projects are finding more efficient ways to kill people. Parker takes a "war is hell" perspective, and Orhan gets deeply upset at the graphic sights of mangled human bodies that are the frequent results of his plans. I feel weird complaining about this because yes, it's good to be aware of the horrific things that we do to other people in wars, but man, I just wanted to watch some effective project management. I want to enjoy unexpected lateral thinking, appreciate the friendly psychological manipulation involved in getting a project to deliver on deadline, and watch someone solve logistical problems. Battlefields provide an endless supply of interesting challenges, but then Parker feels compelled to linger on the brutal consequences of Orhan's ideas and now I'm depressed and sickened rather than enjoying myself.

I really wanted to like this book, and for a lot of the book I did, but that ending was a bottomless pit that sucked away all my enjoyment and retroactively made the rest of the book feel worse. I so wanted Parker to be going somewhere clever and surprising, and the disappointment when none of that happened was intense. This is probably an excessively negative reaction, and I will not be surprised when other people get along with this book better than I did, but not only will I not be recommending it, I'm now rather dubious about reading any more Parker.

Followed by How to Rule an Empire and Get Away With It.

Rating: 5 out of 10

02 May, 2025 04:30AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Spending my Golden Week in boredom.

Spending my Golden Week in boredom. That's nice.

02 May, 2025 01:53AM by Junichi Uekawa

May 01, 2025

Ian Jackson

Free Software, internal politics, and governance

There is a thread of opinion in some Free Software communities, that we shouldn’t be doing “politics”, and instead should just focus on technology.

But that’s impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.

Today I’m talking about small-p politics

In this article I’m using “politics” in the very wide sense: us humans managing our disagreements with each other.

I’m not going to talk about culture wars, woke, racism, trans rights, and so on. I am not going to talk about how Free Software has always had explicitly political goals; or how it’s impossible to be neutral because choosing not to take a stand is itself to take a stand.

Those issues are all important and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today.

Today I’m talking in more general terms about politics, power, and governance.

Many people working together always entails politics

Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors.

Humans don’t always agree about everything. This is natural. Indeed, it’s healthy: to write the best code, we need a wide range of knowledge and experience.

When we can’t come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone.

Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed.

This is all politics.

Consensus is great but always requiring it is harmful

Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus.

When consensus can’t be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation.

If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win.

This is where governance comes in.

Governance is like backups: we need to practice it

Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don’t see eye to eye.

In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system’s legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least not obstruct) whatever the decision is, and hopefully live with it and stay around.

That means we need to practice our governance processes. We can’t just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we’ll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.

So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that.

First-line decisionmaking bodies should be making decisions frequently. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must happen, be seen as legitimate, and their outcomes must be implemented in full.

Governance should usually be routine and boring

When governance is working well it’s quite boring.

People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn’t reached, the committee, or elected leader, makes a decision.

Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons.

Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome.

Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.

Governance means deciding, not just mediating

By making decisions I mean exercising their authority to rule on an actual disagreement: one that wasn’t resolved by debate or negotiation. Governance processes by definition involve deciding, not just mediating. It’s not governance if we’re advising or cajoling: in that case, we’re back to demanding consensus. Governance is necessary precisely when consensus is not achieved.

If the governance systems are to mean anything, they must be able to (over)rule; that means (over)ruling must be normal and accepted.

Otherwise, when we need to overrule, we'll find that we can't, because we lack the collective practice.

To be legitimate (and seen as legitimate) decisions must usually be made based on the merits, not on participants’ status, and not only on process questions.

On the autonomy of the programmer

Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable.

Ultimately, it means sometimes overruling someone’s technical decision. As programmers and maintainers we naturally see how this erodes our autonomy.

But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer's bad decisions can cause problems for many of the rest of us. We ask, exasperated, “why won't they just do the right thing?”. This is futile. People have never “just”ed and they're not going to start “just”ing now. So often the boot is on the other foot.

More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We ought to be accountable. (And not just to capitalist bosses!)

Governance mechanisms are the answer.

(No, forking anything but the smallest project is very rarely a practical answer.)

Mitigate the consequences of decisions — retain flexibility

In software, it is often possible to soften the bad social effects of a controversial decision, by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements.

If we can convert the question from “how will the software always behave” into merely “what should the default be”, we can often save ourselves a lot of drama.

So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them.

There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling with software — even crusty or buggy software — is a lot more fun than having unpleasant arguments.

But don’t do decisionmaking like a corporation

Many programmers’ experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example.

They typically don’t have as much trouble actually making decisions, but the actual decisions are often terrible, and not just because corporations’ goals are often bad.

You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable — typically the effects of their tenure are only properly felt well after they’ve left to mess up somewhere else.

We should select our leaders more wisely, and base decisions on substance.

If you won’t do politics, politics will do you

As a participant in a project, or a society, you can of course opt out of getting involved in politics.

You can opt out of learning how to do politics generally, and opt out of understanding your project’s governance structures. You can opt out of making judgements about disputed questions, and tell yourself “there’s merit on both sides”.

You can hate politicians indiscriminately, and criticise anyone you see doing politics.

If you do this, then you are abdicating your decisionmaking authority, to those who are the most effective manipulators, or the most committed to getting their way. You’re tacitly supporting the existing power bases. You’re ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted.

If enough people won’t do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.

If you don’t see the politics, it’s still happening

If your governance systems don’t work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres.

Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal.

So if you have a reasonable sized community, but don’t see your formal governance systems working — people debating things, votes, leadership making explicit decisions — that doesn’t mean everything is fine, and all the decisions are great, and there’s no politics happening.

It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won’t put up with that will leave.

The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision, will even consider using such a process.

Conclusions

  • Respect and support the people who are trying to fix things with politics.

  • Be informed, and, where appropriate, involved.

  • If you are in a position of authority, be willing to exercise that authority. Do more than just mediating to try to get consensus.




01 May, 2025 10:15PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 2: Speech to Text and back

Having set up an ATOM Echo Voice Satellite and hooked it up to Home Assistant, we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in practice: everything can exist as separate modules that just communicate over network sockets, and there are a whole bunch of Python implementations of the necessary pieces.
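
To make the "JSONL + PCM" shape of that concrete, here is a minimal sketch of the framing idea in Python. The field names and event types here are illustrative assumptions rather than the actual Wyoming specification, and the port simply matches the whisper.cpp example further down; the point is the pattern of a JSON header line followed by raw audio bytes, not a working Wyoming client.

# Minimal sketch of a "JSON header line + raw PCM payload" framing, the
# general shape of Wyoming-style protocols. Field names ("type",
# "payload_length") and event types are illustrative assumptions, not the
# actual Wyoming specification.
import json
import socket


def send_event(sock: socket.socket, event_type: str, payload: bytes = b"") -> None:
    """Send one event: a single JSON line, then the raw payload bytes."""
    header = {"type": event_type, "payload_length": len(payload)}
    sock.sendall(json.dumps(header).encode("utf-8") + b"\n")
    if payload:
        sock.sendall(payload)


if __name__ == "__main__":
    # Stream one second of 16 kHz, 16-bit mono silence to a hypothetical
    # server on localhost:10030 (the port used for whisper.cpp below).
    with socket.create_connection(("localhost", 10030)) as sock:
        send_event(sock, "audio-start")
        send_event(sock, "audio-chunk", b"\x00\x00" * 16000)
        send_event(sock, "audio-stop")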

The first bit I looked at was speech to text; how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self contained speech recognition tool called whisper.cpp, which is a low dependency implementation of inference using OpenAI’s Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space; the repo contains a forked copy of whisper.cpp with enough differences that I couldn’t trivially make it work with regular whisper.cpp. That means missing out on new development, and potential improvements (the fork appears to be at v1.5.4, upstream is up to v1.7.5 at the time of writing). However it was possible to get up and running easily enough.

[I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]

I stated previously that I wanted all of this to install as cleanly on Debian stable as possible. Given most of this isn't packaged, that's meant I've packaged things up as I go. I'm not at the stage where anything is suitable for upload to Debian proper, but equally I've tried to make them a reasonable starting point. No pre-built binaries are available, just Salsa git repos. https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you're building for bookworm, but it doesn't need to be rebuilt.

You need a Whisper model that's been converted to ggml format; they can be found on Hugging Face. I've ended up using the base.en model. In random testing I found small.en gave more accurate results but took a little longer; it doesn't seem to make much of a difference for voice control as opposed to plain transcription.

[One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don’t know what the right answer is here, and whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]

I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:

[Unit]
Description=Wyoming whisper.cpp server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=wyoming-whisper-cpp --uri tcp://localhost:10030 --model base.en

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

It needs the Wyoming Protocol integration enabled in Home Assistant; you can “Add Entry” and enter localhost + 10030 for host + port and it’ll get added. Then in the Voice Assistant configuration there’ll be a whisper.cpp option available.

Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm. I’ll come back to that in a future post. For now I took the easy option and used the built in “Google Translate” option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn’t entirely obvious:

media_source:

With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as “Hey Jarvis, turn on the study light” work out of the box. I haven’t yet got into defining my own phrases, partly because I know some of the things I want (“What time is it?”) are already added in later Home Assistant versions than the one I’m running.

Overall I found this initially complicated to setup given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I’ve been pretty impressed with the work that’s gone into it all. Next step, running a voice satellite on a Debian box.

01 May, 2025 06:05PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities April 2025

Another short status update of what happened on my side last month. Notable might be the Cell Broadcast support for Qualcomm SoCs, the rest is smaller fixes and QoL improvements.

phosh

  • Fix splash spinner icon regression with newer GTK >= 3.24.49 (MR)
  • Update adaptive app list (MR)
  • Fix missing icon when editing folders (MR)
  • Use StartupWMClass for better app-id matching (MR)
  • Fix failing CI tests, fix inverted logic, and add tests (MR)
  • Fix a sporadic test failure (MR)
  • Add support for "do not disturb" by adding a status page to feedback quick settings (MR)
  • monitor: Don't track make/model (MR)
  • Wi-Fi status page: Correctly show tick mark with multiple access points (MR)
  • Avoid broken icon in polkit prompts (MR)
  • Lockscreen auth cleanups (MR)
  • Sync mobile data toggle to sim lock too (MR)
  • Don't let the OSD display cover whole output with a transparent window (MR)

phoc

  • Allow to specify listening socket (MR)
  • Continue to catch up with wlroots git (MR)
  • Disconnect input-method signals on destroy (MR)
  • Disconnect gtk-shell and output signals on destroy (MR)
  • Don't init decorations too early (MR)
  • Allow to disable XWayland on the command line (MR)

phosh-mobile-settings

  • Allow to set overview wallpaper (MR)
  • Ask for confirmation before resetting favorites (MR)
  • Add separate volume controls for notifications, multimedia and alerts (MR)
  • Tweak warnings (MR)

pfs

  • Fix build on a single CPU (MR)

feedbackd

  • Move to fdo (MR)
  • Allow to set media-role (MR)
  • Doc updates (MR)
  • Sort LEDs by "usefulness" (MR)
  • Ensure multicolor LEDs have multiple components (MR)
  • Add example wireplumber config (MR)

feedbackd-device-themes

  • Release 0.8.2
  • Move to fdo (MR)
  • Override notification-missed-generic on fajita (MR)
  • Run ci-fairy here too (MR)
  • fajita: Add notification-missed-generic (MR)

gmobile

  • Build Vala support (vapi files) too (MR)
  • Add support for timers that can take the system out of suspend (MR)

Debian

git-buildpackage

  • Don't suppress dch errors (MR)
  • Release 0.9.38

wlroots

  • Get text-input-v3 a bit more in line with other protocols (MR)

ModemManager

  • Cell broadcast support for QMI modems (MR)

Libqmi

  • QMI channel setting (MR)
  • Switch to gi-docgen (MR)
  • loc: Fix since annotations (MR)

gnome-clocks

  • Add wakeup timer to take device out of suspend (MR)

gnome-calls

  • CallBox: Switch between text entry (for SIP) and dialpad (MR)

qmi-parse-kernel-dump

  • Allow to filter on message types and some other small improvements (MR)

xwayland-run

  • Support phoc (MR)

osmo-cbc

  • Small error handling improvements to osmo-cbc (MR)

phosh-nightly

  • Handle feedbackd fdo move (MR)

Blog posts

Bugs

  • Resuming of video streams fails with newer gstreamer (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 May, 2025 12:30PM

Paul Wise

FLOSS Activities April 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Patches: notmuch-mutt patchset

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

01 May, 2025 04:02AM

Russ Allbery

Review: Beyond Pain

Review: Beyond Pain, by Kit Rocha

Series: Beyond #3
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 328

Beyond Pain is a science fiction dystopian erotic romance novel and a direct sequel to Beyond Control. Following the romance series convention, each book features new protagonists who were supporting characters in the previous book. You could probably start here if you wanted, but there are significant spoilers here for earlier books in the series. I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Six has had a brutally hard life. She was rescued from an awful situation in a previous book and is now lurking around the edges of the Sector Four gang, oddly fascinated (as are we all) with their constant sexuality and trying to decide if she wants to, and can, be part of their world. Bren is one of the few people she lets get close: a huge bruiser who likes cage fights and pain but treats Six with a protective, careful respect that she finds comforting. This book is the story of Six and Bren getting to the bottom of each other's psychological hangups while the O'Kanes start taking over Six's former sector.

Yes, as threatened, I read another entry in the dystopian erotica series because I keep wondering how these people will fuck their way into a revolution. This is not happening very quickly, but it seems obvious that is the direction the series is going.

It's been a while since I've reviewed one of these, so here's another variation of the massive disclaimer: I think erotica is harder to review than any other genre because what people like is so intensely personal and individual. This is not even an attempt at an erotica review. I'm both wholly unqualified and also less interested in that part of the book, which should lead you to question my reading choices since that's a good half of the book.

Rather, I'm reading these somewhat for the plot and mostly for the vibes. This is not the most competent collection of individuals, and to the extent that they are, it's mostly because the men (who are, as a rule, charismatic but rather dim) are willing to listen to the women. What they are good at is communication, or rather, they're good about banging their heads (and other parts) against communication barriers until they figure out a way around them. Part of this is an obsession with consent that goes quite a bit deeper than the normal simplistic treatment. When you spend this much time trying to understand what other people want, you have to spend a lot of time communicating about sex, and in these books that means spending a lot of time communicating about everything else as well.

They are also obsessively loyal and understand the merits both of collective action and of making space for people to do the things that they are the best at, while still insisting that people contribute when they can. On the surface, the O'Kanes are a dictatorship, but they're run more like a high-functioning collaboration. Dallas leads because Dallas is good at playing the role of leader (and listening to Lex), which is refreshingly contrary to how things work in the real world right now.

I want to be clear that not only is this erotica, this is not the sort of erotica where there's a stand-alone plot that is periodically interrupted by vaguely-motivated sex scenes that you can skim past. These people use sex to communicate, and therefore most of the important exchanges in the book are in the middle of a sex scene. This is going to make this novel, and this series, very much not to the taste of a lot of people, and I cannot be emphatic enough about that warning.

But, also, this is such a fascinating inversion. It's common in media for the surface plot of the story to be full of sexual tension, sometimes to the extent that the story is just a metaphor for the sex that the characters want to have. This is the exact opposite of that: The sex is a metaphor for everything else that's going on in the story. These people quite literally fuck their way out of their communication problems, and not in an obvious or cringy way. It's weirdly fascinating?

It's also possible that my reaction to this series is so unusual as to not be shared by a single other reader.

Anyway, the setup in this story is that Six has major trust issues and Bren is slowly and carefully trying to win her trust. It's a classic hurt/comfort setup, and if that had played out in the way that this story often does, Bren would have taken the role of the gentle hero and Six the role of the person he rescued. That is not at all where this story goes. Six doesn't need comfort; Six needs self-confidence and the ability to demand what she wants, and although the way Beyond Pain gets her there is a little ham-handed, it mostly worked for me. As with Beyond Shame, I felt like the moral of the story is that the O'Kane men are just bright enough to stop doing stupid things at the last possible moment. I think Beyond Pain worked a bit better than the previous book because Bren is not quite as dim as Dallas, so the reader doesn't have to suffer through quite as many stupid decisions.

The erotica continues to mostly (although not entirely) follow traditional gender roles, with dangerous men and women who like attention. Presumably most people are reading these books for the sex, which I am wholly unqualified to review. For whatever it's worth, the physical descriptions are too mechanical for me, too obsessed with the precise structural assemblage of parts in novel configurations. I am not recommending (or disrecommending) these books, for a whole host of reasons. But I think the authors deserve to be rewarded for understanding that sex can be communication and that good communication about difficult topics is inherently interesting in a way that (at least for me) transcends the erotica.

I bet I'm going to pick up another one of these about a year from now because I'm still thinking about these people and am still curious about how they are going to succeed.

Followed by Beyond Temptation, an interstitial novella. The next novel is Beyond Jealousy.

Rating: 6 out of 10

01 May, 2025 03:46AM

April 30, 2025

Russell Coker

Simon Josefsson

Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:

Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel. It turned out to be a 42% bit-by-bit identical similarity. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It “only” had to orchestrate building up to around 500 packages for each distribution and per architecture.

Differential reproducible rebuilds don't give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because they make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project, and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower. Primarily the differences were caused by using different build inputs.

Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting, making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditative effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me give a recap of the idempotent rebuilds idea below, because it motivates my work to build all of Debian from a GitLab pipeline.

One purpose for my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust any binary you receive, you could disassemble the bits and audit the assembler instructions for the CPU you will execute it on. Doing that on an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build.
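
As a minimal illustration of what "bit-by-bit identical" means in practice, the check really is just comparing the bytes (or a strong hash) of the package you received with the one you rebuilt. The file names below are hypothetical examples.

# Sketch of a bit-by-bit identity check: hash the binary you received and
# the one you rebuilt yourself, then compare. File names are hypothetical.
import hashlib


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


received = sha256_of("hello_2.10-3_amd64.deb")           # binary from the archive
rebuilt = sha256_of("rebuild/hello_2.10-3_amd64.deb")    # binary built locally
print("identical" if received == rebuilt else "differs")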

My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.

I realized that these rebuilds would not be sufficient for me: they don't solve the problem of how to trust the toolchain. Let's assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries. Which appears to be within reach. To have trusted binaries we would “only” have to audit the source code for the latest version of the packages AND audit the toolchain used. There is no escaping from auditing all the source code — that's what I think we all would prefer to focus on, to be able to improve upstream source code.

The trouble is auditing the toolchain. With the Reproduce.debian.net approach, that is a recursive problem back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released. Or the improvements are no longer applicable because the projects have evolved since the earlier version.

See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I’m interested in. I want to be able to build the binaries that I use from source using a toolchain that I can also build from source. And preferably that all of this is using latest version of all packages, so that I can contribute and send patches for them, to improve matters.

The toolchain that Reproduce.Debian.Net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don’t see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, though, but I think there are simpler ways to achieve the same goal.

My approach to reach trusted binaries on my laptop appears to be a three-step effort:

  • Encourage an idempotently rebuildable Debian archive, i.e., a Debian archive that can be 100% bit-by-bit identically rebuilt using Debian itself.
  • Construct a smaller number of binary *.deb packages based on Guix binaries that when used as build inputs (potentially iteratively) leads to 100% bit-by-bit identical packages as in step 1.
  • Encourage a freedom respecting distribution, similar to Trisquel, from this idempotently rebuildable Debian.

How to go about achieving this? Today's Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I always wondered why none of the Debian derivatives have adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?

If you want to contribute to some GitHub or GitLab project, you click the 'Fork' button and get a CI/CD pipeline running which rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up until its artifact release is produced and tested. At least in theory. Many projects are behind on this, but it seems like this is a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we've seen with many software supply-chain security incidents over the past years, where the "magic" is involved is a good place to introduce malicious code.

To allow me to continue with my experiment, I thought the simplest way forward was to setup a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.

Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one "build" job definition and one "deploy" job definition. The pipeline can run on GitLab.com Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually.

The build job is simple. From within an official Debian container image, it builds packages using dpkg-buildpackage, essentially by invoking the following commands.

sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../$PACKAGE_$VERSION.build
cd $DDB_BUILDDIR
su build -c "dpkg-buildpackage"
cd ..
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/

The deploy job is also simple. It commit artifacts to a Git project using Git-LFS to handle large objects, essentially something like this:

if ! grep -q '^pool/**' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi

That’s it! The actual implementation is a bit longer, but the major difference is for log and error handling.

You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.

There was one complication related to artifact size. GitLab.com job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I'm using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/objects/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.
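
As a sketch of that layout (assuming the usual Git-LFS convention of objects/<first two hex digits>/<next two>/<full SHA256>), computing the object key for an artifact before uploading could look like the following. The bucket name and artifact path are placeholders, and the actual S3 upload is left out.

# Sketch of the SHA256-addressed layout described above, following the
# objects/<aa>/<bb>/<full-hash> convention that Git LFS uses locally.
# Bucket name and artifact path are placeholders; the upload itself
# (boto3, rclone, etc.) is omitted.
import hashlib
from pathlib import Path


def lfs_object_key(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    oid = h.hexdigest()
    return f"{oid[:2]}/{oid[2:4]}/{oid}"


artifact = Path("out/hello_2.10-3_amd64.deb")   # hypothetical build artifact
print(f"s3://example-debdistbuild-artifacts/{lfs_object_key(artifact)}")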

To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I'm using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth, but alas the Debian snapshot team have concerns about me publishing the list of (SHA1 hash) filenames publicly and I haven't bothered to set up non-public access.

Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds are all on my own hosted runners. For amd64 and arm64 my own runners are only used for large packages where the GitLab.com shared runners run into the 3-hour time limit.

What’s next in this venture? Some ideas include:

  • Optimize the stage-N build process by identifying the transitive closure of build dependencies from some initial set of packages.
  • Create a build orchestrator that launches pipelines based on the previous list of packages, as necessary to fill the archive with the needed packages. Currently I'm using a basic /bin/sh for loop around curl to trigger GitLab CI/CD pipelines with names derived from https://popcon.debian.org/ (a rough sketch of that loop follows after this list).
  • Create and publish a dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
  • Produce diffoscope-style differences of built packages, both stage0 against official binaries and between stage0 and stage1.
  • Create the stage-1 build containers and stage-1 archive.
  • Review build failures. On amd64 and arm64 the list is small (below 10 out of ~5000 builds), but on riscv64 there is some icache-related problem that affects Java JVM that triggers build failures.
  • Provide GitLab pipeline based builds of the Debian docker container images, cloud-images, debian-live CD and debian-installer ISO’s.
  • Provide integration with Sigstore and Sigsum for signing of Debian binaries with transparency-safe properties.
  • Implement a simple replacement for dpkg and apt using /bin/sh for use during bootstrapping when neither packaging tools are available.
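
The pipeline-triggering loop mentioned in the orchestrator item above, written out as a rough Python equivalent of the /bin/sh-around-curl approach. The project path, trigger token, and the PACKAGE/VERSION variable names are placeholders; it uses GitLab's pipeline trigger API, and the hard-coded package list stands in for whatever popcon-derived list the orchestrator would use.

# Rough sketch of triggering one GitLab CI/CD pipeline per package via the
# pipeline trigger API. Project path, trigger token, and the PACKAGE/VERSION
# variable names are placeholders for illustration.
import urllib.parse
import urllib.request

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = urllib.parse.quote("example/debdistbuild", safe="")  # placeholder project
TRIGGER_TOKEN = "glptt-example"                                # placeholder token

# Example package list; in practice this would come from popcon or from the
# transitive build-dependency closure mentioned above.
packages = [("bash", "5.2.37-1"), ("coreutils", "9.5-1")]

for name, version in packages:
    form = urllib.parse.urlencode({
        "token": TRIGGER_TOKEN,
        "ref": "main",
        "variables[PACKAGE]": name,
        "variables[VERSION]": version,
    }).encode()
    req = urllib.request.Request(f"{GITLAB_API}/projects/{PROJECT}/trigger/pipeline", data=form)
    with urllib.request.urlopen(req) as resp:
        print(name, resp.status)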

What do you think?

30 April, 2025 09:25AM by simon

April 27, 2025

hackergotchi for Marco d'Itri

Marco d'Itri

On the use of SaaS in systems engineering

“We want to use a hyperscaler cloud because it is cheaper to delegate operating a scalable and redundant database to a hyperscaler” is something that can be debated from business and technical points of view.

“We want to use a hyperscaler cloud because our developers do not want to operate a scalable and redundant database” just means that you need to hire competent developers and/or system administrators.

We must stop normalizing the idea that the people whose only skill is gluing together a few dozen AWS services can continue calling themselves developers. We should also find a sufficiently demeaning name to refer to them...

27 April, 2025 02:55PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Random IS-IS interop notes

Some random stuff about running IS-IS between FRR (on Linux) and IOS-XE (Cisco 3650 in my case):

Cisco uses the newer “key chain” idea, but FRR doesn't for IS-IS yet (it's supported for OSPF, though?), so the right way to interop seems to be:

# Cisco
key chain my-key
 key 100
   key-string password123

interface Vlan101
  ...
  isis authentication key-chain my-key

router isis
  ...
  authentication mode md5 level-2
  authentication key-chain my-key level-2

# FRR
interface vlan101
  ...
  isis password md5 password123

router isis null
  ...
  area-password md5 password123 authenticate snp validate

Simple enough stuff, except for “authenticate snp validate”; without it (or at least “authenticate snp send-only”), you'll get messages on the Cisco saying that PSNP messages failed auth.

Second, you can run “ipv6 unnumbered” (i.e., directly over link-local, no link nets needed), but “ip unnumbered” seems to... crash the Cisco? Couldn't get up the neighbor relation even with a point-to-point setting, at least. Perhaps safest to stay with a /31 link net. :-)

Also, FRR sometimes originates a default route, but I think this is a bug. “default-information originate ipv4 level-2 always” (and similar for ipv6) seems prudent on the upstream router.

27 April, 2025 09:51AM

April 26, 2025

John Goerzen

Memoirs of the Early Internet

The Internet is an amazing place, and occasionally you can find things on the web that have somehow lingered online for decades longer than you might expect.

Today I’ll take you on a tour of some parts of the early Internet.

The Internet, of course, is a “network of networks” and part of its early (and continuing) promise was to provide a common protocol that all sorts of networks can use to interoperate with each other. In the early days, UUCP was one of the main ways universities linked with each other, and eventually UUCP and the Internet sort of merged (but that’s a long story).

Let’s start with some Usenet maps, which were an early way to document the UUCP modem links between universities. Start with this PDF. The first page is a Usenet map (which at the time mostly flowed over UUCP) from April of 1981. Notice that ucbvax, a VAX system at Berkeley, was central to the map.

ucbvax continued to be a central node for UUCP for more than a decade; on page 5 of that PDF, you’ll see that it asks for a “Path from a major node (eg, ucbvax, devcax, harpo, duke)”. Pre-Internet email addresses used a path; eg, mark@ucbvax was duke!decvax!ucbvax!mark to someone. You had to specify the route from your system to the recipient on your email To line. If you gave out your email address on a business card, you would start it from a major node like ucbvax, and the assumption was that everyone would know how to get from their system to the major node.

On August 19, 1994, ucbvax was finally turned off. TCP/IP had driven UUCP into more obscurity; by then, it was mostly used by people without a dedicated Internet connection to get on the Internet, rather than as an entire communication network of its own. A few days later, Cliff Frost posted a memoir of ucbvax, an obscure bit of Internet lore that is fun to read.

UUCP was ad-hoc, and by 1984 there was an effort to make a machine-parsable map to help automate routing on UUCP. This was called the pathalias project, and there was a paper about it. The Linux network administration guide even includes a section on pathalias.

Because UUCP mainly flowed over phone lines, long distance fees made it quite expensive. In 1985, the Stargate Project was formed, with the idea of distributing Usenet by satellite. The satellite link was short-lived, but the effort eventually morphed into UUNET. It was initially a non-profit, but eventually became a commercial backbone provider, and later an ISP. Over a long series of acquisitions, UUNET is now part of Verizon. An article in ;login: is another description of this history.

IAPS has an Internet in 1990 article, which includes both pathalias data and an interesting map of domain names to UUCP paths.

As I was pondering what interesting things a person could do with NNCPNET Internet email, I stumbled across a page on getting FTP files via e-mail. Yes, that used to be a thing! I remember ftpmail@decwrl.dec.com.

It turns out that page is from a copy of EFF’s (Extended) Guide to the Internet from 1994. Wow, what a treasure! It has entries such as A Slice of Life in my Virtual Community, libraries with telnet access, Gopher, A Statement of Principle by Bruce Sterling, and I could go on. You can also get it as a PDF from Internet Archive.

UUCP is still included with modern Linux and BSD distributions. It was part of how I experienced the PC and Internet revolution in rural America. It lacks modern security, but NNCP is to UUCP what ssh is to telnet.

26 April, 2025 06:22PM by John Goerzen

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.4.2-1 on CRAN: Another Small Upstream Fix


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1245 other packages on CRAN, downloaded 39.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 628 times according to Google Scholar.

A new release arrived at CRAN yesterday with a fix for expmat() and adjustments for clang++-20. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.4.2-1 (2025-04-25)

  • Upgraded to Armadillo release 14.4.2 (Filtered Espresso)

    • Fix for expmat()

    • Workaround for bugs in clang 20 compiler

    • Micro-cleanup in one test file

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

26 April, 2025 01:33PM

John Goerzen

NNCPNET Can Optionally Exchange Internet Email

A few days ago, I announced NNCPNET, the email network based atop NNCP. NNCPNET lets anyone run a real mail server on a network that supports all sorts of topologies for transport, from Internet to USB drives. And verification is done at the NNCP protocol level, so a whole host of Internet email bolt-ons (SPF, DMARC, DKIM, etc.) are unnecessary.

Shortly after announcing NNCPNET, I added an Internet bridge. This lets you get your own DOMAIN.nncpnet.org domain, and from there route email to and from the Internet using a gateway node. Simple, effective, and a way to get real email to and from your laptop or Raspberry Pi without having to have a static IP, SPF, DMARC, DKIM, etc.

It’s a volunteer-run, free, service. Give it a try!

26 April, 2025 01:01AM by John Goerzen

April 25, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

In defence of JD Vance, death of Pope Francis

When the sad news appeared about the death of Pope Francis on Easter Monday, people were quick to politicize the tragedy with references to his last official duty, a meeting with US Vice President JD Vance.

What about that dossier on abuse that changed hands on Good Friday? It fits an awkward pattern of dossiers and deaths.

Obituaries and commentaries have appeared far and wide. Many of them begin by praising the late Pope's work on climate and the poor and then they go on to politely but firmly express concern that he could have done more for victims of abuse. We need to look at the other Francis.

Walter Francis Pocock

Walter Francis Pocock was born on 28 February 1938. He went to the former St Patrick's college in East Melbourne and then became a Catholic school teacher. He rose through the ranks of schoolteachers to become a headmaster and then he went on to work in administration at the Catholic Education Office (CEO), part of the Archdiocese of Melbourne.

On 28 February 1998, the birthday of Walter Francis Pocock, Pope Francis became Archbishop of Buenos Aires. On 28 February 2013, Pope Benedict XVI resigned, creating the opportunity for Pope Francis to become Pope.

The brother of Walter Francis Pocock is my father, who attended Xavier College, one of Australia's most prominent Jesuit schools. Pope Francis was the first Jesuit pope. Dad and his brother both worked at the Catholic Education Office.

My uncle died in 2011. One of the most visible parts of his legacy is the work of his daughters. Bernice became a nurse back in the 1990s. She has worked her way up to the top of her profession, becoming the Health Complaints Commissioner for the State of Victoria.

In that role, she personally signs each of the prohibition orders.

Australia's Royal Commission into Institutional Abuse published a huge archive of internal documents from the Catholic Church. Looking through those documents, we can see that the church removed fewer abusers in fifty years than Bernice has removed in just two years.

The connections don't stop there of course. Pope Francis had a special relationship with the State of Victoria, having invited Cardinal George Pell, who started his career in Ballarat, to be the treasurer of the Vatican.

Let's have a fresh look at the history and how it intersects with deaths of both Cardinal Pell and Pope Francis.

17 April 2011, the day that Adrian von Bidder died, was both the day that Carla and I married and Palm Sunday, the beginning of Holy Week.

In December 2018, Cardinal Pell was convicted of abuse, although he was subsequently acquitted on appeal. On one of the most important Christian holidays, the night before Christmas, the Debianists went nuts spreading rumors about my family and abuse. Some of them are still nuts today. Sadly, more of these people have died.

In my last call with Dad, he was clearly disturbed about the Pell situation and the proximity of our family to these matters.

On 17 April 2019, which was Holy Wednesday, the current Archbishop of Melbourne put out a public statement about the grief many people felt throughout the Archdiocese (original video). This was particularly profound for those who worked for the church. People like my father and his brother would see Cardinal Pell in the office from time to time when he was Archbishop of Melbourne. It would have been even more inconvenient for my father as this statement came out on our wedding anniversary.

Dad died on 20 April 2019, which was Easter Saturday.

Pope Benedict died on 31 December 2022, once again, in the Christmas season. Cardinal Pell appeared in news reports and that prompted me to have a fresh look at the evidence and see if I had missed anything.

By pure coincidence, I found myself in the office of the Carabinieri on 10 January 2023, as a witness and survivor talking about the blackmail in Debianism and the similarities to what I observed in the wider context of institutional abuse. I handed over a dossier of approximately 90 pages. Cardinal Pell's name was mentioned somewhere on the first page. I didn't know that the Cardinal was having surgery in the same hour. He died later that evening.

I created a further dossier, a similar size, containing emails from debian-private suggesting that some of the Debian suicides and accidental deaths may have been avoidable. That dossier was sent to the Cambridgeshire coroner on 9 September 2023, the first day of DebConf23. In the middle of the conference, Abraham Raji died in an avoidable accident. I had mentioned Cardinal Pell's name in the email to the coroner and it appears again inside the dossier.

This year, in March 2025, I published a blog pointing out the connection between Adrian von Bidder's death and Palm Sunday. The anniversary of the death was Holy Thursday, the day that Judas betrayed Jesus at the last supper. I previously wrote a blog about that phenomenon too.

At the same time as publishing the blog about Palm Sunday, I had been preparing a new dossier about the intersection of the abuse crisis with the health system. It dealt with the cases I'm familiar with: mental health patients, the military, a priest using my name, and the remarkable similarities that have emerged in modern-day cults like Debianism.

Pope Francis is featured on the final page, along with the Swiss Guard. The name of Cardinal Pell was repeated forty seven times in this dossier. I included a couple of pictures of the late Cardinal Pell, in one of them he is holding a galero and in another he is wearing his zucchetto and staring at Shadow Man.

Cardinal George Pell, Shadow Man, zucchetto, Fedora

 

The draft was handed over to an expert in the French health system on Good Friday, the day that Jesus was crucified.

Three dossiers, three deaths.

Like my father, I also graduated from the Jesuit school Xavier College.

Some of the dossier is confidential so I'm only going to share the pages with the conclusion. Even that had to be redacted. Nonetheless, it is telling that I used a quote from Ian Lawther about his Fair dinkum letter to the Pope. The letter is published on the web site of the Parliament of Victoria. The letter was dated 24 April 2008 and it appears to have been sent two days later, 26 April 2008. 26 April 2025 is the funeral of Pope Francis.

Ian Lawther's letter is a chilling indictment of the culture of many institutions; this is not only about the crisis in the Catholic Church. Reading this paragraph:

You were not the one who had to take your son to hospital at 2.30 in the morning, because he had broken three bones in his hand, in a fit of anger and guilt, because his mind had been so poisoned, that the priest was able to convince him everything was his fault.

I couldn't help thinking about the way people are being brainwashed in groups like Debian and GNOME. For example, look at how Dr Norbert Preining was brainwashed to write a self-deprecating public confession. Look at the discussions around GNOME this week, for example, Tobias Bernard writes nobody involved is against Codes of Conduct, in other words, victims can't see that amateur-hour Codes of Conduct are nothing more than a tool for denouncing, dividing and silencing people. When people put these amateur-hour Codes of Conduct on a pedestal, the victims end up tying themselves in knots. There have been a number of unexplained suicides in the open source software ecosystem and I can't help wondering if some of those people had been shamed with Code of Conduct gaslighting. Look at the horrible message they sent to my last Outreachy intern; it took three months before she told me what they were doing to her.

Read the final pages of the dossier that changed hands on Good Friday.

Can we say JD Vance is off the hook?

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Cardinal George Pell, Abraham Raji, Pope Francis

Fact check

Did the Debianists graffiti Dr Jacob Appelbaum's home when they were falsifying abuse accusations against him?

Compare the graffiti to the signature on real abuse cases: the prohibition orders.

When Debianists and social media addicts make stuff up, it makes it harder to believe real victims.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

 

 

Jacob Appelbaum, graffiti, rapist

25 April, 2025 09:00PM

Simon Josefsson

GitLab Runner with Rootless Privilege-less Capability-less Podman on riscv64

I host my own GitLab CI/CD runners, and find that having coverage on the riscv64 CPU architecture is useful for testing things. The HiFive Premier P550 seems to be a common hardware choice. The P550 is possible to purchase online. You also need a (mini-)ATX chassis, power supply (~500W is more than sufficient), PCI-to-M2 converter and an NVMe storage device. Total cost per machine was around $8k/€8k for me. Assembly was simple: bolt everything, connect ATX power, connect cables for the front panel, USB and audio. Be sure to toggle the physical power switch on the P550 before you close the box. The front-panel power button will start your machine. There is a P550 user manual available.

Below I will guide you through installing the GitLab Runner on the pre-installed Ubuntu 24.04 that ships with the P550, and configuring it to use Podman in rootless mode, without the --privileged flag and without any additional capabilities like SYS_ADMIN. Presumably you want to migrate to some other OS instead; hey Trisquel 13 riscv64 I’m waiting for you! I wouldn’t recommend using this machine for anything sensitive: there is an awful lot of non-free and/or vendor-specific software installed, and the hardware itself is young. I am not aware of any riscv64 hardware that can run a libre OS; all of them appear to require non-free blobs and usually a non-mainline kernel.

  • Login on console using username ‘ubuntu‘ and password ‘ubuntu‘. You will be asked to change the password, so do that.
  • Start a terminal, gain root with sudo -i and change the hostname:
    echo jas-p550-01 > /etc/hostname
  • Connect ethernet and run: apt-get update && apt-get dist-upgrade -u.
  • If your system doesn’t have a valid MAC address (they show as ‘8c:00:00:00:00:00’ if you run ‘ip a’), you can fix this to avoid collisions if you install multiple P550’s on the same network. Connect the Debug USB-C connector on the back to one of the host’s USB-A slots. Use minicom (use Ctrl-A X to exit) to talk to it.
apt-get install minicom
minicom -o -D /dev/ttyUSB3
#cmd: ifconfig
inet 192.168.0.2 netmask: 255.255.240.0
gatway 192.168.0.1
SOM_Mac0: 8c:00:00:00:00:00
SOM_Mac1: 8c:00:00:00:00:00
MCU_Mac: 8c:00:00:00:00:00
#cmd: setmac 0 CA:FE:42:17:23:00
The MAC setting will be valid after rebooting the carrier board!!!
MAC[0] addr set to CA:FE:42:17:23:00(ca:fe:42:17:23:0)
#cmd: setmac 1 CA:FE:42:17:23:01
The MAC setting will be valid after rebooting the carrier board!!!
MAC[1] addr set to CA:FE:42:17:23:01(ca:fe:42:17:23:1)
#cmd: setmac 2 CA:FE:42:17:23:02
The MAC setting will be valid after rebooting the carrier board!!!
MAC[2] addr set to CA:FE:42:17:23:02(ca:fe:42:17:23:2)
#cmd:
  • For reference, if you wish to interact with the MCU you may do that via OpenOCD and telnet, like the following (as root on the P550). You need to have the Debug USB-C connected to a USB-A host port.
apt-get install openocd
wget https://raw.githubusercontent.com/sifiveinc/hifive-premier-p550-tools/refs/heads/master/mcu-firmware/stm32_openocd.cfg
echo 'acc115d283ff8533d6ae5226565478d0128923c8a479a768d806487378c5f6c3 stm32_openocd.cfg' | sha256sum -c
openocd -f stm32_openocd.cfg &
telnet localhost 4444
...
  • Reboot the machine and login remotely from your laptop. Gain root and set up SSH public-key authentication and disable SSH password logins.
echo 'ssh-ed25519 AAA...' > ~/.ssh/authorized_keys
sed -i 's;^#PasswordAuthentication.*;PasswordAuthentication no;' /etc/ssh/sshd_config
service ssh restart
  • With an NVMe device in the PCIe slot, create an LVM partition where the GitLab runner will live:
parted /dev/nvme0n1 print
blkdiscard /dev/nvme0n1
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart jas-p550-nvm-02 ext2 1MiB 100% align-check optimal 1
parted /dev/nvme0n1 set 1 lvm on
partprobe /dev/nvme0n1
pvcreate /dev/nvme0n1p1
vgcreate vg0 /dev/nvme0n1p1
lvcreate -L 400G -n glr vg0
mkfs.ext4 -L glr /dev/mapper/vg0-glr

Now with a reasonable setup ready, let’s install the GitLab Runner. The following is adapted from gitlab-runner’s official installation documentation. The normal installation flow doesn’t work because they don’t publish riscv64 apt repositories, so you will have to perform upgrades manually.

# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner_riscv64.deb
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner-helper-images.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner_riscv64.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner-helper-images.deb
echo '68a4c2a4b5988a5a5bae019c8b82b6e340376c1b2190228df657164c534bc3c3 gitlab-runner-helper-images.deb' | sha256sum -c
echo 'ee37dc76d3c5b52e4ba35cf8703813f54f536f75cfc208387f5aa1686add7a8c gitlab-runner_riscv64.deb' | sha256sum -c
dpkg -i gitlab-runner-helper-images.deb gitlab-runner_riscv64.deb

Remember the NVMe device? Let’s not forget to use it, to avoid wear and tear on the internal MMC root disk. Do this now, before any files appear in /home/gitlab-runner, or you will have to move them manually.

gitlab-runner stop
echo 'LABEL=glr /home/gitlab-runner ext4 defaults,noatime 0 1' >> /etc/fstab
systemctl daemon-reload
mount /home/gitlab-runner

Next, register gitlab-runner and configure it. Replace the token glrt-REPLACEME below with the registration token you get from your GitLab project’s Settings -> CI/CD -> Runners -> New project runner. I used the tag ‘riscv64‘ and a runner description of the hostname.

gitlab-runner register --non-interactive --url https://gitlab.com --token glrt-REPLACEME --name $(hostname) --executor docker --docker-image debian:stable

We install podman, and configure gitlab-runner to use it as a non-root user.

apt-get install podman
gitlab-runner stop
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 gitlab-runner

You need to run some commands as the gitlab-runner user, but unfortunately some interaction between sudo/su and pam_systemd makes this harder than it should be. So you have to set up SSH for the user and log in via SSH to run the commands. Does anyone know of a better way to do this? (One possible alternative is sketched after the commands below.)

# on the p550:
cp -a /root/.ssh/ /home/gitlab-runner/
chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/
# on your laptop:
ssh gitlab-runner@jas-p550-01
systemctl --user --now enable podman.socket
systemctl --user --now start podman.socket
loginctl enable-linger gitlab-runner gitlab-runner
systemctl status --user podman.socket
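
One possible answer to the question above, as a hedged sketch rather than part of the original setup: the systemd-container package provides machinectl, which can open a clean login session (with pam_systemd applied) for the gitlab-runner user without going through SSH.

apt-get install systemd-container
# open a proper login session as the gitlab-runner user on the local host
machinectl shell gitlab-runner@
# inside that session, manage the user-level podman socket as above
systemctl --user --now enable podman.socket
systemctl --user status podman.socket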

We modify /etc/gitlab-runner/config.toml as follows, replace 997 with the user id shown by systemctl status above. See feature flags documentation for more documentation.

...
[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
...
[runners.docker]
host = "unix:///run/user/997/podman/podman.sock"

Note that unlike the documentation I do not add the ‘privileged = true‘ parameter here. I will come back to this later.

Restart the system to confirm that pushing a .gitlab-ci.yml with a job that uses the riscv64 tag like the following works properly.

dump-env-details-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64 ]
  script:
    - set

Your gitlab-runner should now be receiving jobs and running them in rootless podman. You may view the log using journalctl as follows:

journalctl --follow _SYSTEMD_UNIT=gitlab-runner.service

To stop the graphical environment and disable some unnecessary services, you can use:

systemctl set-default multi-user.target
systemctl disable openvpn cups cups-browsed sssd colord

At this point, things were working fine and I was running many successful builds. Now starts the fun part with operational aspects!

I had a problem when running buildah to build a new container from within a job, and noticed that aardvark-dns was crashing. You can use the Debian ‘aardvark-dns‘ binary instead.

wget http://ftp.de.debian.org/debian/pool/main/a/aardvark-dns/aardvark-dns_1.14.0-3_riscv64.deb
echo 'df33117b6069ac84d3e97dba2c59ba53775207dbaa1b123c3f87b3f312d2f87a aardvark-dns_1.14.0-3_riscv64.deb' | sha256sum -c
mkdir t
cd t
dpkg -x ../aardvark-dns_1.14.0-3_riscv64.deb .
mv /usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.ubuntu
mv usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.debian

My setup uses podman in rootless mode without passing the --privileged parameter or any --cap-add parameters to add non-default capabilities. This is sufficient for most builds. However, if you try to create a container using buildah from within a job, you may see errors like this:

Writing manifest to image destination
Error: mounting new container: mounting build container "8bf1ec03d967eae87095906d8544f51309363ddf28c60462d16d73a0a7279ce1": creating overlay mount to /var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/I3TWYVYTRZ4KVYCT6FJKHR3WHW,upperdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/diff,workdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1

According to GitLab runner security considerations, you should not enable the ‘privileged = true’ parameter, and the alternative appears to be running Podman as root with privileged=false. Indeed, setting privileged=true as in the following example solves the problem, and I suppose running podman as root would too.

[[runners]]
[runners.docker]
privileged = true

Can we do better? After some experimentation, and reading open issues with suggested capabilities and configuration snippets, I ended up with the following configuration. It runs podman in rootless mode (as the gitlab-runner user) without --privileged, but adds the CAP_SYS_ADMIN capability and exposes the /dev/fuse device. Still, this is running as a non-root user on the machine, so I think it is an improvement compared to using --privileged and also compared to running podman as root.

[[runners]]
[runners.docker]
privileged = false
cap_add = ["SYS_ADMIN"]
devices = ["/dev/fuse"]

Still, I worry about the security properties of such a setup, so I only enable these settings for a separately configured runner instance that I use when I need this docker-in-docker (oh, I meant buildah-in-podman) functionality. I found one article discussing Rootless Podman without the privileged flag that suggests --isolation=chroot, but I have yet to make this work. Suggestions for improvement are welcome.

Happy Riscv64 Building!

Update 2025-05-05: I was able to make it work without the SYS_ADMIN capability too, with a GitLab /etc/gitlab-runner/config.toml like the following:

[[runners]]
  [runners.docker]
    privileged = false
    devices = ["/dev/fuse"]

And passing --isolation chroot to Buildah like this:

buildah build --isolation chroot -t $CI_REGISTRY_IMAGE:name image/

I’ve updated the blog title to add the word “capability-less” as well. I’ve confirmed that the same recipe works on podman on a ppc64el platform too. Remaining loopholes are escaping from the chroot into the non-root gitlab-runner user, and escalating that privilege to root. The /dev/fuse device and the sub-uid/gid ranges may be privilege escalation vectors here; otherwise I believe you’ve found a serious software security issue rather than a configuration mistake.

25 April, 2025 06:30PM by simon

hackergotchi for Bits from Debian

Bits from Debian

Debian Project Leader election 2025 is over, Andreas Tille re-elected!

The voting period and tally of votes for the Debian Project Leader election has just concluded and the winner is Andreas Tille, who has been elected for the second time. Congratulations!

Out of a total of 1,030 developers, 362 voted. As usual in Debian, the voting method used was the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2025 page.

Many thanks to Andreas Tille, Gianfranco Costamagna, Julian Andres Klode, and Sruthi Chandran for their campaigns, and to our Developers for voting.

The new term for the project leader started on April 21st and will expire on April 20th 2026.

25 April, 2025 10:05AM by Jean-Pierre Giraud

April 24, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 1: An ATOM Echo voice satellite

Back when I set up my home automation, I ended up with one piece that used an external service: Amazon Alexa. I’d rather not have done this, but voice control is extremely convenient, both for us and guests. Since then Home Assistant has done a lot of work in developing the capability of a local voice assistant - 2023 was their Year of Voice. I’ve had brief looks at this in the past, but never quite had the time to dig into setting it up, and was put off by the fact that a lot of the setup instructions were just “Download our prebuilt components”. While I admire the efforts to get Home Assistant fully packaged for Debian, I accept that’s a tricky proposition, and settle for running it in a venv on a Debian stable container. Voice requires a lot more binary components, and I want to have “voice satellites” in more than one location, so I set about trying to understand a bit better what I was deploying, and actually building the binary bits myself.

This is the start of a write-up of that. I’ll break it into a bunch of posts, trying to cover one bit in each, because otherwise this will get massive. Let’s start with some requirements:

  • All local processing; no call-outs to external services
  • Ability to have multiple voice satellites in the house
  • A desire to do wake word detection on the satellites, to avoid lots of network audio traffic all the time
  • As clean an install on a Debian stable based system as possible
  • Binaries built locally
  • No need for a GPU

My house server is an AMD Ryzen 7 5700G, so my expectation was that I’d have enough local processing power to be able to do this. That turned out to be a valid assumption - speech to text really has come a long way in recent years. I’m still running Home Assistant 2024.3.3 - the last one that supports (but complains about) Python 3.11. Trixie has started the freeze process, so once it releases I’ll look at updating the HA install. For now what I have has turned out to be Good Enough, but I know there have been improvements upstream I’m missing.

Finally, before I get into the details, I should point out that if you just want to get started with a voice assistant on Home Assistant and don’t care about what’s under the hood, there are a bunch of more user friendly details on Home Assistant’s site itself, and they have pre-built images you can just deploy.

My first step was sorting out a “voice satellite”. This is the device that actually has a microphone and speaker and communicates with the main Home Assistant setup. I’d seen the post about a $13 voice assistant, and as a result had an ATOM Echo sitting on my desk I hadn’t got around to setting up.

Here, we skip a bit of delving into exactly what’s going on under the hood, even if we’re compiling locally. This is a constrained embedded device, and while I’m familiar with the ESP32 IDF build system, I just accepted that using ESPHome and letting it do its thing was the quickest way to get up and running. It is possible to do this all via the web with a pre-built image, but I wanted to change the wake word to “Hey Jarvis” rather than the default “Okay Nabu”, and that was a good reason to bother doing a local build. We’ll get into actually building a voice satellite on Debian in later posts.

I started with the default upstream assistant config and tweaked it a little for my setup:

diff of my configuration tweaks
$ diff -u m5stack-atom-echo.yaml assistant.yaml
--- m5stack-atom-echo.yaml    2025-04-18 13:41:21.812766112 +0100
+++ assistant.yaml  2025-01-20 17:33:24.918585244 +0000
@@ -1,7 +1,7 @@
 substitutions:
-  name: m5stack-atom-echo
+  name: study-atom-echo
   friendly_name: M5Stack Atom Echo
-  micro_wake_word_model: okay_nabu  # alexa, hey_jarvis, hey_mycroft are also supported
+  micro_wake_word_model: hey_jarvis  # alexa, hey_jarvis, hey_mycroft are also supported
 
 esphome:
   name: ${name}
@@ -16,15 +16,26 @@
     version: 4.4.8
     platform_version: 5.4.0
 
+# Enable logging
 logger:
+
+# Enable Home Assistant API
 api:
+  encryption:
+    key: "TGlrZVRoaXNJc1JlYWxseUl0Rm9vbGlzaFBlb3BsZSE="
 
 ota:
   - platform: esphome
-    id: ota_esphome
+    password: "itsnotarealthing"
 
 wifi:
+  ssid: "My Wifi Goes Here"
+  password: "AndThePasswordGoesHere"
+
+  # Enable fallback hotspot (captive portal) in case wifi connection fails
   ap:
+    ssid: "Study-Atom-Echo Fallback Hotspot"
+    password: "ThisIsRandom"
 
 captive_portal:


(I note that the current upstream config has moved on a bit since I first did this, but I double checked the above instructions still work at the time of writing. I end up pinning ESPHome to the right version below due to that.)

It turns out to be fairly easy to setup ESPHome in a venv and get it to build + flash the image for you:

Instructions for building + flashing ESPHome to ATOM Echo
noodles@sevai:~$ python3 -m venv esphome-atom-echo
noodles@sevai:~$ . esphome-atom-echo/bin/activate
(esphome-atom-echo) noodles@sevai:~$ cd esphome-atom-echo/
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$  pip install esphome==2024.12.4
Collecting esphome==2024.12.4
  Using cached esphome-2024.12.4-py3-none-any.whl (4.1 MB)
…
Successfully installed FontTools-4.57.0 PyYAML-6.0.2 appdirs-1.4.4 attrs-25.3.0 bottle-0.13.2 defcon-0.12.1 esphome-2024.12.4 esphome-dashboard-20241217.1 freetype-py-2.5.1 fs-2.4.16 gflanguages-0.7.3 glyphsLib-6.10.1 glyphsets-1.0.0 openstep-plist-0.5.0 pillow-10.4.0 platformio-6.1.16 protobuf-3.20.3 puremagic-1.27 ufoLib2-0.17.1 unicodedata2-16.0.0
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome compile assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
Linking .pioenvs/study-atom-echo/firmware.elf
/home/noodles/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch5/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: missing --end-group; added as last command line option
RAM:   [=         ]  10.6% (used 34632 bytes from 327680 bytes)
Flash: [========  ]  79.8% (used 1463813 bytes from 1835008 bytes)
Building .pioenvs/study-atom-echo/firmware.bin
Creating esp32 image...
Successfully created esp32 image.
esp32_create_combined_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
Wrote 0x176fb0 bytes to file /home/noodles/esphome-atom-echo/.esphome/build/study-atom-echo/.pioenvs/study-atom-echo/firmware.factory.bin, ready to flash to offset 0x0
esp32_copy_ota_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
==================================================================================== [SUCCESS] Took 130.57 seconds ====================================================================================
INFO Successfully compiled program.
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome upload --device /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0 assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v4.7.0
Serial port /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
Connecting....
Chip is ESP32-PICO-D4 (revision v1.1)
Features: WiFi, BT, Dual Core, 240MHz, Embedded Flash, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 64:b7:08:8a:1b:c0
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00010000 to 0x00176fff...
Flash will be erased from 0x00001000 to 0x00007fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Flash will be erased from 0x00009000 to 0x0000afff...
Compressed 1470384 bytes to 914252...
Wrote 1470384 bytes (914252 compressed) at 0x00010000 in 82.0 seconds (effective 143.5 kbit/s)...
Hash of data verified.
Compressed 25632 bytes to 16088...
Wrote 25632 bytes (16088 compressed) at 0x00001000 in 1.8 seconds (effective 113.1 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 134...
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.1 seconds (effective 383.7 kbit/s)...
Hash of data verified.
Compressed 8192 bytes to 31...
Wrote 8192 bytes (31 compressed) at 0x00009000 in 0.1 seconds (effective 813.5 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...
INFO Successfully uploaded program.


And then you can watch it boot (this is mine already configured up in Home Assistant):

Watching the ATOM Echo boot
$ picocom --quiet --imap lfcrlf --baud 115200 /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
I (29) boot: ESP-IDF 4.4.8 2nd stage bootloader
I (29) boot: compile time 17:31:08
I (29) boot: Multicore bootloader
I (32) boot: chip revision: v1.1
I (36) boot.esp32: SPI Speed      : 40MHz
I (40) boot.esp32: SPI Mode       : DIO
I (45) boot.esp32: SPI Flash Size : 4MB
I (49) boot: Enabling RNG early entropy source...
I (55) boot: Partition Table:
I (58) boot: ## Label            Usage          Type ST Offset   Length
I (66) boot:  0 otadata          OTA data         01 00 00009000 00002000
I (73) boot:  1 phy_init         RF data          01 01 0000b000 00001000
I (81) boot:  2 app0             OTA app          00 10 00010000 001c0000
I (88) boot:  3 app1             OTA app          00 11 001d0000 001c0000
I (96) boot:  4 nvs              WiFi data        01 02 00390000 0006d000
I (103) boot: End of partition table
I (107) esp_image: segment 0: paddr=00010020 vaddr=3f400020 size=58974h (362868) map
I (247) esp_image: segment 1: paddr=0006899c vaddr=3ffb0000 size=03400h ( 13312) load
I (253) esp_image: segment 2: paddr=0006bda4 vaddr=40080000 size=04274h ( 17012) load
I (260) esp_image: segment 3: paddr=00070020 vaddr=400d0020 size=f5cb8h (1006776) map
I (626) esp_image: segment 4: paddr=00165ce0 vaddr=40084274 size=112ach ( 70316) load
I (665) boot: Loaded app from partition at offset 0x10000
I (665) boot: Disabling RNG early entropy source...
I (677) cpu_start: Multicore app
I (677) cpu_start: Pro cpu up.
I (677) cpu_start: Starting app cpu, entry point is 0x400825c8
I (0) cpu_start: App cpu up.
I (695) cpu_start: Pro cpu start user code
I (695) cpu_start: cpu freq: 160000000
I (695) cpu_start: Application information:
I (700) cpu_start: Project name:     study-atom-echo
I (705) cpu_start: App version:      2024.12.4
I (710) cpu_start: Compile time:     Apr 18 2025 17:29:39
I (716) cpu_start: ELF file SHA256:  1db4989a56c6c930...
I (722) cpu_start: ESP-IDF:          4.4.8
I (727) cpu_start: Min chip rev:     v0.0
I (732) cpu_start: Max chip rev:     v3.99 
I (737) cpu_start: Chip rev:         v1.1
I (742) heap_init: Initializing. RAM available for dynamic allocation:
I (749) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (755) heap_init: At 3FFB8748 len 000278B8 (158 KiB): DRAM
I (761) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (767) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (774) heap_init: At 40095520 len 0000AAE0 (42 KiB): IRAM
I (781) spi_flash: detected chip: gd
I (784) spi_flash: flash io: dio
I (790) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
[I][logger:171]: Log initialized
[C][safe_mode:079]: There have been 0 suspected unsuccessful boot attempts
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 0 cached, 1 written, 0 failed
[I][app:029]: Running through setup()...
[C][esp32_rmt_led_strip:021]: Setting up ESP32 LED Strip...
[D][template.select:014]: Setting up Template Select
[D][template.select:023]: State from initial (could not load stored index): On device
[D][select:015]: 'Wake word engine location': Sending state On device (index 1)
[D][esp-idf:000]: I (100) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 

[D][binary_sensor:034]: 'Button': Sending initial state OFF
[C][light:021]: Setting up light 'M5Stack Atom Echo 8a1bc0'...
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:041]:   Color mode: RGB
[D][template.switch:046]:   Restored state ON
[D][switch:012]: 'Use listen light' Turning ON.
[D][switch:055]: 'Use listen light': Sending state ON
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:047]:   State: ON
[D][light:051]:   Brightness: 60%
[D][light:059]:   Red: 100%, Green: 89%, Blue: 71%
[D][template.switch:046]:   Restored state OFF
[D][switch:016]: 'timer_ringing' Turning OFF.
[D][switch:055]: 'timer_ringing': Sending state OFF
[C][i2s_audio:028]: Setting up I2S Audio...
[C][i2s_audio.microphone:018]: Setting up I2S Audio Microphone...
[C][i2s_audio.speaker:096]: Setting up I2S Audio Speaker...
[C][wifi:048]: Setting up WiFi...
[D][esp-idf:000]: I (206) wifi:
[D][esp-idf:000]: wifi driver task: 3ffc8544, prio:23, stack:6656, core=0
[D][esp-idf:000]: 

[D][esp-idf:000][wifi]: I (1238) system_api: Base MAC address is not set

[D][esp-idf:000][wifi]: I (1239) system_api: read default base MAC address from EFUSE

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi firmware version: ff661c3
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi certification version: v7.0
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1286) wifi:
[D][esp-idf:000][wifi]: config NVS flash: enabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1297) wifi:
[D][esp-idf:000][wifi]: config nano formating: disabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1317) wifi:
[D][esp-idf:000][wifi]: Init data frame dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1338) wifi:
[D][esp-idf:000][wifi]: Init static rx mgmt buffer num: 5
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1348) wifi:
[D][esp-idf:000][wifi]: Init management short buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1368) wifi:
[D][esp-idf:000][wifi]: Init dynamic tx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1389) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer size: 1600
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1399) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer num: 10
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1419) wifi:
[D][esp-idf:000][wifi]: Init dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000]: I (1441) wifi_init: rx ba win: 6

[D][esp-idf:000]: I (1441) wifi_init: tcpip mbox: 32

[D][esp-idf:000]: I (1450) wifi_init: udp mbox: 6

[D][esp-idf:000]: I (1450) wifi_init: tcp mbox: 6

[D][esp-idf:000]: I (1460) wifi_init: tcp tx win: 5760

[D][esp-idf:000]: I (1471) wifi_init: tcp rx win: 5760

[D][esp-idf:000]: I (1481) wifi_init: tcp mss: 1440

[D][esp-idf:000]: I (1481) wifi_init: WiFi IRAM OP enabled

[D][esp-idf:000]: I (1491) wifi_init: WiFi RX IRAM OP enabled

[C][wifi:061]: Starting WiFi...
[C][wifi:062]:   Local MAC: 64:B7:08:8A:1B:C0
[D][esp-idf:000][wifi]: I (1513) phy_init: phy_version 4791,2c4672b,Dec 20 2023,16:06:06

[D][esp-idf:000][wifi]: I (1599) wifi:
[D][esp-idf:000][wifi]: mode : sta (64:b7:08:8a:1b:c0)
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1600) wifi:
[D][esp-idf:000][wifi]: enable tsf
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1605) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[D][wifi:482]: Starting scan...
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 1 cached, 0 written, 0 failed
[W][micro_wake_word:151]: Wake word detection can't start as the component hasn't been setup yet
[D][esp-idf:000][wifi]: I (1646) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[W][component:157]: Component wifi set Warning flag: scanning for networks
…
[I][wifi:617]: WiFi Connected!
…
[D][wifi:626]: Disabling AP...
[C][api:026]: Setting up Home Assistant API server...
[C][micro_wake_word:062]: Setting up microWakeWord...
[C][micro_wake_word:069]: Micro Wake Word initialized
[I][app:062]: setup() finished successfully!
[W][component:170]: Component wifi cleared Warning flag
[W][component:157]: Component api set Warning flag: unspecified
[I][app:100]: ESPHome version 2024.12.4 compiled on Apr 18 2025, 17:29:39
…
[C][logger:185]: Logger:
[C][logger:186]:   Level: DEBUG
[C][logger:188]:   Log Baud Rate: 115200
[C][logger:189]:   Hardware UART: UART0
[C][esp32_rmt_led_strip:187]: ESP32 RMT LED Strip:
[C][esp32_rmt_led_strip:188]:   Pin: 27
[C][esp32_rmt_led_strip:189]:   Channel: 0
[C][esp32_rmt_led_strip:214]:   RGB Order: GRB
[C][esp32_rmt_led_strip:215]:   Max refresh rate: 0
[C][esp32_rmt_led_strip:216]:   Number of LEDs: 1
[C][template.select:065]: Template Select 'Wake word engine location'
[C][template.select:066]:   Update Interval: 60.0s
[C][template.select:069]:   Optimistic: YES
[C][template.select:070]:   Initial Option: On device
[C][template.select:071]:   Restore Value: YES
[C][gpio.binary_sensor:015]: GPIO Binary Sensor 'Button'
[C][gpio.binary_sensor:016]:   Pin: GPIO39
[C][light:092]: Light 'M5Stack Atom Echo 8a1bc0'
[C][light:094]:   Default Transition Length: 0.0s
[C][light:095]:   Gamma Correct: 2.80
[C][template.switch:068]: Template Switch 'Use listen light'
[C][template.switch:091]:   Restore Mode: restore defaults to ON
[C][template.switch:057]:   Optimistic: YES
[C][template.switch:068]: Template Switch 'timer_ringing'
[C][template.switch:091]:   Restore Mode: always OFF
[C][template.switch:057]:   Optimistic: YES
[C][factory_reset.button:011]: Factory Reset Button 'Factory reset'
[C][factory_reset.button:011]:   Icon: 'mdi:restart-alert'
[C][captive_portal:089]: Captive Portal:
[C][mdns:116]: mDNS:
[C][mdns:117]:   Hostname: study-atom-echo-8a1bc0
[C][esphome.ota:073]: Over-The-Air updates:
[C][esphome.ota:074]:   Address: study-atom-echo.local:3232
[C][esphome.ota:075]:   Version: 2
[C][esphome.ota:078]:   Password configured
[C][safe_mode:018]: Safe Mode:
[C][safe_mode:020]:   Boot considered successful after 60 seconds
[C][safe_mode:021]:   Invoke after 10 boot attempts
[C][safe_mode:023]:   Remain in safe mode for 300 seconds
[C][api:140]: API Server:
[C][api:141]:   Address: study-atom-echo.local:6053
[C][api:143]:   Using noise encryption: YES
[C][micro_wake_word:051]: microWakeWord:
[C][micro_wake_word:052]:   models:
[C][micro_wake_word:015]:     - Wake Word: Hey Jarvis
[C][micro_wake_word:016]:       Probability cutoff: 0.970
[C][micro_wake_word:017]:       Sliding window size: 5
[C][micro_wake_word:021]:     - VAD Model
[C][micro_wake_word:022]:       Probability cutoff: 0.500
[C][micro_wake_word:023]:       Sliding window size: 5

[D][api:103]: Accepted 192.168.39.6
[W][component:170]: Component api cleared Warning flag
[W][component:237]: Component api took a long time for an operation (58 ms).
[W][component:238]: Components should block for at most 30 ms.
[D][api.connection:1446]: Home Assistant 2024.3.3 (192.168.39.6): Connected successfully
[D][ring_buffer:034]: Created ring buffer with size 2048
[D][micro_wake_word:399]: Resetting buffers and probabilities
[D][micro_wake_word:195]: State changed from IDLE to START_MICROPHONE
[D][micro_wake_word:107]: Starting Microphone
[D][micro_wake_word:195]: State changed from START_MICROPHONE to STARTING_MICROPHONE
[D][esp-idf:000]: I (11279) I2S: DMA Malloc info, datalen=blocksize=1024, dma_buf_count=4

[D][micro_wake_word:195]: State changed from STARTING_MICROPHONE to DETECTING_WAKE_WORD


That’s enough to get a voice satellite that can be configured up in Home Assistant; you’ll need the ESPHome Integration added, then for the noise_psk key you use the same string as I have under api/encryption/key in my diff above (obviously do your own, I used dd if=/dev/urandom bs=32 count=1 | base64 to generate mine).

If you’re like me and a compulsive VLANer and firewaller even within your own network then you need to allow Home Assistant to connect on TCP port 6053 to the ATOM Echo, and also allow access to/from UDP port 6055 on the Echo (it’ll send audio from that port to Home Assistant, then receive back audio to the same port).
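
For illustration, firewall rules along the following lines would cover that traffic. This is only a sketch: the addresses are placeholders (Home Assistant at 192.168.39.6 as in the log above, the Echo at a made-up 192.168.39.40), and it assumes an nftables setup with an inet filter table and a forward chain; adjust to whatever your own firewall actually uses.

# Home Assistant -> Echo: ESPHome API on TCP 6053
nft add rule inet filter forward ip saddr 192.168.39.6 ip daddr 192.168.39.40 tcp dport 6053 accept
# Home Assistant -> Echo: audio sent back to UDP 6055 on the Echo
nft add rule inet filter forward ip saddr 192.168.39.6 ip daddr 192.168.39.40 udp dport 6055 accept
# Echo -> Home Assistant: audio sent from UDP 6055 on the Echo
nft add rule inet filter forward ip saddr 192.168.39.40 ip daddr 192.168.39.6 udp sport 6055 accept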

At this point you can now shout “Hey Jarvis, what time is it?” at the Echo, and the white light will turn flashing blue (indicating it’s heard the wake word). Which means we’re ready to teach Home Assistant how to do something with the incoming audio.

24 April, 2025 06:34PM

April 23, 2025

hackergotchi for Thomas Lange

Thomas Lange

FAI 6.4 and new ISO images available

The new FAI release 6.4 comes with some nice new features.

It now supports installing the Xfce edition of Linux Mint 22.1 'Xia'. There's now an additional Linux Mint ISO [1] which does an unattended Linux Mint installation via FAI and does not need a network connection because all packages are available on the ISO.

The package_config configurations now support arbitrary boolean expressions with FAI classes like this:

PACKAGES install UBUNTU && XORG && ! MINT

If you use the command ifclass in customization scripts, you can now also use these expressions.
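
As a rough sketch of what that could look like in a customization script (assuming ifclass accepts the expression as a single quoted argument, analogous to the package_config syntax above; the class names are just the ones from the example):

#! /bin/bash
# hypothetical snippet for an FAI customization script
if ifclass 'UBUNTU && XORG && ! MINT' ; then
    echo "applying Ubuntu/Xorg settings that should not apply to Linux Mint"
fi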

The tool fai-kvm for starting a KVM virtual machine now uses UEFI variables if the VM is started in a UEFI environment, so boot settings are preserved across reboots.

For the installation of Rocky Linux and AlmaLinux in a UEFI environment, some configuration files were added.

New ISO images [2] are available, but it may take some time until the FAIme service [3] supports customized Linux Mint images.

23 April, 2025 01:36PM

hackergotchi for Michael Prokop

Michael Prokop

Lessons learned from running an open source project for 20 years @ GLT25

Time flies by so quickly, it’s >20 years since I started the Grml project.

I’m giving a (german) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26 at the Grazer Linuxtage (Graz/Austria). Would be great to see you there!

23 April, 2025 06:11AM by mika

Russell Coker

Last Post About the Yoga Gen3

Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. That is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to “code 0284 TCG-compliant functionality-related error” which suggests a motherboard problem. So I bought a new motherboard.

The system still crashes with the new motherboard. It seems to only crash when on battery so that indicates that it might be a power issue causing the crashes. I configured the BIOS to disable the TPM and that avoided the TCG messages and tunes on boot but it still crashes.

An additional problem is that the design of the Yoga series is that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement about half the keys don’t retract which means that they will damage the screen more when the lid is closed (the screen was already damaged from the keys when I bought it).

I think that spending more money on trying to fix this would be a waste. So I’ll use it as a test machine and I might give it to a relative who needs a portable computer to be used when on power only.

For the moment I’m back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don’t notice any difference from the Yoga Gen 3.

Now I’m considering getting a Thinkpad X1 Carbon Extreme with a 4K display. But they seem a bit expensive at the moment. Currently there’s only one on eBay Australia for $1200 ono.

23 April, 2025 05:11AM by etbe

April 22, 2025

hackergotchi for Joey Hess

Joey Hess

offgrid electric car

Eight months ago I came up my rocky driveway in an electric car, with the back full of solar panel mounting rails. I didn't know how I'd manage to keep it charged. I got the car earlier than planned, with my offgrid solar upgrade only beginning. There's no nearby EV charger, and winter was coming, with less solar power every day. Still, it was the right time to take a leap to offgrid EV life.

My existing 1 kilowatt solar array could charge the car only 5 miles on a good day. Here's my first try at charging the car offgrid:

first feeble charging offgrid

It was not worth charging the car that way, the house battery tended to get drained while doing that, and adding cycles to that battery is not desirable. So that was only a proof of concept, I knew I'd need to upgrade.

My goal with the upgrade was to charge the car directly from the sun, even when it was cloudy, using the house battery only to skate over brief darker periods (like a thunderstorm). By mid October, I had enough solar installed to do that (5 kilowatts).

me standing in front of solar fence

first charging from solar fence

Using this, in 2 days I charged the car up from 57% to 82%, and took off on a celebratory road trip to Niagara Falls, where I charged the car from hydro power from a dam my grandfather had engineered.

When I got home, it was November. Days were getting ever shorter. My solar upgrade was only 1/3rd complete and could charge the car 30-some miles per day, but only on a good day, and weather was getting worse. I came back with a low state of charge (both car and me), and needed to get back to full in time for my Thanksgiving trip at the end of the month. I decided to limit my trips to town.

charging up gradually through the month of November

This kind of medium term planning about car travel was new to me. But not too unusual for offgrid living. You look at the weather forecast and make some rough plans, and get to feel connected to the natural world a bit more.

December is the real test for offgrid solar, and honestly this was a bit rough, with a road trip planned for the end of the month. I did the usual holiday stuff but otherwise holed up at home a bit more than I usually would. Charging was limited and the cold made it charge less efficiently.

bleak December charging

Still, I was busy installing more solar panels, and by winter solstice, was back to charging 30 miles on a good day.

Of course, from there out things improved. In January and February I was able to charge up easily enough for my usual trips despite the cold. By March the car was often getting full before I needed to go anywhere, and I was doing long round trips without bothering to fast charge along the way, coming home low, knowing even cloudy days would let it charge up enough.

That brings me up to today. The car is 80% full and heading up toward 100% for a long trip on Friday. Despite the sky being milky white today with no visible sun, there's plenty of power to absorb, and the car charger turned on at 11 am with the house battery already full.

My solar upgrade is only 2/3rds complete, and also I have not yet installed my inverter upgrade, so the car can only currently charge at 9 amps despite much more solar power often being available. So I'm looking forward to how next December goes with my full planned solar array and faster charging.

But first, a summer where I expect the car will mostly be charged up and ready to go at all times, and the only car expense will be fast charging on road trips!


By the way, the code I've written to automate offgrid charging that runs only when there's enough solar power is here.

And here are the charging graphs for the other months. All told, it's charged 475 kwh offgrid, enough to drive more than 1500 miles.

January
February
March
April

22 April, 2025 04:50PM