Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us "We will not hide problems".


February 21, 2026

Jonathan Dowland

Lanzarote

I want to get back into the habit of blogging, but I've struggled. I've had several ideas of topics to try and write about, but I've not managed to put aside the time to do it. I thought I'd try and bash out a one-take, stream-of-consciousness-style post now, to get back into the swing.

I'm writing from the lounge of my hotel room in Lanzarote, where my family have gone for the school break. The weather at home has been pretty awful this year, and this week is traditionally quite miserable at the best of times. Here it's been dry with highs of around 25℃.

It's been an unusual holiday in one respect: one of my kids is struggling with Autistic Burnout. We were really unsure whether taking her was a good idea: and certainly towards the beginning of the holiday felt we may have made a mistake. Writing now, at the end, I'm not so sure. But we're very unlikely to have anything resembling a traditional summer holiday for the foreseeable.

Managing Autistic Burnout, and the ways the UK healthcare and education systems manage it (or fail to), has been a huge part of my recent life. Perhaps I should write more about that. This coming week the government are likely to publish plans for reforming Special Needs support in Education. Like many other parents, we wait in hope and fear to see what they plan.

In anticipation of spending a lot of time in the hotel room with my preoccupied daughter I (unusually) packed a laptop and set myself a nerd-task: writing a Pandoc parser ("reader") for the MoinMoin Wiki markup language. There's some unfinished prior art from around 2011 by Simon Michael (of hledger) to work from.

The motivation was our plan to migrate the Debian Wiki away from MoinMoin. We've since decided to approach that differently but I might finish the Reader anyway, it's been an interesting project (and a nice excuse to write Haskell) and it will be useful for others.

Unusually (for me) I've not been reading fiction on this trip: I took with me Human Compatible by Prof Stuart Russell: discussing how to solve the problem of controlling a future Artificial Intelligence. I've largely avoided the LLM hype cycle we're suffering through at the moment, and I have several big concerns about it (moral, legal, etc.), and felt it was time to try and make my concerns more well-formed and test them. This book has been a big help in doing so, although it doesn't touch on the issue of copyright, which is something I am particularly interested in at the moment.

21 February, 2026 07:00PM

Thomas Goirand

Seamlessly upgrading a production OpenStack cluster in 4 hours, with a 2K-line shell script


tl;dr:

To the question "what does it take to upgrade OpenStack?", my personal answer is: less than 2K lines of dash script. I'll describe its internals here, and why I believe it is the correct solution.

Why write this blog post

During FOSDEM 2024, I was asked "how do you handle upgrades?". I answered with a big smile and a short "with a very small shell script", as I couldn't explain in 2 minutes how it was done. But just saying "it is great this way" doesn't give readers enough hints to trust it. Why and how did I do it the right way? This blog post is an attempt to answer that question more thoroughly.

Upgrading OpenStack in production

I wrote this script maybe 2 or 3 years ago, though I'm only blogging about it today because… I did such an upgrade on a public cloud in production last Tuesday evening (ie: the first region of the Infomaniak public cloud). I'd say the cluster is moderately large (as of today: about 8K+ VMs running, 83 compute nodes, 12 network nodes, … for a total of 10880 physical CPU cores and 125 TB of RAM if I only count the compute servers). It took "only" 4 hours to do the upgrade (though I already wrote some more code to speed this up for the next time…). It went super smoothly, without a glitch. I mostly just sat there, reading the script output… and went to bed once it finished running. The next day, all my colleagues at Infomaniak were nicely congratulating me on how smoothly it went (a big thanks to all of you who did). I couldn't dream of an upgrade that smooth! :)

Still not impressed? Boring read? Yeah… let’s dive into more technical details.

Intention behind the implementation

My script isn't perfect. I won't ever pretend it is. But at least, it minimizes the downtime of every OpenStack service. It also is a "by the book" implementation of what's written in the OpenStack documentation, following every upstream advice. As a result, it is fully seamless for some OpenStack services, and as HA as OpenStack can be for others. The upgrade process is of course idempotent and can be re-run in case of failure. Here's why.

General idea

My upgrade script does things in a certain order, respecting what is documented about upgrades in the OpenStack documentation. It basically does:

  • Upgrade all dependencies
  • Upgrade all services one by one, across the whole cluster

Installing dependencies

The first thing the upgrade script does is:

  • disable puppet on all nodes of the cluster
  • switch the APT repository
  • apt-get update on all nodes
  • install library dependencies on all nodes

For this last step, a static list of all needed dependency upgrades is maintained between each release of OpenStack, and for each type of node. Then, for each package in this list, the script checks with dpkg-query that the package really is installed, and with apt-cache policy that it really is going to be upgraded (maybe there's an easier way to do this?). This way, no package is marked as manually installed by mistake during the upgrade process. This ensures that "apt-get --purge autoremove" really does what it should, and that the script is really idempotent.
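
The check described here could look roughly like the following sketch (the list file name and the exact commands are my illustrative guesses, not the actual script):

# Sketch: only upgrade listed packages that are installed and actually upgradable
for pkg in $(cat openstack-deps.list); do
    # skip packages that are not already installed on this node
    dpkg-query -W -f='${Status}\n' "$pkg" 2>/dev/null | grep -q 'ok installed' || continue
    # skip packages that apt would not upgrade anyway
    installed=$(apt-cache policy "$pkg" | awk '/Installed:/ {print $2}')
    candidate=$(apt-cache policy "$pkg" | awk '/Candidate:/ {print $2}')
    [ "$installed" != "$candidate" ] && apt-get install -y "$pkg"
done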

The idea then, is that once all dependencies are installed, upgrading and restarting leaf packages (ie: OpenStack services like Nova, Glance, Cinder, etc.) is very fast, because the apt-get command doesn't need to install all dependencies. So at this point, doing "apt-get install python3-cinder" for example (which will also, thanks to dependencies, upgrade cinder-api and cinder-scheduler, if it's on a controller node) only takes a few seconds. This principle applies to all nodes (controller nodes, network nodes, compute nodes, etc.), which helps a lot in speeding up the upgrade and reducing unavailability.

hapc

At its core, the oci-cluster-upgrade-openstack-release script uses haproxy-cmd (ie: /usr/bin/hapc) to drain each to-be-upgraded API server from haproxy. Hapc is a simple Python wrapper around the haproxy admin socket: it sends commands to it through an easy-to-understand CLI. So it is possible to reliably upgrade an API service only after it's drained away. Draining means one just waits for the last query to finish and the client to disconnect before giving the backend server any more queries. If you do not know hapc / haproxy-cmd, I recommend trying it: it's going to be hard for you to stop using it once you've tested it. Its bash-completion script makes it VERY easy to use, and it is helpful in production. But not only that: it is also nice to have when writing this type of upgrade script. Let's dive into haproxy-cmd.

Example on how to use haproxy-cmd

Let me show you. First, ssh into one of the 3 controllers and find where the virtual IP (VIP) is located with "crm resource locate openstack-api-vip" or with a (simpler) "crm status". Let's ssh into the server that holds the VIP, and now, let's drain it away from haproxy.

$ hapc list-backends
$ hapc drain-server --backend glancebe --server cl1-controller-1.infomaniak.ch --verbose --wait --timeout 50
$ apt-get install glance-api
$ hapc enable-server --backend glancebe --server cl1-controller-1.infomaniak.ch

Upgrading the control plane

My upgrade script leverages hapc just like above. For each OpenStack project, it's done in this order on the first node holding the VIP (a rough sketch of the whole sequence follows the list below):

  • “hapc drain-server” of the API, so haproxy gracefully stops querying it
  • stop all services on that node (including non-API services): stop, disable and mask with systemd.
  • upgrade that service's Python code. For example: "apt-get install python3-nova", which will also pull nova-api, nova-conductor, nova-novncproxy, etc., but the services won't start automatically as they've been stopped + disabled + masked in the previous bullet point.
  • perform the db_sync so that the db is up-to-date [1]
  • start all services (unmask, enable and start with systemd)
  • re-enable the API backend with “hapc enable-server”
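
Concretely, the per-service sequence sketched above could look roughly like this, using Nova as an example (the backend name, host name, unit names and db_sync invocation are illustrative assumptions, not the actual script):

hapc drain-server --backend novabe --server controller-1.example.com --wait --timeout 50
systemctl stop nova-api nova-conductor nova-scheduler nova-novncproxy
systemctl disable nova-api nova-conductor nova-scheduler nova-novncproxy
systemctl mask nova-api nova-conductor nova-scheduler nova-novncproxy
apt-get install -y python3-nova          # pulls the upgraded services, which stay stopped
su -s /bin/sh -c 'nova-manage api_db sync && nova-manage db sync' nova
systemctl unmask nova-api nova-conductor nova-scheduler nova-novncproxy
systemctl enable nova-api nova-conductor nova-scheduler nova-novncproxy
systemctl start nova-api nova-conductor nova-scheduler nova-novncproxy
hapc enable-server --backend novabe --server controller-1.example.com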

Starting at [1], the risk is that the other nodes are now running against a new version of the database schema with an old version of the code that isn't compatible with it. But that window doesn't last long, because the next step is to take care of the other (usually 2) nodes of the OpenStack control plane:

  • “hapc drain-server” of the API of the other 2 controllers
  • stop of all services on these 2 controllers [2]
  • upgrade of the package
  • start of all services

So while there's technically zero downtime, some issues may still happen between [1] and [2] above, because the new DB schema and the old code (both for the API and other services) are up and running at the same time. These are however expected to be rare cases (some OpenStack projects don't even have db changes between some OpenStack releases, and most queries often continue to work against the upgraded db), and the cluster will only be in that state for a very short time, so that's fine, and better than a full API downtime.

Satellite services

Then there are the satellite services that need to be upgraded, like Neutron, Nova and Cinder. Nova is the least problematic, as it has all the code to rewrite JSON object schemas on the fly so that it continues to work during an upgrade. Though it's a known issue that Cinder doesn't have that feature (last time I checked), and it's probably the same for Neutron (maybe recent-ish versions of OpenStack do use oslo.versionedobjects?). Anyway, upgrades on these nodes are done right after the control plane for each service.

Parallelism and upgrade timings

As we're dealing with potentially hundreds of nodes per cluster, a lot of operations are performed in parallel. I chose to simply use the shell's & operator together with some "wait" calls, so that not too many jobs run in parallel. For example, when disabling SSH on all nodes, this is done 24 nodes at a time, which is fine. The number of nodes per batch depends on the type of thing that's being done. For example, while it's perfectly OK to disable puppet on 24 nodes at the same time, it is not OK to do that with Neutron services. In fact, each time a Neutron agent is restarted, the script explicitly waits for 30 seconds. This conveniently avoids a hailstorm of messages in RabbitMQ, and neutron-rpc-server becoming too busy. All of this waiting is necessary, and it is one of the reasons why it can sometimes take that long to upgrade a (moderately big) cluster.
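
The batching pattern is simple enough to sketch in a few lines of POSIX shell (NODES and do_something_on are placeholders, not the actual script):

count=0
for node in $NODES; do
    do_something_on "$node" &       # e.g. ssh "$node" 'puppet agent --disable'
    count=$((count + 1))
    if [ $((count % 24)) -eq 0 ]; then
        wait                        # let the current batch of 24 finish before starting more
    fi
done
wait                                # wait for any remaining background jobs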

Not using config management tooling

Some of my colleagues would have preferred that I used something like Ansible. However, there's no reason to use such a tool if the idea is just to run some shell commands on every server. It is way more efficient (in terms of programming) to just use bash / dash to do the work. And if you want my point of view about Ansible: using YAML for doing such programming would be crazy. YAML is simply not suited for a job where if, case, and loops are needed. I am well aware that Ansible has workarounds and it could be done, but it wasn't my choice.

21 February, 2026 12:44AM by Goirand Thomas

February 20, 2026

Bits from Debian

Proxmox Platinum Sponsor of DebConf26

We are pleased to announce that Proxmox has committed to sponsor DebConf26 as a Platinum Sponsor.

Proxmox develops powerful, yet easy-to-use open-source server solutions. The comprehensive open-source ecosystem is designed to manage diverse IT landscapes, from single servers to large-scale distributed data centers. Our unified platform integrates server virtualization, easy backup, and rock-solid email security, ensuring seamless interoperability across the entire portfolio. With the Proxmox Datacenter Manager, the ecosystem also offers a "single pane of glass" for centralized management across different locations.

Since 2005, all Proxmox solutions have been built on the rock-solid Debian platform. We are proud to return to DebConf26 as a sponsor because the Debian community provides the foundation that makes our work possible. We believe in keeping IT simple, open, and under your control.

Thank you very much, Proxmox, for your support of DebConf26!

Become a sponsor too!

DebConf26 will take place from July 20th to 25th 2026 in Santa Fe, Argentina, and will be preceded by DebCamp, from 13th to 19th July 2026.

DebConf26 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf26 website at https://debconf26.debconf.org/sponsors/become-a-sponsor/.

20 February, 2026 05:26PM by Leonardo Martínez, Santiago Ruano Rincón

February 19, 2026

Daniel Pocock

Bitcoin: government engagement contradictions

In 2025, US President Donald Trump signed an executive order creating the US federal Bitcoin reserve.

When cryptocurrencies are seized from criminals, they are placed in the reserve. If the reserve did not exist, the authorities could sell seized Bitcoins on a Bitcoin exchange.

From a market perspective, the government's willingness to hold Bitcoins rather than selling them immediately has various implications.

If the government was selling them, this would put downward pressure on the prices.

Instead, holding them gives them more legitimacy and may entice other people to purchase Bitcoins.

Governments with a strong attitude against climate change would be far less likely to hold Bitcoins due to the associated electricity waste.

Ultimately, some Bitcoin founders and proponents argued for Bitcoin to exist as a currency out of government control.

When I hear Bitcoin proponents talking about their enthusiasm for government participation, it raises questions about the philosophy of Bitcoin. Has the philosophy changed over time, is it viable for different participants to have different philosophies or is the question of philosophy irrelevant?

For those who like the philosophy of having some wealth in an asset outside legal and political control, their needs are already well met by gold and silver bullion. The notion that Bitcoin would be a convenient digital replacement for physical metal ownership is a dangerous fantasy. The very people who could benefit most from buying gold and silver bullion are giving their money to the crypto barons. The crypto barons take that money and buy more gold and silver for themselves.

A small government holding some Bitcoin is not the same as a government controlling the Bitcoin market.

Nonetheless, as a large government acquires more and more Bitcoins they will have a bigger incentive to control the technology, for better or worse. Some US government agencies and some of the largest US tech companies may have the resources to outsmart the Bitcoin industry. Maybe they already have done so. Maybe they even created it in the first place.

In comparison, if the government put the same amount of money into equities, buying shares in small businesses, this funds innovation and new ideas. New companies create jobs and the equities eventually pay dividends to the investors. Bitcoins don't create jobs and they don't pay dividends.

In fact, I think the real motivation for those people who want the government to hold Bitcoins is the realization that the process of building a reserve takes more Bitcoins out of circulation. This, in turn, pushes up the price. Eventually, the private owners of Bitcoins who promoted that policy, having already acquired their Bitcoins at cheaper prices before the federal government, will be able to sell their personal holdings of Bitcoin for a profit. In other words, over time, the system transfers wealth from the public purse to the private purses of the people who got in first. The public do not appear to gain any benefit from that transaction.

Continue reading the inconvenient truth about cryptocurrency.

The author holds an MIT MicroMasters in Data, Economics and Development Policy. He does not hold any crypto "assets". Swiss financial regulator FINMA will neither confirm nor deny whether an investigation on this blog precipitated the resignation of their deputy CEO.

19 February, 2026 08:00AM

February 18, 2026

Clint Adams

Holger says

sq network keyserver search $id ; sq cert export --cert=$id > $id.asc

Posted on 2026-02-18

18 February, 2026 10:59PM

Antoine Beaupré

net-tools to iproute cheat sheet

This is also known as: "ifconfig is not installed by default anymore, how do I do this only with the ip command?"

I have been slowly training my brain to use the new commands, but I sometimes forget some. So, here are a few equivalences from the old net-tools package to the new iproute2, about 10 years late:

net-tools                  | iproute2                                   | shorter form                | what it does
arp -an                    | ip neighbor                                | ip n                        | show the ARP/neighbour table
ifconfig                   | ip address                                 | ip a                        | show current IP address
ifconfig                   | ip link                                    | ip l                        | show link stats (up/down/packet counts)
route                      | ip route                                   | ip r                        | show or modify the routing table
route add default GATEWAY  | ip route add default via GATEWAY           | ip r a default via GATEWAY  | add default route to GATEWAY
route del ROUTE            | ip route del ROUTE                         | ip r d ROUTE                | remove ROUTE (e.g. default)
netstat -anpe              | ss --all --numeric --processes --extended  | ss -anpe                    | list listening processes, less pretty

Another trick

Also note that I often alias ip to ip -br -c as it provides a much prettier output.
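
For example, a line like this in your shell configuration (~/.bashrc or equivalent) is all it takes:

alias ip='ip -br -c'    # brief, colorized output for all ip subcommands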

Compare, before:

anarcat@angela:~> ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
    altname wlp166s0
    altname wlx8cf8c57333c7
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
       valid_lft 40699sec preferred_lft 40699sec

After:

anarcat@angela:~> ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
wlan0            DOWN           
virbr0           DOWN           192.168.122.1/24 
eth0             UP             192.168.0.108/24 

I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat.

Also imagine pretty colors above.

Finally, I don't have a cheat sheet for iw vs iwconfig (from wireless-tools) yet. I just use NetworkManager now and rarely have to mess with wireless interfaces directly.

Background and history

For context, there are traditionally two ways of configuring the network in Linux:

  • the old way, with commands like ifconfig, arp, route and netstat, those are part of the net-tools package
  • the new way, mostly (but not entirely!) wrapped in a single ip command, that is the iproute2 package

It seems like the latter was made "important" in Debian in 2008, which means every release since Debian 5 "lenny" (!) has featured the ip command.

The former net-tools package was demoted in December 2016 which means every release since Debian 9 "stretch" ships without an ifconfig command unless explicitly requested. Note that this was mentioned in the release notes in a similar (but, IMHO, less useful) table.

(Technically, the net-tools Debian package source still indicates it is Priority: important but that's a bug I have just filed.)

Finally, and perhaps more importantly, the name iproute is hilarious if you are a bilingual french speaker: it can be read as "I proute" which can be interpreted as "I fart" as "prout!" is the sound a fart makes. The fact that it's called iproute2 makes it only more hilarious.

18 February, 2026 04:30PM

Thomas Lange

42.000 FAI.me jobs created

The FAI.me service has reached another milestone:

The 42.000th job was submitted via the web interface since the beginning of this service in 2017.

The idea was to provide a simple web interface for end users for creating the configs for the fully automatic installation with only minimal questions and without knowing the syntax of the configuration files. Thanks a lot for using this service and for all your feedback.

The next job can be yours!

P.S.: I would like to get more feedback for the FAI.me service. What do you like most? What's missing? Do you have any success story about how you use the customized ISO for your deployment? Please fill out the FAI questionnaire or send feedback via email to fai.me@fai-project.org

About FAI.me

FAI.me is the service for building your own customized images via a web interface. You can create an installation or live ISO or a cloud image. For Debian, multiple release versions can be chosen, as well as installations for Ubuntu Server, Ubuntu Desktop, or Linux Mint.

Multiple options are available, like selecting different desktop environments, the language and keyboard, and adding a user with a password. Optional settings include adding your own package list, choosing a backports kernel, adding a postinst script, adding an ssh public key, choosing a partition layout, and some more.

18 February, 2026 02:55PM

Dirk Eddelbuettel

qlcal 0.1.0 on CRAN: Easier Calendar Switching

The eighteenth release of the qlcal package arrived at CRAN today. There has been no calendar update in QuantLib 1.41, so it has been relatively quiet since the last release last summer, but we have now added a nice new feature (more below), leading to a new minor release version.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This releases makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say

> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"), 
         \(x) qlcal::isBusinessDay(xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX 
             TRUE              TRUE              TRUE 
> 

to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday

> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"),
         \(x) qlcal::isBusinessDay(as.Date("2026-02-16"), xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX 
            FALSE             FALSE              TRUE 
> 

The full details from NEWS.Rd follow.

Changes in version 0.1.0 (2026-02-18)

  • Invalid calendars return id ‘TARGET’ now

  • Calendar object can be created on the fly and passed to the date-calculating functions; if missing global one used

  • For several functions a missing date object now implies computation on the current date, e.g. isBusinessDay()

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Edited 2026-02-21 to correct a minor earlier error: it referenced a QuantLib 1.42 release which does not (yet) exist.

18 February, 2026 01:21PM

February 17, 2026

Russell Coker

February 16, 2026

Antoine Beaupré

Keeping track of decisions using the ADR model

In the Tor Project system administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve your processes and documentation.

The new process

We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").

The ADR process is, for us, pretty simple. It consists of three things:

  1. a simpler template
  2. a simpler process
  3. communication guidelines separate from the decision record

The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:

  • Context: What is the issue that we're seeing that is motivating this decision or change?

  • Decision: What is the change that we're proposing and/or doing?

  • Consequences: What becomes easier or more difficult to do because of this change?

  • More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.

  • Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum

The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.

An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping in a document all sorts of details like pricing or in-depth alternatives comparison, we record those in the discussion issue, keeping the document shorter.

The process

The whole process is simple enough that it's worth quoting in full as well:

Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.

A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision and the new template (and process) clarifies decision makers, for each decision.

Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of Triviality, also known as the "Bike Shed syndrome".

The new process better identifies stakeholders:

  • "informed" users (previously "affected users")
  • "consulted" (previously undefined!)
  • "decision maker" (instead of the vague "approval")

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).

Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.

Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.

How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:

  1. the RFC process "doesn't include any sort of decision-making framework"
  2. "RFC processes tend to lead to endless discussion"
  3. the process "rewards people who can write to exhaustion"
  4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because the right stakeholders weren't looped in.

Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.

What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.

Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people that are already using a similar process, or that will adopt one after reading this.

Note: this article was also published on the Tor Blog.

16 February, 2026 08:21PM

Philipp Kern

What is happening with this "connection verification"?

You might see a verification screen pop up on more and more Debian web properties. Unfortunately the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered scalable serving systems. The issues have been at three layers:

  1. Apache's serving capacity runs full - with no threads left to serve requests. This means that your connection will sit around for a long time, not getting accepted. In theory this can be configured, but that would require requests to be handled in time.
  2. Startup costs of request handlers are too high, because we spawn a process for every request. This currently affects the BTS and dgit's browse interface. packages.debian.org has been fixed, which increased scalability sufficiently.
  3. Requests themselves are too expensive to be served quickly - think git blame without caching.

Optimally we would go and solve some scalability issues with the services, however there is also a question of how much we want to be able to serve - as AI scraper demand is just a steady stream of requests that are not shown to humans.

How is it implemented?

DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is provided by hitch, and TLS "on-loading" is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache if the content is cacheable (e.g. does not depend on cookies), that is not the primary reason for using it: it can be used for flexible query and response rewriting.

If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in Javascript - because that looked similar to what other projects do (e.g. haphash, which originally inspired the solution). However, so far it looks like scrapers generally do not run with Javascript enabled, so this whole crypto proof-of-work business could probably be replaced with just a Javascript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.

Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.

For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). Turns out that the search engines do not actually run Javascript either and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.

Conclusion 

I hope that we have now found something like the sweet spot, where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.

16 February, 2026 07:55PM by Philipp Kern (noreply@blogger.com)

Antoine Beaupré

Kernel-only network configuration on Linux

What if I told you there is a way to configure the network on any Linux server that:

  1. works across all distributions
  2. doesn't require any software installed apart from the kernel and a boot loader (no systemd-networkd, ifupdown, NetworkManager, nothing)
  3. is backwards compatible all the way back to Linux 2.0, in 1996

It has literally 8 different caveats on top of that, but is still totally worth your time.

Known options in Debian

People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least that is what the Debian wiki claims, namely: ifupdown, systemd-networkd, NetworkManager, and netplan.

At this point, I feel ifupdown is on its way out, possibly replaced by systemd-networkd. NetworkManager already manages most desktop configurations.

A "new" network configuration system

The method is this:

  • ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older)

So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old.

The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.

What are you doing?

The trick is to add an ip= parameter to the kernel's command-line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the useless ones:

  • <client-ip>: IP address of the server
  • <gw-ip>: address of the gateway
  • <netmask>: netmask, in quad notation
  • <device>: interface name, if multiple available
  • <autoconf>: how to configure the interface, namely:
    • off or none: no autoconfiguration (static)
    • on or any: use any protocol (default)
    • dhcp, essentially like on for all intents and purposes
  • <dns0-ip>, <dns1-ip>: IP addresses of the primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf

We're ignoring the options:

  • <server-ip>: IP address of the NFS server, exported to /proc/net/pnp
  • <hostname>: name of the client, typically sent in the DHCP requests, which may lead to a DNS record being created in some networks
  • <ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel

Note that the Red Hat manual has a different opinion:

ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:interface:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although server-id is weird), and the autoconf variable has other settings, so that's a bit odd.

Examples

For example, this command-line setting:

ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.

A DHCP only configuration will look like this:

ip=::::::dhcp

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel command line, and that depends on your boot loader.

GRUB

With GRUB, you need to edit (on Debian) the file /etc/default/grub (ugh) and find a line like:

GRUB_CMDLINE_LINUX=

and change it to:

GRUB_CMDLINE_LINUX=ip=::::::dhcp
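
After saving the file, the GRUB configuration has to be regenerated for the change to take effect (standard GRUB procedure on Debian, not specific to this trick):

update-grub    # regenerates /boot/grub/grub.cfg from /etc/default/grub
# then reboot to apply the new kernel command line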

systemd-boot and UKI setups

For systemd-boot UKI setups, it's simpler: just add the setting to the /etc/kernel/cmdline file. Don't forget to include anything that's non-default from /proc/cmdline.

This assumes that the Cmdline=@ setting in /etc/kernel/uki.conf points at that file. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.

Other systems

This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

  • Arch (11 options, mostly /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot or /etc/kernel/cmdline for UKI)
  • Fedora (mostly /etc/default/grub, maybe more; RHEL mentions grubby, possibly some systemd-boot things here as well)
  • Gentoo (5 options, mostly /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install)

It's interesting that /etc/default/grub is consistent across all distributions above, while the systemd-boot setups are all over the place (except for the UKI case), although I would have expected those to be more standard than GRUB.

dropbear-initramfs

If dropbear-initramfs is set up, it already requires you to have such a configuration, and it might not work out of the box.

This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).

To fix this, you need to disable that "feature":

IFDOWN="none"

This will keep dropbear-initramfs from disabling the configured interface.
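
As a practical note (my assumption about a standard dropbear-initramfs setup, not from the original text), that setting lives in the dropbear-initramfs configuration file, and the initramfs has to be rebuilt afterwards:

# in /etc/dropbear/initramfs/dropbear.conf (or /etc/dropbear-initramfs/config on older releases):
IFDOWN="none"
# then rebuild the initramfs so the change is picked up:
update-initramfs -u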

Why?

Traditionally, I've set up my machines with ifupdown on servers and NetworkManager on laptops, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack, feels like legacy, and is Debian-specific.

Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.

I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.

So in a sense, this is a "Don't Repeat Yourself" solution.

Caveats

Also known as: "wait, that works?" Yes, it does! That said...

  1. This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.

  2. This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.

  3. It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.

  4. It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.

  5. I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)

  6. It will not automatically reconfigure the interface on link changes, but ifupdown does not either.

  7. It will not write /etc/resolv.conf for you but the dns0-ip and dns1-ip do end up in /proc/net/pnp which has a compatible syntax, so a common configuration is:

    ln -s /proc/net/pnp /etc/resolv.conf
    
  8. I have not really tested this at scale: only a single, test server at home.

Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.

Cleanup

Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:

apt purge systemd-networkd ifupdown network-manager netplan.io

Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.

Credits

This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!

16 February, 2026 02:40PM

Benjamin Mako Hill

Why do people participate in similar online communities?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.

It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.

When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?

We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).

We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they worked in:

  1. The ability to connect to specific information and narrowly scoped discussions.
  2. The ability to socialize with people who are similar to themselves.
  3. Attention from the largest possible audience.

Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities” depicts three key benefits that people seek from online communities and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.


This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.

This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.

16 February, 2026 03:13AM by Benjamin Mako Hill

February 15, 2026

Daniel Pocock

Iran conflict: Bitcoin could be left behind by traditional gold and silver

When the price of gold and silver bullion recently shot up, many leaders in the Bitcoin community expressed surprise that Bitcoin prices didn't go up too. In fact, Bitcoin crashed.

The last time the Strait of Hormuz was closed, the prices of gold and silver bullion doubled. Bitcoin didn't exist back in 1972, so nobody can say what will happen to Bitcoin prices if the same circumstances were repeated.

The USS Gerald R. Ford was christened on 9 November 2013 and incidentally, President Trump was elected for the first time on 9 November 2016.

News reports tell us that President Trump has asked the Ford to leave the Caribbean and travel to Iran. Don't be surprised if the USS Gerald R. Ford never reaches the gulf: it may show up next week in Cuba or Greenland instead.

When President Trump was elected back in 2016, I published a blog post about two movies that anticipate the Trump administration. One of them concerns conflict in the Strait of Hormuz, and there has never been a better time to watch it.

If Bitcoin prices do take a big fall so quickly after the last big drop then it may permanently tarnish the reputation of Bitcoin and other cryptocurrencies too. If new users become scared of buying Bitcoins then the existing users will find it harder to convert their coins back to cash when necessary.

Continue reading the inconvenient truth about cryptocurrency.

The author holds an MIT MicroMasters in Data, Economics and Development Policy. He does not hold any crypto "assets". Swiss financial regulator FINMA will neither confirm nor deny whether an investigation on this blog precipitated the resignation of their deputy CEO.

15 February, 2026 10:00PM

Ian Jackson

Adopting tag2upload and modernising your Debian packaging

Introduction

tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.

We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.

tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.

This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.

(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)

Why

Ease of development

git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs and patches representation to a normal, git-first, representation, makes everything simpler.

dgit and tag2upload do automatically many things that have to be done manually, or with separate commands, in dput-based upload workflows.

They will also save you from a variety of common mistakes. For example, with tag2upload or dgit, you cannot accidentally overwrite an NMU. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it's part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.

tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.

See the Day-to-day work section below to see how simple your life could be.

Don’t fear a learning burden; instead, start forgetting all that nonsense

Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.

We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.

The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.

And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.

Properly publishing the source code

One of Debian’s foundational principles is that we publish the source code.

Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.

But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:

  • The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare debian/, or something even stranger.
  • There is no guarantee that the DEP-14 debian/1.2.3-7 tag on salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn’t cross-check the .dsc against git.
  • There is no guarantee that the presence of a DEP-14 tag even means that that version of the package is in the archive.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.

tag2upload and dgit do solve this problem. When you upload, they:

  1. Make a canonical-form (patches-applied) derivative of your git branch;
  2. Ensure that there is a well-defined correspondence between the git tree and the source package;
  3. Publish both the DEP-14 tag and a canonical-form archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
  4. Record the git information in the Dgit field in .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.

(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)
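
For example, fetching a package together with the maintainer's actual git history looks something like this (the package name is just a placeholder):

dgit clone foo sid     # clone source package "foo" from unstable, with git history where available
cd foo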

Adopting tag2upload - the minimal change

tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.

So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.

Start with the wiki page and git-debpush(1) (ideally from forky aka testing).
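
In the simplest case, once debian/changelog is finalised on your packaging branch, an upload is then roughly the following (a sketch; see git-debpush(1) for the options your branch layout may need):

git debpush     # creates and pushes a signed tag to Salsa, which triggers the tag2upload service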

You don’t need to do any of the other things recommended in this article.

Overhauling your workflow, using advanced git-first tooling

The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.

Assumptions

  • Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.

  • You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.

  • Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.

  • You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.

  • Your co-maintainers are also adopting the new approach.

tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.

Topics and tooling

This article will guide you in adopting:

  • tag2upload
  • Patches-applied git branch for your packaging
  • Either plain git merge or git-debrebase
  • dgit when a with-binaries upload is needed (NEW)
  • git-based sponsorship
  • Salsa (gitlab), including Debian Salsa CI

Choosing the git branch format

In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.

We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.

rationale

Much traditional Debian tooling like quilt and gbp pq uses the “patches-unapplied” branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.

git merge

Option 1: simply use git, directly, including git merge.

Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.

This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.

This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).

git-debrebase

Option 2: Adopt git-debrebase.

git-debrebase helps maintain your delta as linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.

The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.

This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).

Examples of complex packages using this approach include src:xen and src:sbcl.

Determine upstream git and stop using upstream tarballs

We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.

rationale

Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)

First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I’m going to pretend that upstream version is 1.2.3, and that upstream tagged it v1.2.3.

Edit debian/watch to contain something like this:

version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream’s tag name convention. If debian/watch had a files-excluded, you’ll need to make a filtered version of upstream git.
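For instance (a hedged sketch, not taken from any particular package): if your upstream tags releases as package-1.2.3 rather than v1.2.3, the final field might instead look like this, with the capture group still matching just the version number:

version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/package-(\d\S*)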

git-debrebase

From now on we’ll generate our own .orig tarballs directly from git.

rationale

We need some “upstream tarball” for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we’re using as our upstream. We don’t need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.

Probably, the current .orig in the Debian archive is an upstream tarball, which may differ from the output of git-archive and possibly even have different contents to what’s in git. The legacy archive has trouble with differing .origs for the “same upstream version”.

So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:

git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.

Convert the git branch

git merge

Prepare a new branch on top of upstream git, containing what we want:

git branch -f old-master         # make a note of the old git representation
git reset --hard v1.2.3          # go back to the real upstream git tag
git checkout old-master :debian  # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master         # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you’ve chosen this workflow, there should be hardly any patches.)
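A minimal sketch of that patch step (assuming a plain debian/patches/series file and patches in a form git am can parse, e.g. produced by git format-patch; otherwise apply each with git apply and commit manually):

# apply each patch in series order as a proper git commit
while read -r p; do
    case "$p" in ''|'#'*) continue;; esac
    git am "debian/patches/$p"
done < debian/patches/series
# then drop the patch files from the tree
git rm -r debian/patches
git commit -m "Drop debian/patches; Debian delta is now plain git commits"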

rationale

These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.

git-debrebase

Convert the branch to git-debrebase format and rebase onto the upstream git:

git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.

rationale

The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.

Manually make your history fast forward from the git import of your previous upload.

dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Change the source format

Delete any existing debian/source/options and/or debian/source/local-options.

git merge

Change debian/source/format to 1.0. Add debian/source/options containing -sn.

rationale

We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.

You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.

git-debrebase

Ensure that debian/source/format contains 3.0 (quilt).

Now you are ready to do a local test build.

Sort out the documentation and metadata

Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.
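As a hedged illustration (a sketch for the git merge workflow; adjust the manpage reference and wording if you chose git-debrebase), README.source could say something like:

This package is maintained in git, using the workflow described in
dgit-maint-merge(7).  debian/patches/ is not used: please do not create
or edit anything there.  Changes to upstream files are made as ordinary
git commits on the packaging branch.  Uploads should be done with
tag2upload (git-debpush) or dgit.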

Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.

git merge

Add a note to debian/changelog about the git packaging change.

git-debrebase

git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)

Configure Salsa Merge Requests

git-debrebase

In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.

rationale

Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.

Set up Salsa CI, and use it to block merges of bad changes

Caveat - the tradeoff

gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).

However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.

The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.

Setup procedure

Create debian/salsa-ci.yml containing

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to debian/salsa-ci.yml.

rationale

Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.
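For example, the pipeline can be tuned via variables in that file. The sketch below disables one job; double-check the exact variable names against the current Salsa CI documentation, since they belong to the pipeline and may change:

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

variables:
  SALSA_CI_DISABLE_REPROTEST: 1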

git-debrebase

Add to debian/salsa-ci.yml:

.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig

.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare

build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare

variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0

rationale

Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).

These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.

Push this to salsa and make the CI pass.

If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.

Block untested pushes, preventing regressions

In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.

This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.

gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.

(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)

autopkgtests

Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.

The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.
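As a hedged starting point (the contents below are a sketch; yourprogram --version is a placeholder for whatever cheap end-to-end check makes sense for your package), a minimal smoke test consists of two files:

# debian/tests/control
Tests: smoke
Depends: @

# debian/tests/smoke  (must be executable)
#!/bin/sh
set -e
yourprogram --version

Depends: @ means the test runs against the package’s own installed binary packages.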

Day-to-day work

With this capable tooling, most tasks are much easier.

Making changes to the package

Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.

On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.

For example, you can:

  • Make changes with your editor and commit them.
  • git cherry-pick an upstream commit.
  • git am a patch from a mailing list or from the Debian Bug System.
  • git revert an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:

git merge

Use git-rebase to squash/edit/combine/reorder commits.

git-debrebase

Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude.

Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.

Push the MR branch (topic branch) to Salsa and make a Merge Request.

Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)

If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.

Test build

An informal test build can be done like this:

apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.

If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.

For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.

Uploading to Debian

Start an MR branch for the administrative changes for the release.

Document all the changes you’re going to release, in the debian/changelog.

git merge

gbp dch can help write the changelog for you:

dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main

rationale

--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.

(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)

Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)

dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)

Now you can perform the actual upload:

git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree

git merge

git-debpush

git-debrebase

git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.

Uploading a NEW package to Debian

If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.

Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:

Prepare the changelog update and merge it, as above. Then:

git-debrebase

Create the orig tarball and launder the git-debrebase branch:

git-deborig
git-debrebase quick

rationale

Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.

Build the source and binary packages, locally:

dgit sbuild
dgit push-built

rationale

You don’t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.

New upstream version

Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:

git verify-tag v1.2.4

rationale

Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.
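One possible cross-check (a sketch; the tarball path is a placeholder, and this is not part of the dgit tooling) is to compare the tagged tree against the tarball upstream published, bearing in mind that generated files may legitimately differ:

mkdir -p /tmp/from-tag /tmp/from-tarball
git archive v1.2.4 | tar -C /tmp/from-tag -x
tar -C /tmp/from-tarball --strip-components=1 -xzf ../package-1.2.4.tar.gz
diff -r /tmp/from-tag /tmp/from-tarball    # expect only generated files to differ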

git merge

Simply merge the new upstream version and update the changelog:

git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

git-debrebase

Rebase your delta queue onto the new upstream version:

git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.

After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.

Sponsorship

git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.

When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.

As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:

dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'

git-debrebase

Or to show all the delta as a series of commits:

git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.

Incorporating an NMU

Fetch the NMU into your local git, and see what it contains:

dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.

Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:

git merge dgit/dgit/sid

git-debrebase

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.

Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:

git diff debian/1.2.3-7...dgit/dgit/sid

git-debrebase

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'

If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like

git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)

DFSG filtering (handling non-free files)

Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.

This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.

Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.

rationale

Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.

Initial filtering

git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.

If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2.

Subsequent upstream releases

git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

Removing files by pattern

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
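A minimal sketch of such a script (the patterns here are placeholders; list whatever your package actually needs to strip):

#!/bin/sh
# debian/rm-nonfree: delete non-free material after merging from upstream
set -e
# -f               : proceed even if the merge left these files modified or conflicted
# --ignore-unmatch : don't fail if a pattern matches nothing in this release
git rm -f --ignore-unmatch 'nonfree.exe' 'firmware/*.bin'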

rationale

Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.

Common issues

  • Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.

    It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate build products, but in Debian we should be building from the real source! Fixing this may involve some work, though.

  • gitattributes:

    For Reasons the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.

    Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or replicate the effect of the gitattributes as commits in git.

  • git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.

    If you’re lucky, the code in the submodule isn’t used, in which case you can git rm the submodule.

Further reading

I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.

You may want to look at:

  • dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.

    These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.

    Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.

  • NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)

    You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).

  • Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).

  • tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.

  • dgit reference documentation:

    There is a comprehensive command-line manual in dgit(1). Description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations.

    dgit is a complex and powerful program so this reference material can be overwhelming. So, we recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.

  • Design and implementation documentation for tag2upload is linked to from the wiki.

  • Debian’s git transition blog post from December.

    tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.

    git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.

git-debrebase
  • git-debrebase reference documentation:

    Of course there’s a comprehensive command-line manual in git-debrebase(1).

    git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).




15 February, 2026 01:31PM

February 14, 2026

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

A286874(15) >= 42

The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contains or equals a third value), or equivalently, a superimposed code:

000000000011111
000000011100011
000000101101100
000001010110100
000001101010001
000001110001010
000010011011000
000100100110010
000110010000110
000110100001001
000111001100000
001000110000101
001010000110001
001010101000010
001011000001100
001100001010100
001100010101000
001101000000011
010001000101001
010010001000101
010010110100000
010011000010010
010100001001010
010100010010001
010101100000100
011000000100110
011000100011000
011001011000000
100001001000110
100010000101010
100010100010100
100011010000001
100100000100101
100100111000000
100101000011000
101000001001001
101000010010010
101001100100000
110000001110000
110000010001100
110000100000011
111110000000000

This shows that A286874 a(15) >= 42.

If I had to make a guess, I'd say the equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.

14 February, 2026 06:56PM

hackergotchi for Bits from Debian

Bits from Debian

DebConf 26 Registration and Call for Proposals are open

Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.

The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.

The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.

As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.

The last day to register with guaranteed swag is June 14th.

We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.

The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st.

Call for proposals

The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.

The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.

Become a sponsor

DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.

See you in Santa Fe,

The DebConf 26 Team

14 February, 2026 12:15PM by Carlos Henrique Lima Melara, Santiago Ruano Rincón

February 13, 2026

hackergotchi for Erich Schubert

Erich Schubert

Dogfood Generative AI

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable.

The AI companies ignore web conventions, e.g., they deep-link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs; I suggest that you return 403 on these requests), but do not direct visitors to your site. You do not get a reliable way of opting out of generative AI training or use. For example, the only way to prevent your content from being used in “Google AI Overviews” is to use data-nosnippet and cripple the snippet preview in Google. The “AI” browsers such as Comet and Atlas do not identify as such, but rather pretend they are standard Chromium. There is no way to ban such AI use on your web site.

Generative AI overall is flooding the internet with garbage. It was estimated that 1/3rd of the content uploaded to YouTube is by now AI generated. This includes the same “veteran stories” crap in thousands of variants as well as brainrot content (that at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don’t blame the “creators” – because you can currently earn a decent amount of money from such content, people will generate brainrot content.

If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI-generated fake product reviews, all financed by Amazon PartnerNet commissions. Often with hilarious nonsense such as recommending “sewing thread with German instructions” as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI-generated product reviews – the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they will offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.

Partially because of GenAI, StackOverflow, which used to be one of the most valuable programming resources, is pretty much dead. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google’s ranking is also to blame, as it began favoring “new” content over the existing answered questions – causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, fewer than in SO’s first month, August 2008 (before the official launch).

Many open-source projects are suffering in many ways, e.g., false bug reports that caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.

Science is also flooded with poor AI-generated papers, often reviewed with help from AI. This is largely due to bad incentives – to graduate, you are expected to publish many papers at certain “A” conferences, such as NeurIPS. At these conferences the number of submissions is growing at an insane rate, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.

However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling; I have only seen it in this article by Weßels and Maibaum).

Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, and so never learn the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the fewer skills they acquire.

Dogfood the AI

Let’s dogfood the AI. Here’s an outline:

  1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
  2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too.
    You do not need a high-quality model for this. Use something you can run locally or access for free.
  3. Date everything back in time, remove typical indications of AI use.
  4. Upload to Github, because Microsoft will feed this to OpenAI…

Here is an example prompt that you can use:

You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG

Generate a {lang} implementation (with bugs) of: {n} ({desc})

Remember to remove the BUG comments! If you pick some slightly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data.
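A sketch of the generation step (everything here is an assumption: the endpoint URL, model name, template file and output path are placeholders for whatever locally hosted, OpenAI-compatible server you actually run):

#!/bin/sh
# Fill the prompt template and ask a local model for one flawed example.
set -e
lang="Go"; n="binary search"; desc="find an element in a sorted slice"
prompt=$(sed -e "s/{lang}/$lang/g" -e "s/{n}/$n/" -e "s/{desc}/$desc/" prompt-template.txt)
jq -n --arg m "local-model" --arg p "$prompt" \
    '{model: $m, messages: [{role: "user", content: $p}]}' |
  curl -s -H 'Content-Type: application/json' -d @- \
       http://localhost:8080/v1/chat/completions |
  jq -r '.choices[0].message.content' |
  grep -v 'BUG' > example.go   # strip the BUG marker comments, as noted above

Loop this over your list of algorithms and languages, vary the dates, and review a sample by hand before uploading.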

If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as “model collapse”.

In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of “internet 2.0”, but I do not have a clear vision of how to keep AI out – if AI can train on it, they will. And someone will copy and paste the AI-generated crap back into whatever system we build. Hence I don’t think technology is the answer here, but human networks of trust.

13 February, 2026 10:29AM by Erich Schubert

February 12, 2026

hackergotchi for Daniel Pocock

Daniel Pocock

Jeffrey Epstein crypto disclosure: uncanny timing, Bitcoin demise, pump-and-dump, ponzi schemes

US Justice Department officials have recently released another three million Jeffrey Epstein documents. One of those was an email discussion about a cryptocurrency investment. Epstein immediately suspected it was a pump-and-dump scheme and he felt that his involvement would be a bad idea due to "questionable ethics".

The fact this email was exposed now is another unlucky coincidence for Bitcoin. The original cryptocurrency has been on the ropes after suddenly losing half of its value. News reports talk of a death spiral. Anybody who is about to try cryptocurrency for the first time is going to take one look at the very steep fall on the charts and they are going to try something else instead, like the rush for a traditional safe haven: gold and silver bullion.

Bitcoins in circulation are allegedly worth $2.2 trillion. That figure is only valid in the hypothetical scenario where every Bitcoin could be simultaneously converted to US dollars. In practice, markets don't work that way.

We can roughly estimate the amount of electrical energy used to produce the Bitcoins in circulation. From there, we would anticipate that Bitcoin miners have sold enough Bitcoins to pay their bills and keep mining more.

In practice, it looks like some of the smartest investors were selling their Bitcoins while the price was over $100,000 and they used that money to buy gold and silver bullion before the prices of those metals started to gain momentum.

Where did that money come from though? New investors who decided to try cryptocurrency for the first time. Those are the people holding on to their Bitcoins right now hoping the price will go back up again.

Bitcoin, gold, silver, ponzi scheme, pump dump, Jeffrey Epstein

 

Jason Calacanis, open source, radicals, Gavin Andresen, Amir Taaki, Bitcoin, gold, silver, ponzi scheme, pump dump, Jeffrey Epstein

 

Jeremy Rubin, Blockstream, Bitcoin, gold, silver, ponzi scheme, pump dump, Jeffrey Epstein

 

Continue reading the inconvenient truth about cryptocurrency.

The author holds an MIT MicroMasters in Data, Economics and Development Policy. He does not hold any crypto "assets". Swiss financial regulator FINMA will neither confirm nor deny an investigation on this blog precipitated the resignation of their deputy CEO .

12 February, 2026 07:00PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.27 on CRAN: C++20 Accommodations

Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 ‘to be’), and this turned up misbehavior in packages using RcppSpdlog such as our spdl wrapper (offering a nicer interface from both R and C++) when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.27 (2026-02-11)

  • Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

12 February, 2026 01:59PM

hackergotchi for Daniel Pocock

Daniel Pocock

Relationships evidence: Tiago, Tassia, Thais, Antonio & Debian favoritism, nepotism

One of the consequences of the Debian pregnancy cluster is that the earliest children are now growing up and some of them want to participate in Debianism too. Or at the very least, they want to put Debian on their CV and use the trademark as a tool to help them take credit for the substantial work done by real developers and innovators who bootstrapped the free software ecosystem.

To see how it will work for those children, we can look at existing cases with nieces, nephews, cousins and girlfriends.

I can't imagine any other organization where we would be talking about other peoples' family members like this. In Debianism, however, people have been attacking my family for a number of years now. They publicised rumours about interns seeking a relationship with me in Google Summer of Code and Outreachy. As long as those rumours are sustained, it is only fair that I can publish information to defend the honor of my family and my former interns. In other words, it is necessary to look at every relationship in the group. That includes romantic relationships and it also includes cases where somebody's child or niece/nephew appears to be jumping the queue.

Phil Wyett recently complained that he had been waiting seven years for his Debian Developer application to be completed. He became disillusioned and stopped contributing altogether.

When one of the Debianist power couples introduces their child to Debianism, will their child be left waiting seven years in the new maintainer queue like Phil Wyett?

When travel bursaries are awarded for attending DebConf, will children, especially females, automatically receive the diversity money?

In 2024, for Ada Lovelace Day, Tassia spoke about being a Debian wife:

Most of that group became close friends; I've married one of them, ... we've had our own child and had the pleasure of introducing him to the community; we've mourned the loss of friends and healed together

March 2014: look at the schedule for the Women's MiniDebConf in Barcelona. We can see Tiago and his wife Tassia both came from Canada to Barcelona.

Vicky, Cristina, Barcelona, Debian, Women, MiniDebConf

 

Bartosz Fenski realized what was happening and resigned:

To: <debian-private@lists.debian.org>

I wasn’t too active recently but with the latest activity of our
leader and official support for women-mini-debconf I want to retire
from the project. I don’t want to be part of this anymore. Please
hereby approve my retirenment statement. See you in project where
everyone will be welcome. Not only women.

Bartosz is not sexist and he is not complaining about women. Everybody knows there is a culture of nepotism and favoritism. Certain people appeared to be asking for Debian funds to go to Barcelona with their partners.

A few months after Barcelona, Tássia Camões Araújo was given the title of non-Developing Debian Developer and then immediately appointed to the team of three DebConf chairs.

Now we fast forward ten years. On 22 October 2024 there was an event Online MiniDebConf Brasil 2024. They had a session, in Portuguese, comparing online and in-person events. There is a moderator and three speakers for the session. The schedule gives their names as Paulo Santana, Antonio Terceiro, Thais Araujo, Tássia Camões Araújo.

Paulo Henrique de Lima Santana, Antonio Terceiro, Thaís Rebouças de Araujo, Tássia Camões Araújo, Debian, DebConf, Brazil

 

Paulo introduces the other three participants. Using Youtube auto-translation, we see what he is saying: the three speakers are all related to each other. It is not clear if Thais is the sister, cousin or niece of Tássia. Her last name suggests she is a cousin or a niece. In fact, the word nepotism is derived from the word nephew, the male form of niece.

Paulo Henrique de Lima Santana, Antonio Terceiro, Thaís Rebouças de Araujo, Tássia Camões Araújo, Debian, DebConf, Brazil

 

Remember, Tássia had previously served as one of the DebConf chairs and Antonio Terceiro was appointed to the DebConf committee in 2019. Paulo has told us that Thais Rebouças de Araujo is related to both of those people who have influence over DebConf opportunities.

Remember, in 2019, DebConf19 was held in Brazil. A large group of women from Albania and Kosovo were brought to the conference. None of the women were asked to pay for their own flights or accommodation. Everything they needed was fully paid for them. At the conference dinner, four of those women were seated at the same table as the former Debian leader Chris Lamb. A few weeks after the dinner, the Albanian woman seated closest to Lamb was awarded an Outreachy internship. Evidence: a lovestruck leader?

Chris Lamb, Anisa Kuci, Debconf19

 

Between 2019 and 2022, they spread numerous rumours accusing my last Google Summer of Code intern of having a relationship with me. The woman was half my age and already married. In this video, she tells us how they tried to manipulate her.

Here are some of the examples of the Debian/Zizian mob, people I never met, spreading rumours:

Ariadne Conill

 

and the following year it came up again:

Ariadne Conill

 

In 2022, the misfits spent over $120,000 on legal fees to insist there had been wrongdoing, despite the woman telling us clearly I was a great mentor and she didn't want to be part of a group that behaves worse than a student union.

Debian, legal expenses

 

On 12 September 2023, Thais Rebouças de Araujo used the name Thais Rebouças in the schedule for DebConf23. Without her full name, we can not tell she is related to Tassia.

Lucas Kanashiro, Sergio de Almeida Cipriano Junior, Aquila Macedo, Carlos Henrique Lima Melara, Thais Rebouças de Araujo, Arthur Diniz, Leonardo Gomes

 

Lucas Kanashiro, Sergio de Almeida Cipriano Junior, Aquila Macedo, Carlos Henrique Lima Melara, Thais Rebouças de Araujo, Arthur Diniz, Leonardo Gomes

 

On 15 September 2023, Thaís Rebouças de Araujo, using the name Thais R. Araujo asks to join the Debian Python team and help maintain packages Robber, Delta and Pytest-executable.

She puts Debian Python on her LinkedIn profile:

Thaís Rebouças de Araujo, LinkedIn, Debian

 

Remember when Vincent Danjean joined Debian? He told debian-private (leaked):

When I did the NM process, I knew that I had to give some personal information so that my AM knew me better. I also knew that these mails will be read by FD and DAM. But I knew they will not be public even between other DD. So, I wrote some things about my personal life. I also wrote opinion on other DD (my AM ask me to react to some public claim from theses DD). I do not want that these mails become public (even between DD and even more external people) because I do not remember what I wrote exactly, I do not want to check/rewrite/filter public/private parts, ... These mails have never be intended to be read by people other than the ones that already read them.

Did anybody ask Thaís Rebouças de Araujo to provide collateral? Or was she immune from such requests because of her relationships with other members of the group?

On 7 June 2024, Thaís Rebouças de Araujo registered a business under her own name, registration number 55,442,988/0001-30 with share capital of BRL 2,500 (Source: business registry). Her subsequent conference appearances all appear to be beneficial for her business.

55,442,988 Thais Reboucas de Araujo

 

55,442,988 Thais Reboucas de Araujo

 

55,442,988 Thais Reboucas de Araujo

 

When I registered a business in April 2021, the rogue Debianists immediately started attacking my family and spreading rumours. Remember when they attacked Dr Norbert Preining at Christmas 2018? This was the "Call for experiences of Norbert Preining" thread:

Norbert Preining, Ian Jakcson, Wookey, Joerg Jaspert, Jonathan Wiltshire, Debian

 

On 18 July 2024, the GNOME misfits open a topic in their Discourse forum spreading defamation about Sonny Piers. Over 21,200 people have looked at it so far:

Sonny Piers, GNOME

 

On 28 July 2024, Tássia Camões Araújo and Thaís Rebouças de Araujo give a talk together. Once again, in the schedule, we only see the name Thais Rebouças so we don't get any hint they are related. In the beginning of the talk, each of them introduces the other to the audience. They do not mention the family connection.

DebConf101: forget merit, its all about who you know

Tássia Camões Araújo, Thaís Rebouças de Araujo, Debian, DebConf24

 

Tássia Camões Araújo, Thaís Rebouças de Araujo, Debian, DebConf24

 

On 22 October 2024 there was an event Online MiniDebConf Brasil 2024, as described earlier, where they revealed many of these people are related to each other.

On 14 July 2025, at DebConf25 in France, Tássia Camões Araújo and Thaís Rebouças de Araujo repeat the same talk from the previous year, DebConf 101. Therefore, it looks like Tássia Camões Araújo and Thaís Rebouças de Araujo both got funding for a trip to France.

Tássia Camões Araújo, Thaís Rebouças de Araujo, Debian, DebConf24

 

In December 2025 Thaís Rebouças de Araujo graduated. The plan worked and an employer has hired her even before graduating. She is now at a company called Place Tecnologia e Inovacao S.A:

Thaís Rebouças de Araujo, LinkedIn, Debian

 

What we see here is quite tragic for everybody else in the history of Debian and free software. Tássia Camões Araújo is the wife of Tiago so they both get to travel together. It is not clear if they asked for and received Debian funds on every trip. Now Thaís Rebouças de Araujo has arrived, she has given a talk with her aunt(?) and she has had at least three long distance trips from Brazil, visiting India in 2023, South Korea in 2024 and France in 2025.

Remember, Abraham Raji did a lot of work organising DebConf23 in India. Volunteers were asked to contribute their own money to the cost of the day trip. Raji chose not to pay and he was left out of the paid kayak excursion, he was left alone to swim unsupervised and he drowned, an avoidable death.

The cost of the flights for Tássia Camões Araújo's niece to come from Brazil to India for DebConf23 is more than the cost of providing a free day trip for everybody.

Bringing somebody's niece from Brazil to India was considered a more important expense than keeping the group together and safe during the day trip.

For better or worse, this looks like the situation the Catholic church was trying to avoid when they decided it was better for priests to be celibate.

In a local community organization, for example, an amateur sporting club, it is quite normal to have multiple people from the same families participating because they are all living in the same district.

Debianists have claimed to be a merit-based organization. The Debian Social Contract promises transparency. People working remotely can't easily tell when money is given to somebody's girlfriend or niece. In this case, like many others, the woman is using different permutations of her name. Some women have been using aliases. People who look at the conference schedule or watch it online are kept in the dark about many of these relationships.

Applicants for the Outreachy internships were asked to spend time doing unpaid coding tasks to compete on merit. How do those women feel when they see somebody's niece was given travel and speaking opportunities at every conference?

Every other applicant for funding is expected to share the public links to their work. Here are the links we found for Thaís Rebouças de Araujo:

It is interesting to compare the contributions of Thaís Rebouças de Araujo to people of the same age in the glory days of Debian volunteering. People like Shaya Potter, Joel "Espy" Klecker and Chris Rutter. The early leaks of debian-private include a lot of evidence about how people used to join and contribute spontaneously before money arrived and spoiled the community.

In most organisations, I can't imagine people would ever be pushed to write blogs and emails documenting conflicts of interest like this. At the time when my father died, the Debian/Zizian mob started their obsession with attacking my family and spreading rumours that I was having a relationship with an intern in Google Summer of Code. They even tried to trick the intern to make a complaint about me. She refused and she didn't want to be associated with these people who use women to start rumours.

The snobby set use the Debian trademark and donations to denounce and rub out people who have done decades of real open source development work. Then the same cyberbullies use the trademark, the money, speaking opportunities and whatever else to help their romantic partners, children and nieces/nephews to build a public profile for themselves.

The hypocrisy is extraordinary. Thousands of dollars of diversity funds were spent taking Thaís Rebouças de Araujo and other personal friends to conferences so they can put Debian on their CV and get an advantage in the job market. Yet people who do enormous amounts of real work like Dr Norbert Preining, Sonny Piers, Richard Stallman and I have all been subject to vicious and dishonest attacks on our reputations. Then these imposters come in and take our place. When the imposters appear on a stage at DebConf or when they call themselves a Debian Developer, they are taking credit for the work that everybody else has done in the past. They are standing on the shoulders of giants.

They are shaming and humiliating real developers for making the slightest mistakes with woke language and pronouns and then they put somebody's niece up on a pedastal and pretend she is the next Linus Torvalds.

In the talk at DebConf23, the camera pans around the room. We can see the room is nearly empty. Many talks were nearly empty. All the interesting people like Richard Stallman, Linus Torvalds and I are subject to censorship. People don't expect anybody to say anything interesting so they don't bother going to DebConf.

Lucas Kanashiro, Sergio de Almeida Cipriano Junior, Aquila Macedo, Carlos Henrique Lima Melara, Thais Rebouças de Araujo, Arthur Diniz, Leonardo Gomes

 

Lucas Kanashiro, Sergio de Almeida Cipriano Junior, Aquila Macedo, Carlos Henrique Lima Melara, Thais Rebouças de Araujo, Arthur Diniz, Leonardo Gomes

 

Joel "Espy" Klecker's favorite expression is timeless. R.I.P. Espy (1979-2000).

Joel Espy Klecker, surrounded by morons, t-shirt

 

Please see more about the Debian pregnancy cluster.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

12 February, 2026 12:00PM

Debian pregnancy cluster: why it is public interest

In most community and hobbyist organizations, if you have some volunteers who are romantically involved with each other then their romances and their children are a private subject.

In many places and in healthy organizations, privacy is the default but privacy is not an absolute right either. One or more public interest situations may arise and when they do, the public interest may be stronger than the default expectation of privacy.

When women are used as pawns to spread rumours about somebody else, I feel this is a strong reason to investigate the credibility of the woman and all the men in proximity to the situation. The woman may be telling the truth but in a group with lots of dirty politics, there is a big risk of vendettas. To put it bluntly, in some groups, women are offered financial benefits and social status in exchange for sex and gossip. If we determine that a group works like that then the cost of verifying the truth of any rumour is extremely high. Unless somebody has died or been seriously injured, it becomes better to simply ignore accusations in such a group. If women want to be believed, they can begin by leaving such groups. As the jurist Blackstone put it so well:

"it is better that ten guilty persons escape than that one innocent suffer."

The Debian pregnancy cluster blogs show us that most of the so-called "Debian women" have a romantic connection to somebody else in the group and a lot of these romances go all the way to marriage and pregnancy.

One way we can see that women lie in Debian is by looking at how they stole the word "diversity" from women at large. Diversity, in the honest sense of the word, means bringing new people to each conference. In Debianism, we can see the "diversity" money goes to the same women each year. More significantly, those are women who people already had children with. They know these women intimately, therefore, it is not diversity. Therefore, if those payments are not for real, honest diversity then this money is a benefit going to some developers and not others. Using the word "diversity" to describe these payments to their girlfriends and wives is proof that they routinely and deliberately lie to everybody else.

If anybody dares to point out the truth about such a group, they are accused of spreading lies or "harassment". This creates a very sad situation where the only way to prove the truth about the group is to publish a list of the relationships, the couples and the babies. After publishing the first blog in the series, people still had doubts about whether the Debian pregnancy cluster was a genuine phenomena. After looking at the way the list grew, there is no longer any doubt about how the group sees the long term potential of each new female intern.

When people discuss the Catholic abuse crisis, one of the topics that distinguishes it from other cases of institutional abuse is the presence of celibacy. That is, the priests are forbidden from having a spouse. Some activists have argued that allowing the priests to have a spouse would reduce the number of abuse cases.

This argument brings us to an interesting question: why did the Catholic church choose to have celibacy in the first place?

We know the celibacy exists since the early centuries of the church but we don't have any strong written records of the reason for it. If written records do exist they are in the secret archives of the Vatican.

Public reports about the topic speculate that one of the reasons for celibacy is the fear that the wives and children of priests would use the assets of the parish or they would hope to inherit assets when the priest dies. If the priest has no children then any financial resources will be continually re-invested in the parish and the church at large.

Anybody donating time or money to Debian's so-called diversity programs needs to know this: your time and money is helping a group look for future breeding opportunities.

Back to the type of woman who spreads rumours. When they spread rumours about a man, they are also humiliating other women in proximity to that man. Being specific, Ariadne Conill, one of the Debian/Zizian transgender mob, occupied herself/himself spreading rumours that one of my interns in Outreachy or Google Summer of Code had a relationship with me. As it was not true, every woman became a suspect. All those women are half my age so it was a bigger humiliation for them than it was for my family.

Despite the gross wrongdoing of the Debian/Zizian transgender mob, that doesn't automatically justify publishing all the evidence about the Debian pregnancy cluster either. In a lot of organizations, the leader would step in and simply tell people that some Debian/Zizian transgender tricksters have been caught lying again and the rumours are all false. A clear statement from the leader expressing support for the interns and my family would have avoided most of the publications on my blog.

Debian money is being used as bait to attract women for the Outreachy internships. Women who apply for these internships are asked to spend time doing unpaid work to prove their interest in Debianism and free software in general. Women are led to believe that the initial selection process and subsequent selections for travel and speaking opportunities are based on merit. Before women spend their time doing a project without payment for the recruitment process, I feel those women have a right to know whether the promise of a meritocracy is real or whether it is just an underhanded form of matchmaking.

If a woman is selected for an Outreachy internship and attends a free software conference for the first time, what will be her next step in the organization? It is important for women to have access to evidence about other volunteers who were asked to provide collateral. These demands for collateral have happened in both Debianism and in the NXIVM sex cult. The report from Vincent Danjean gives us damning evidence about the system of collateral:

When I did the NM process, I knew that I had to give some personal information so that my AM knew me better. I also knew that these mails will be read by FD and DAM. But I knew they will not be public even between other DD. So, I wrote some things about my personal life. I also wrote opinion on other DD (my AM ask me to react to some public claim from theses DD). I do not want that these mails become public (even between DD and even more external people) because I do not remember what I wrote exactly, I do not want to check/rewrite/filter public/private parts, ... These mails have never be intended to be read by people other than the ones that already read them.

It is useful to look at the women who had pregnancies and ask if they were asked to provide collateral when they first joined the group. Many of them joined at the same time as Vincent Danjean.

Following on from the topic of collateral, we have the topic of punishments. Not every woman fits into this type of group. Women who showed signs of resistance experienced punishments. Everybody has the right to know about this risk before joining the group. See the blackmailing of my former Outreachy intern, Renata D'Avila.

Now we see the same phenomena emerging in parallel in the GNOME misfits. There were a group of Albanian women at DebConf19 in Brazil. The two women sitting closest to former Debian leader Chris Lamb were subsequently employed at GNOME. After the second woman was given a job, one of the men in GNOME was given a punishment by public humiliation.

Kristi Progri, Sonny Piers, Anisa Kuci

 

It is important to remember that the GNOME problems are analogous to the Debian pregnancy cluster because they recruited single, young European (Albanian) women who have had at least some history of romance with other people in the community.

Anisa Kuci, Jonathan Beliën

 

Read the most recent Sonny Piers conspiracy update.

Do you remember any couples from your student union who stayed together and had a baby together? Is Debianism more comparable to a professional organization or is it more like a hobbyist group?

In public, they try to promote the idea that they are some kind of professional group with Code of Conduct gaslighting. As long as they try to fool the public with these snobby statements about people, there is a compelling case for showing it is not such a credible group and the Debian pregnancy cluster helps understand the Code of Conduct gaslighting is nonsense.

One of the biggest reasons to prove the existence of the Debian pregnancy cluster is the dishonesty. The fact that some women were explicitly paid to lie is a strong proof of dishonesty.

Ultimately, I'm sure there are other women outside of these cult groups who really do have real examples of harassment and other crimes from time to time. If those women are to be believed and assisted, we need to pull the rug out from under the women who deliberately lie to get benefits for themselves and their DebConf orgy partners.

More blogs about the Debian pregnancy cluster

Please see the chronological history of how the Debian harassment and abuse culture evolved.

12 February, 2026 12:00PM

February 08, 2026

hackergotchi for Colin Watson

Colin Watson

Free software activity in January 2026

About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian’s services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python packaging

New upstream versions:

Fixes for Python 3.14:

Fixes for pytest 9:

Porting away from the deprecated pkg_resources:

Other build/test failures:

I investigated several more build failures and suggested removing the packages in question:

Other bugs:

Other bits and pieces

Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream.

I contributed an ubuntu-dev-tools patch to stop recommending sudo.

I added forky support to the images used in Salsa CI pipelines.

I began working on getting a release candidate of groff 1.24.0 into experimental, though haven’t finished that yet.

I worked on some lower-priority security updates for OpenSSH.

Code reviews

08 February, 2026 07:30PM by Colin Watson

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

chronometre: A new package (pair) demo for R and Python

Both R and Python make it reasonably easy to work with compiled extensions. But how to access objects in one environment from the other and share state or (non-trivial) objects remains trickier. Recently (and while r-forge was ‘resting’ so we opened GitHub Discussions) a question was asked concerning R and Python object pointer exchange.

This led to a pretty decent discussion including arrow interchange demos (pretty ideal if dealing with data.frame-alike objects), but once the focus is on more ‘library-specific’ objects from a given (C or C++, say) library it is less clear what to do, or how involved it may get.

R has external pointers, and these make it feasible to instantiate the same object in Python. To demonstrate, I created a pair of (minimal) packages wrapping a lovely (small) class from the excellent spdlog library by Gabi Melman, and more specifically its adapted-for-R version (to avoid some R CMD check nags) in my RcppSpdlog package. It is essentially a nicer/fancier C++ version of the tic() and toc() timing scheme. When an object is instantiated, it ‘starts the clock’, and when we access it later it prints the time elapsed in microsecond resolution. In Modern C++ this takes little more than keeping an internal chrono object.

Which makes for a nice, small, yet specific object to pass to Python. So the R side of the package pair instantiates such an object and accesses its address. For different reasons, sending a ‘raw’ pointer across does not work so well, but a string with the address printed works fabulously (and is a paradigm used around other packages, so we did not invent this). Over on the Python side of the package pair, we then take this string representation and pass it to a little bit of pybind11 code to instantiate a new object. This can of course also expose the functionality of interest here, such as the ‘show time elapsed’ feature, either formatted or just numerically.
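
To make the hand-off a little more concrete, here is a minimal sketch of what the plain Python side of such a session might look like. It assumes the PyPI package imports as chronometre and exposes the Stopwatch class with the elapsed() and count() methods used in the demo below; the address string is purely illustrative and would have to come from a live R session holding the external pointer.

# Hypothetical sketch only: the address must come from a running R session
# (e.g. via xptr::xptr_address()); the literal value below is a placeholder.
import chronometre as ch

addr = "0x58adb5918510"     # placeholder address taken from the R session
sw = ch.Stopwatch(addr)     # re-instantiate the C++ stopwatch from its address
print(sw.elapsed())         # datetime.timedelta since the R side started the clock
print(sw.count())           # the same elapsed time as a plain float (seconds)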

And that is all that there is! Now this can be done from R as well thanks to reticulate as the demo() (also shown on the package README.md) shows:

> library(chronometre)
> demo("chronometre", ask=FALSE)


        demo(chronometre)
        ---- ~~~~~~~~~~~

> #!/usr/bin/env r
> 
> stopifnot("Demo requires 'reticulate'" = requireNamespace("reticulate", quietly=TRUE))

> stopifnot("Demo requires 'RcppSpdlog'" = requireNamespace("RcppSpdlog", quietly=TRUE))

> stopifnot("Demo requires 'xptr'" = requireNamespace("xptr", quietly=TRUE))

> library(reticulate)

> ## reticulate and Python in general these days really want a venv so we will use one,
> ## the default value is a location used locally; if needed create one
> ## check for existing virtualenv to use, or else set one up
> venvdir <- Sys.getenv("CHRONOMETRE_VENV", "/opt/venv/chronometre")

> if (dir.exists(venvdir)) {
+ >     use_virtualenv(venvdir, required = TRUE)
+ > } else {
+ >     ## create a virtual environment, but make it temporary
+ >     Sys.setenv(RETICULATE_VIRTUALENV_ROOT=tempdir())
+ >     virtualenv_create("r-reticulate-env")
+ >     virtualenv_install("r-reticulate-env", packages = c("chronometre"))
+ >     use_virtualenv("r-reticulate-env", required = TRUE)
+ > }


> sw <- RcppSpdlog::get_stopwatch()                   # we use a C++ struct as example

> Sys.sleep(0.5)                                      # imagine doing some code here

> print(sw)                                           # stopwatch shows elapsed time
0.501220 

> xptr::is_xptr(sw)                                   # this is an external pointer in R
[1] TRUE

> xptr::xptr_address(sw)                              # get address, format is "0x...."
[1] "0x58adb5918510"

> sw2 <- xptr::new_xptr(xptr::xptr_address(sw))       # cloned (!!) but unclassed

> attr(sw2, "class") <- c("stopwatch", "externalptr") # class it .. and then use it!

> print(sw2)                                          # `xptr` allows us close and use
0.501597 

> sw3 <- ch$Stopwatch(  xptr::xptr_address(sw) )      # new Python object via string ctor

> print(sw3$elapsed())                                # shows output via Python I/O
datetime.timedelta(microseconds=502013)

> cat(sw3$count(), "\n")                              # shows double
0.502657 

> print(sw)                                           # object still works in R
0.502721 
> 

The same object, instantiated in R, is used in Python and thereafter again in R. While this object here is minimal in features, the concept of passing a pointer is universal. We could use it for any interesting object that R can access and Python too can instantiate. Obviously, there be dragons when passing pointers, so one may want to ascertain that headers from corresponding compatible versions are used, etc., but the principle is unaffected and should just work.

Both parts of this pair of packages are now at the corresponding repositories: PyPI and CRAN. As I commonly do here on package (change) announcements, I include the (minimal so far) set of high-level changes for the R package.

Changes in version 0.0.2 (2026-02-05)

  • Removed replaced unconditional virtualenv use in demo given preceding conditional block

  • Updated README.md with badges and an updated demo

Changes in version 0.0.1 (2026-01-25)

  • Initial version and CRAN upload

Questions, suggestions, bug reports, … are welcome at either the (now awoken from the R-Forge slumber) Rcpp mailing list or the newer Rcpp Discussions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 February, 2026 05:11PM

Vincent Bernat

Fragments of an adolescent web

I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now compose, three decades later, the chronicle of a vanished era.1

The word “blog” does not exist yet. Wikipedia has yet to come. Google has not been born. AltaVista reigns over searches, while already struggling to embrace the nascent immensity of the web2. To meet someone, you had to agree in advance and prepare your route on paper maps. 🗺️

The web is taking off. The CSS specification has just emerged, HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and videos, forcing browsers to arm themselves with plugins. Netscape Navigator sits on 86% of the territory, but Windows 95 now bundles Internet Explorer to quickly catch up. Facing this offensive, Netscape open-sources its browser.

France falls behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering a phone directory, train tickets, remote shopping. This was not yet possible with the Internet: buying a CD online was a pipe dream. Encryption suffers from inappropriate regulation: the DES algorithm is capped at 40 bits and cracked in a few seconds.

These pages bear the trace of the web’s adolescence. Thirty years have passed. The same battles continue: data selling, advertising, monopolies.


  1. Most articles linked here are not translated from French to English. ↩︎

  2. I recently noticed that Google no longer fully indexes my blog. For example, it is no longer possible to find the article on lanĉo. I assume this is a consequence of the explosion of AI-generated content or a change in priorities for Google. ↩︎

08 February, 2026 02:51PM by Vincent Bernat

Thorsten Alteholz

My Debian Activities in January 2026

Debian LTS/ELTS

This was the hundred-and-thirty-ninth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian (as the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities).

During my allocated time I uploaded or worked on:

  • [DLA 4449-1] zvbi security update to fix five CVEs related to uninitialized pointers and integer overflows.
  • [DLA 4450-1] taglib security update to fix one CVE related to a segmentation violation.
  • [DLA 4451-1] shapelib security update to fix one CVE related to a double free.
  • [DLA 4454-1] libuev security update to fix one CVE related to a buffer overrun.
  • [ELA-1620-1] zvbi security update to fix five CVEs in Buster and Stretch related to uninitialized pointers and integer overflows.
  • [ELA-1621-1] taglib security update to fix one CVE in Buster and Stretch related to a segmentation violation.
  • [#1126167] bookworm-pu bug for zvbi to fix five CVEs in Bookworm.
  • [#1126273] bookworm-pu bug for taglib to fix one CVE in Bookworm.
  • [#1126370] bookworm-pu bug for libuev to fix one CVE in Bookworm.

I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs had been postponed for a long time even though their CVSS scores were rather high. I wonder whether one should pay more attention to such postponed issues; otherwise one could just as well have marked them as ignored right away.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Lomiri

This month I worked on unifying the packaging on Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used.

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

This month I uploaded a new upstream version or a bugfix version of:

Unfortunately this month I was distracted from my normal Debian work by other unpleasant things, so the paragraphs above are mostly empty. I now have to think about how much of my spare time I am able to dedicate to Debian in the future.

08 February, 2026 01:25PM by alteholz

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data, 2025 edition

Another year of data from Société de Transport de Montréal, Montreal's transit agency!

A few highlights this year:

  1. Although the Saint-Michel station closed for emergency repairs in November 2024, traffic never bounced back to its pre-closure levels and is still stuck somewhere around 2022 Q2 levels. I wonder if this could be caused by the roadwork on Jean-Talon for the new Blue Line stations making it harder for folks in Montreal-Nord to reach the station by bus.

  2. The opening of the Royalmount shopping center has had a durable impact on the traffic at the De la Savane station. I reported on this last year, but it seems this wasn't just a fad.

  3. With the completion of the Deux-Montagnes branch of the Réseau express métropolitain (REM, a light-rail, above-ground transit network still under construction), the transfer stations to the Montreal subway have seen major traffic increases. The Édouard-Montpetit station has nearly reached its previous all-time record of 2015 and the McGill station has recovered from the general slump all the other stations have had in 2025.

  4. The Assomption station, which used to have one of the lowest ridership numbers on the subway network, has seen tremendous growth in the past few years. This is mostly explained by the many high-rise projects that were built around the station since the end of the COVID-19 pandemic.

  5. Although still affected by very high seasonality, the Jean-Drapeau station broke its previous record of 2019, a testament to the continued drawing power of the various summer festivals taking place on the Sainte-Hélène and Notre-Dame islands.

More generally, it seems the Montreal subway has had a pretty bad year. Traffic had been slowly climbing back since the COVID-19 pandemic, but this is the first year since 2020 in which such a sharp decline can be witnessed. Even major stations like Jean-Talon or Lionel-Groulx are on a downward trend, and it is pretty worrisome.

As for causes, a few things come to mind. First of all, as the number of Montrealers commuting to work by bike continues to rise1, a modal shift from public transit to active mobility is to be expected. As local experts put it, this is not uncommon and has been seen in other cities before.

Another important factor that certainly turned people away from the subway this year has been the impacts of the continued housing crisis in Montreal. As more and more people get kicked out of their apartments, many have been seeking refuge in the subway stations to find shelter.

Sadly, this also brought an unprecedented wave of incivilities. As riders' sense of security sharply decreased, the STM eventually resorted to banning unhoused people from sheltering in the subway. This decision did bring back some peace to the network, but one can posit that the damage had already been done and many casual riders are still avoiding the subway for this reason.

Finally, the weeks-long STM workers' strike in Q4 had an important impact on general traffic, as it severely reduced the opening hours of the subway. As with the previous item, once people find alternative ways to get around, it's always harder to bring them back.

Hopefully, my 2026 report will be a more cheerful one...

By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Licences


  1. Mostly thanks to major improvements to the cycling network and the BIXI bikesharing program. 

08 February, 2026 05:00AM by Louis-Philippe Véronneau

February 06, 2026

Reproducible Builds

Reproducible Builds in January 2026

Welcome to the first monthly report in 2026 from the Reproducible Builds project!

These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. Flathub now testing for reproducibility
  2. Reproducibility identifying projects that will fail to build in 2038
  3. Distribution work
  4. Tool development
  5. Two new academic papers
  6. Upstream patches

Flathub now testing for reproducibility

Flathub, the primary repository/app store for Flatpak-based applications, has begun checking for build reproducibility. According to a recent blog post:

We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool doing the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub. While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts.

The test results and status are available on their reproducible builds page.


Reproducibility identifying software projects that will fail to build in 2038

Longtime Reproducible Builds developer Bernhard M. Wiedemann posted on Reddit on “Y2K38 commemoration day T-12” — that is to say, twelve years to the day before Unix time will no longer fit into a signed 32-bit integer variable on 19th January 2038.

Bernhard’s comment succinctly outlines the problem, notes some of the potential remedies, and links to a discussion with the GCC developers regarding “adding warnings for int/time_t conversions”.

At the time of publication, Bernhard’s topic had generated 50 comments in response.


Distribution work

Conda is a language-agnostic package manager which was originally developed to help Python data scientists and is now a popular package manager for Python and R.

conda-forge, a community-led packaging infrastructure for Conda, recently revamped their dashboards that rebuild packages in order to track reproducibility. There have been changes over the past two years to make the conda-forge build tooling fully reproducible by embedding the ‘lockfile’ of the entire build environment inside the packages.


In Debian this month:


In NixOS this month, it was announced that the GNU Guix Full Source Bootstrap was ported to NixOS as part of Wire Jansen’s bachelor’s thesis (PDF). At the time of publication, this change has landed in the Nixpkgs stdenv.


Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.


Tool development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 310 and 311 to Debian.

  • Fix test compatibility with u-boot-tools version 2026-01. []
  • Drop the implied Rules-Requires-Root: no entry in debian/control. []
  • Bump Standards-Version to 4.7.3. []
  • Reference the Debian ocaml package instead of ocaml-nox. (#1125094)
  • Apply a patch by Jelle van der Waa to adjust a test fixture to match new lines. []
  • Also drop the implied Priority: optional from debian/control. []


In addition, Holger Levsen uploaded two versions of disorderfs, first updating the package from FUSE 2 to FUSE 3 as described in last month’s report, as well as updating the packaging to the latest Debian standards. A second upload (0.6.2-1) was subsequently made, with Holger adding instructions on how to add the upstream release to our release archive and incorporating changes by Roland Clobus to set _FILE_OFFSET_BITS on 32-bit platforms, fixing a build failure on 32-bit systems. Vagrant Cascadian updated diffoscope in GNU Guix to version 311-2-ge4ec97f7 and disorderfs to 0.6.2.


Two new academic papers

Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published a paper this month titled Docker Does Not Guarantee Reproducibility:

[…] While Docker is frequently cited in the literature as a tool that enables reproducibility in theory, the extent of its guarantees and limitations in practice remains under-explored. In this work, we address this gap through two complementary approaches. First, we conduct a systematic literature review to examine how Docker is framed in scientific discourse on reproducibility and to identify documented best practices for writing Dockerfiles enabling reproducible image building. Then, we perform a large-scale empirical study of 5,298 Docker builds collected from GitHub workflows. By rebuilding these images and comparing the results with their historical counterparts, we assess the real reproducibility of Docker images and evaluate the effectiveness of the best practices identified in the literature.

A PDF of their paper is available online.


Quentin Guilloteau, Antoine Waehren and Florina M. Ciorba of the University of Basel in Switzerland also published a Docker-related paper, theirs titled Longitudinal Study of the Software Environments Produced by Dockerfiles from Research Artifacts:

The reproducibility crisis has affected all scientific disciplines, including computer science (CS). To address this issue, the CS community has established artifact evaluation processes at conferences and in journals to evaluate the reproducibility of the results shared in publications. Authors are therefore required to share their artifacts with reviewers, including code, data, and the software environment necessary to reproduce the results. One method for sharing the software environment proposed by conferences and journals is to utilize container technologies such as Docker and Apptainer. However, these tools rely on non-reproducible tools, resulting in non-reproducible containers. In this paper, we present a tool and methodology to evaluate variations over time in software environments of container images derived from research artifacts. We also present initial results on a small set of Dockerfiles from the Euro-Par 2024 conference.

A PDF of their paper is available online.


Miscellaneous news

On our mailing list this month:

Lastly, kpcyrd added a Rust section to the Stable order for outputs page on our website. []


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 February, 2026 08:04PM

Birger Schacht

Status update, January 2026

January was a slow month; I only did three uploads to Debian unstable:

  • xdg-desktop-portal-wlr updated to 0.8.1-1
  • swayimg updated to 4.7-1
  • usbguard updated to 1.1.4+ds-2, which closed #1122733

I was very happy to see the new dfsg-new-queue and that there are more hands now processing the NEW queue. I also finally got one of the packages accepted that I uploaded after the Trixie release: wayback, which I uploaded last August. There has been another release since then; I’ll try to upload that in the next few days.

There was a bug report for carl asking for Windows support. carl used the xdg crate for looking up the XDG directories, but xdg does not support Windows systems (and it seems this will not change). The reporter also provided a PR to replace the dependency with the directories crate, which is more system-agnostic. I adapted the PR a bit, merged it and released version 0.6.0 of carl.

At my dayjob I refactored django-grouper. django-grouper is a package we use to find duplicate objects in our data. Our users often work with datasets of thousands of historical persons, places and institutions, and in projects that run over years and ingest data from multiple sources, it happens that entries are created several times. I wrote the initial app in 2024, but was never really happy about the approach I used back then. It was based on this blog post that describes how to group spreadsheet text cells. It used sklearn's TfidfVectorizer with a custom analyzer and the library sparse_dot_topn for creating the matrix. All in all, the module to calculate the clusters was 80 lines and, with sparse_dot_topn, it pulled in a rather niche Python library. I was pretty sure that this functionality could also be implemented with basic sklearn functionality, and it could: we are now using DictVectorizer, because in a Django app we are working with objects that can be mapped to dicts anyway. And for clustering the data, the app now uses the DBSCAN algorithm (with the manhattan distance as metric). The module is now only half the size and the whole app lost one dependency! I released those changes as version 0.3.0 of the app.
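
As a rough illustration of that approach (and not the actual django-grouper code), a minimal sketch with sklearn's DictVectorizer and DBSCAN using the manhattan metric could look like the following; the records as well as the eps and min_samples values are made up for the example.

# Hypothetical sketch of the clustering approach: objects mapped to dicts,
# one-hot encoded with DictVectorizer, then grouped with DBSCAN.
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction import DictVectorizer

# Toy records standing in for Django model instances mapped to dicts.
records = [
    {"name": "Ada Lovelace", "place": "London"},
    {"name": "Ada Lovelace", "place": "London"},    # exact duplicate
    {"name": "Ada Lovelace", "place": "Londres"},   # differs in one field
    {"name": "Charles Babbage", "place": "London"},
]

features = DictVectorizer(sparse=False).fit_transform(records)

# eps and min_samples are illustrative only and need tuning per dataset.
labels = DBSCAN(eps=0.5, min_samples=2, metric="manhattan").fit_predict(features)

# Records sharing a non-negative label are duplicate candidates; -1 marks
# records without a sufficiently close neighbour.
print(labels)    # e.g. [ 0  0 -1 -1]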

At the end of January together with friends I went to Brussels to attend FOSDEM. We took the night train but there were a couple of broken down trains so the ride took 26 hours instead of one night. It is a good thing we had a one day buffer and FOSDEM only started on Saturday. As usual there were too many talks to visit, so I’ll have to watch some of the recordings in the next few weeks.

Some examples of talks I found interesting so far:

06 February, 2026 05:28AM

February 05, 2026

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 2.3.3: Limited Rebirth

rfoaas greed example

The original FOAAS site provided a rather wide variety of REST access points, but it sadly is no more (while the old repo is still there). A newer replacement site FOASS is up and running, but with a somewhat reduced offering. (For example, the two accessors shown in the screenshot are no more. C’est la vie.)

Recognising that perfect may once again be the enemy of (somewhat) good (enough), we have rejigged the rfoaas package in a new release 2.3.3. (The preceding version number 2.3.2 corresponded to the upstream version, indicating which API release we matched. Now we just went ‘+ 0.0.1’ but there is no longer a correspondence to the service version at FOASS.)

Accessor functions for each of the now available access points are provided, and the random sampling accessor getRandomFO() now picks from that set.

My CRANberries service provides a comparison to the previous release. Questions, comments etc. should go to the GitHub issue tracker. More background information is on the project page as well as on the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

05 February, 2026 01:00AM

February 04, 2026

Swiss JuristGate

JuristGate is for sale: three billion Swiss francs for a domain name

When the share price of Credit Suisse, one of Switzerland's two top banks, went into its death spiral, the Swiss authorities had some discussions among themselves and the inevitable result of that discussion was the sale of Credit Suisse to their main rival, UBS.

The public were told that UBS had offered three billion Swiss francs to compensate the shareholders of Credit Suisse for their shares. Some people were sceptical about the method used to reach this valuation. Nonetheless, it was important for Swiss national pride.

When writing the reports on the JuristGate web site, I've attempted to operate with the highest level of accuracy and integrity. Nonetheless, everybody has their price. If somebody offered me three billion Swiss francs for the domain then I would probably do like Urban Angehrn and Birgit Rutishauser and take a lengthy garden leave.

In the meantime, to make up for the wayward support from Swiss legal insurance I've started a crowdfunding campaign to resolve one of the disputes relating to Debian. This is vital because we all do a lot of work for Debian and we are all entitled to equal recognition.

In a previous blog, I looked at the prompt and efficient manner in which Bernice is undertaking enforcement action to protect the public from rogue health practitioners in the State of Victoria.

Bernice generated a lot of news stories when she banned a social media influencer, Emily Lal, who promoted herself as The Authentic Birthkeeper. Lal had promoted wild birthing. A number of mothers and babies have died around the world when following the advice of social media influencers.

I don't know how much Bernice gets paid but I found a job vacancy for her deputy. The salary is in the range up to $290,000 per year. That is approximately CHF 160,000 Swiss francs.

As Bernice is a nurse and keyworkers never get paid what they are really worth I'm guessing her salary is not too much bigger than that.

The FINMA annual reports reveal the salaries for the CEO and executive team. Urban Angehrn's salary was CHF 602,000. It looks like FINMA pays their CEO three or four times what the State of Victoria pays Bernice.

We can compare their performance and see how many banning orders each of them has published:

FINMA (Swiss financial regulator) banning orders: they published 24 bans between 2018 and 2025. Some bans are not reported publicly.

Health Complaints Commissioner (Victoria) banning orders: Bernice has personally signed over 100 interim or permanent bans since taking office in July 2022.

This was Urban Angehrn's payout, CHF 581,000, when he resigned at the same time that FINMA published their anonymous decision about Parreaux, Thiebaud & Partners. Clients/victims were offered nothing.

Urban Angehrn, FINMA, garden leave, termination payment

 

Read more of the JuristGate reports.

04 February, 2026 06:30PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

littler 0.3.23 on CRAN: More Features (and Fixes)

max-heap image

The twenty-third release of littler as a CRAN package landed on CRAN just now, following the now twenty-year history (!!) of an (initially non-CRAN) package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only began to do in later years.

littler lives on Linux and Unix, has its difficulties on macOS due to some-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.

This release, the first in about eleven months, once again brings two new helper scripts, and enhances six existing ones. The release was triggered because it finally became clear why installGitHub.r ignored r2u when available: we forced the type argument to ‘source’ (so thanks to Iñaki for spotting this). One change was once again contributed by Michael, which is again greatly appreciated.

The full change description follows.

Changes in littler version 0.3.23 (2026-02-03)

  • Changes in examples scripts

    • A new script busybees.r aggregates deadlined packages by maintainer

    • Several small updates have been made to the (mostly internal) 'r2u.r' script

    • The deadliners.r script has refined treatment for screen width

    • The install2.r script has new options --quiet and --verbose as proposed by Zivan Karaman

    • The rcc.r script passes build-args to 'rcmdcheck' to compact vignettes and save data

    • The installRub.r script now defaults to 'noble' and is more tolerant of inputs

    • The installRub.r script deals correctly with empty utils::osVersion thanks to Michael Chirico

    • New script checkPackageUrls.r inspired by how CRAN checks (with thanks to Kurt Hornik for the hint)

    • The installGithub.r script now adjusts to bspm and takes advantage of r2u binaries for its build dependencies

  • Changes in package

    • Environment variables (read at build time) can use double quotes

    • Continuous integration scripts received a minor update

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

04 February, 2026 12:37PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in January 2026

04 February, 2026 11:39AM by Ben Hutchings

February 03, 2026

hackergotchi for Jonathan Dowland

Jonathan Dowland

FOSDEM 2026 talk recording available

FOSDEM 2026 was great! I hope to blog a proper postmortem in due course. But for now, the video of my talk is up, as are my slides with speaker notes and links.

03 February, 2026 04:21PM

February 01, 2026

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

What do people do when they edit Wikipedia through Tor?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects. This post is closely based on a previously published post by Kaylea Champion on the Community Data Science Blog.

Many individuals use Tor to reduce their visibility to widespread internet surveillance.

One popular approach to protecting our privacy online is to use the Tor network. Tor protects users from being identified by their IP address, which can be tied to a physical location. However, if you’d like to contribute to Wikipedia using Tor, you’ll run into a problem. Although most IP addresses can edit without an account, Tor users are blocked from editing.

Tor users attempting to contribute to Wikipedia are shown a screen that informs them that they are not allowed to edit Wikipedia.

Other research by my team has shown that Wikipedia’s attempt to block Tor is imperfect and that some people have been able to edit despite the ban. As part of this work, we built a dataset of more than 11,000 contributions to Wikipedia via Tor and used quantitative analysis to show that contributions from Tor were of about the same quality as contributions from other new editors and other contributors without accounts. Of course, given the unusual circumstances Tor-based contributors face, we wondered whether a deeper look at the content of their edits might reveal more about their motives and the kinds of contributions they seek to make. Kaylea Champion (then a student, now faculty at UW Bothell) led a qualitative investigation to explore these questions.

Given the challenges of studying anonymity seekers, we designed a novel “forensic” qualitative approach inspired by techniques common in computer security and criminal investigation. We applied this new technique to a sample of 500 editing sessions and categorized each session based on what the editor seemed to be intending to do.

Most of the contributions we found fell into one of the two following categories:

  • Many contributions were quotidian attempts to add to the encyclopedia. Tor-based editors added facts, fixed typos, and updated train schedules. There’s no way to know whether these individuals knew they were just getting lucky in their ability to edit or were patiently reloading to evade the ban.
  • Second, we found harassing comments and vandalism. Unwelcome conduct is common in online environments, and it is sometimes more common when the likelihood of being identified decreases. Some of the harassing comments we observed were direct responses to being banned as a Tor user.

Although these were most of what we observed, we also found evidence of several types of contributor intent:

  • We observed activism: when a contributor tried to draw attention to journalistic accounts of environmental and human rights abuses committed by a mining company, only to have editors linked to the mining company repeatedly remove their edits. Another example involved an editor trying to diminish the influence of proponents of alternative medicine.
  • We also observed quality maintenance activities when editors used Wikipedia’s rules on appropriate sourcing to remove personal websites being cited in conspiracy theories.
  • We saw edit wars with Tor editors participating in a back-and-forth removal and replacement of content as part of a dispute, in some cases countering the work of an experienced Wikipedia editor whom even other experienced editors had gauged to be biased.
  • Finally, we saw Tor-based editors participating in non-article discussions, such as investigations into administrator misconduct and protests against the Wikipedia platform’s mistrust of Tor editors.
An exploratory mapping of our themes in terms of the value a type of contribution represents to the Wikipedia community and the importance of anonymity in facilitating it. Anonymity-protecting tools play a critical role in facilitating contributions on the right side of the figure, while edits on the left are more likely to occur even when anonymity is impossible. Contributions toward the top reflect valuable forms of participation in Wikipedia, while edits at the bottom reflect damage.

In all, these themes led us to reflect on how the risks individuals face when contributing to online communities sometimes diverge from the risks the communities face in accepting their work. Expressing minoritized perspectives, maintaining community standards even when you may be targeted by the rulebreaker, highlighting injustice, or acting as a whistleblower can be very risky for an individual, and may not be possible without privacy protections. Of course, in platforms seeking to support the public good, such knowledge and accountability may be crucial.


This work was published as a paper at CSCW: Kaylea Champion, Nora McDonald, Stephanie Bankes, Joseph Zhang, Rachel Greenstadt, Andrea Forte, and Benjamin Mako Hill. 2019. A Forensic Qualitative Analysis of Contributions to Wikipedia from Anonymity Seeking Users. Proceedings of the ACM on Human-Computer Interaction. 3, CSCW, Article 53 (November 2019), 26 pages. https://doi.org/10.1145/3359155

This project was conducted by Kaylea Champion, Nora McDonald, Stephanie Bankes, Joseph Zhang, Rachel Greenstadt, Andrea Forte, and Benjamin Mako Hill. This work was supported by the National Science Foundation (awards CNS-1703736 and CNS-1703049) and included the work of two undergraduates supported through an NSF REU supplement.

01 February, 2026 12:19PM by Benjamin Mako Hill

hackergotchi for Junichi Uekawa

Junichi Uekawa

Got rid of documents I had for last year's Tax return.

Got rid of documents I had for last year's tax return. Now I have the fewest documents in my bookshelf that I will have all year.

01 February, 2026 05:54AM by Junichi Uekawa

Russ Allbery

Review: Paladin's Faith

Review: Paladin's Faith, by T. Kingfisher

Series: The Saint of Steel #4
Publisher: Red Wombat Studio
Copyright: 2023
ISBN: 1-61450-614-0
Format: Kindle
Pages: 515

Paladin's Faith is the fourth book in T. Kingfisher's loosely connected series of fantasy novels about the berserker former paladins of the Saint of Steel. You could read this as a standalone, but there are numerous (spoilery) references to the previous books in the series.

Marguerite, who was central to the plot of the first book in the series, Paladin's Grace, is a spy with a problem. An internal power struggle in the Red Sail, the organization that she's been working for, has left her a target. She has a plan for how to break their power sufficiently that they will hopefully leave her alone, but to pull it off she's going to need help. As the story opens, she is working to acquire that help in a very Marguerite sort of way: breaking into the office of Bishop Beartongue of the Temple of the White Rat.

The Red Sail, the powerful merchant organization Marguerite worked for, makes their money in the salt trade. Marguerite has learned that someone invented a cheap and reproducible way to extract salt from sea water, thus making the salt trade irrelevant. The Red Sail wants to ensure that invention never sees the light of day, and has forced the artificer into hiding. Marguerite doesn't know where they are, but she knows where she can find out: the Court of Smoke, where the artificer has a patron.

Having grown up in Anuket City, Marguerite was familiar with many clockwork creations, not to mention all the ways that they could go horribly wrong. (Ninety-nine times out of a hundred, it was an explosion. The hundredth time, it ran amok and stabbed innocent bystanders, and the artificer would be left standing there saying, "But I had to put blades on it, or how would it rake the leaves?" while the gutters filled up with blood.)

All Marguerite needs to put her plan into motion is some bodyguards so that she's not constantly distracted and anxious about being assassinated. Readers of this series will be unsurprised to learn that the bodyguards she asks Beartongue for are paladins, including a large broody male one with serious self-esteem problems.

This is, like the other books in this series, a slow-burn romance with infuriating communication problems and a male protagonist who would do well to seek out a sack of hammers as a mentor. However, it has two things going for it that most books in this series do not: a long and complex plot to which the romance takes a back seat, and Marguerite, who is not particularly interested in playing along with the expected romance developments. There are also two main paladins in this story, not just one, and the other is one of the two female paladins of the Saint of Steel and rather more entertaining than Shane.

I generally like court intrigue stories, which is what fills most of this book. Marguerite is an experienced operative, so the reader gets some solid competence porn, and the paladins are fish out of water but are also unexpectedly dangerous, which adds both comedy and satisfying table-turning. I thoroughly enjoyed the maneuvering and the culture clashes. Marguerite is very good at what she does, knows it, and is entirely uninterested in other people's opinions about that, which short-circuits a lot of Shane's most annoying behavior and keeps the story from devolving into mopey angst like some of the books in this series have done.

The end of this book takes the plot in a different direction that adds significantly to the world-building, but also has a (thankfully short) depths of despair segment that I endured rather than enjoyed. I am not really in the mood for bleak hopelessness in my fiction at the moment, even if the reader is fairly sure it will be temporary. But apart from that, I thoroughly enjoyed this book from beginning to end. When we finally meet the artificer, they are an absolute delight in that way that Kingfisher is so good at. The whole story is infused with the sense of determined and competent people refusing to stop trying to fix problems. As usual, the romance was not for me and I think the book would have been better without it, but it's less central to the plot and therefore annoyed me less than any of the books in this series so far.

My one major complaint is the lack of gnoles, but we get some new and intriguing world-building to make up for it, along with a setup for a fifth book that I am now extremely curious about.

By this point in the series, you probably know if you like the general formula. Compared to the previous book, Paladin's Hope, I thought Paladin's Faith was much stronger and more interesting, but it's clearly of the same type. If, like me, you like the plots but not the romance, the plot here is more substantial. You will have to decide if that makes up for a romance in the typical T. Kingfisher configuration.

Personally, I enjoyed this quite a bit, except for the short bleak part, and I'm back to eagerly awaiting the next book in the series.

Rating: 8 out of 10

01 February, 2026 04:54AM

January 31, 2026

hackergotchi for Joey Hess

Joey Hess

the local weather

Snow coming. I'm tuned into the local 24 hour slop weather stream. AI generated, narrated, up to the minute radar and forecast graphics. People popping up on the live weather map with questions "snow soon?" (They pay for the privilege.) LLM generating reply that riffs on their name. Tuned to keep the urgency up, something is always happening somewhere, scanners are pulling the police reports, live webcam description models add verisimilitude to the description of the morning commute. Weather is happening.

In the subtext, climate change is happening. Weather is a growth industry. The guy up in Kentucky coal country who put this thing together is building an empire. He started as just another local news greenscreener. Dropped out and went twitch weather stream. Hyping up tornado days and dicey snow forecasts. Nowcasting, hyper individualized, interacting with chat. Now he's automated it all. On big days when he's getting real views, the bot breaks into his live streams, gives him a break.

Only a few thousand watching this morning yet. Perfect 2026 grade slop. Details never quite right, but close enough to keep on in the background all day. Nobody expects a perfect forecast after all, and it's fed from the National Weather Center discussion too. We still fund those guys? Why bother when a bot can do it?

He knows why he's big in these states, these rural areas. Understands the target audience. Airbrushed AI aesthetics are ok with them, receive no pushback. Flying more under the radar coastally, but weather is big there and getting bigger. The local weather will come for us all.

6 inches of snow covering some ground mount solar panels with a vertical solar panel fence behind them free of snow except cute little caps

(Not fiction FYI.)

31 January, 2026 06:25PM

hackergotchi for Michael Prokop

Michael Prokop

apt, SHA-1 keys + 2026-02-01

You might have seen Policy will reject signature within a year warnings in apt(-get) update runs like this:

root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Get:5 http://foo.example.org/debian demo/main amd64 Packages [1097 B]
Fetched 5326 B in 0s (43.2 kB/s)
All packages are up to date.
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details

root@424812bd4556:/# apt --audit update
Hit:1 http://foo.example.org/debian demo InRelease
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
All packages are up to date.    
Warning:  http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
Audit:  http://foo.example.org/debian/dists/demo/InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
   Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
              No binding signature at time 2024-06-19T10:33:47Z
     because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
     because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
Audit: The sources.list(5) entry for 'http://foo.example.org/debian' should be upgraded to deb822 .sources
Audit: Missing Signed-By in the sources.list(5) entry for 'http://foo.example.org/debian'
Audit: Consider migrating all sources.list(5) entries to the deb822 .sources format
Audit: The deb822 .sources format supports both embedded as well as external OpenPGP keys
Audit: See apt-secure(8) for best practices in configuring repository signing.
Audit: Some sources can be modernized. Run 'apt modernize-sources' to do so.

If you ignored this for the last year, I would like to tell you that 2026-02-01 is not that far away (hello from the past if you’re reading this because you’re already affected).

Let’s simulate the future:

root@424812bd4556:/# apt --update -y install faketime
[...]
root@424812bd4556:/# export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1 FAKETIME="2026-08-29 23:42:11" 
root@424812bd4556:/# date
Sat Aug 29 23:42:11 UTC 2026

root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease                                 
Err:1 http://foo.example.org/debian demo InRelease
  Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
Warning: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. OpenPGP signature verification failed: http://foo.example.org/debian demo InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
root@424812bd4556:/# echo $?
100

Now, the proper solution would have been to fix the signing key underneath (via e.g. sq cert lint --fix --cert-file $PRIVAT_KEY_FILE > $PRIVAT_KEY_FILE-fixed).

If you don’t have access to the according private key (e.g. when using an upstream repository that has been ignoring this issue), you’re out of luck for a proper fix.

But there’s a workaround for the apt situation (related see apt commit 0989275c2f7afb7a5f7698a096664a1035118ebf):

root@424812bd4556:/# cat /usr/share/apt/default-sequoia.config
# Default APT Sequoia configuration. To overwrite, consider copying this
# to /etc/crypto-policies/back-ends/apt-sequoia.config and modify the
# desired values.
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01

[hash_algorithms]
sha1.second_preimage_resistance = 2026-02-01    # Extend the expiry for legacy repositories
sha224 = 2026-02-01

[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Adjust this according to your needs:

root@424812bd4556:/# mkdir -p /etc/crypto-policies/back-ends/

root@424812bd4556:/# cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config

root@424812bd4556:/# $EDITOR /etc/crypto-policies/back-ends/apt-sequoia.config

root@424812bd4556:/# cat /etc/crypto-policies/back-ends/apt-sequoia.config
# APT Sequoia override configuration
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01

[hash_algorithms]
sha1.second_preimage_resistance = 2026-09-01    # Extend the expiry for legacy repositories
sha224 = 2026-09-01

[packets]
signature.v3 = 2026-02-01   # Extend the expiry

Then we’re back into the original situation, being a warning instead of an error:

root@424812bd4556:/# apt update
Hit:1 http://deb.debian.org/debian trixie InRelease
Get:2 http://foo.example.org/debian demo InRelease [4229 B]
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
[..]

Please note that this is a workaround, and not a proper solution.
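
Once the repository owner re-signs the archive with a modern key, you can simply drop the local override again so the stricter defaults apply (a minimal sketch, using the path from above):

# Remove the local override; apt then falls back to the defaults shipped
# in /usr/share/apt/default-sequoia.config.
rm /etc/crypto-policies/back-ends/apt-sequoia.config
apt update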

31 January, 2026 01:57PM by mika

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Dialogue

Me: Do you want your coffee in a Japanese or Western style tea cup?

M: Yunomi.

Me: Apparently not as well as you think I do!

31 January, 2026 10:59AM by Benjamin Mako Hill

Russ Allbery

Review: Dragon Pearl

Review: Dragon Pearl, by Yoon Ha Lee

Series: Thousand Worlds #1
Publisher: Rick Riordan Presents
Copyright: 2019
ISBN: 1-368-01519-0
Format: Kindle
Pages: 315

Dragon Pearl is a middle-grade space fantasy based on Korean mythology and the first book of a series.

Min is a fourteen-year-old girl living on the barely-terraformed world of Jinju with her extended family. Her older brother Jun passed the entrance exams for the Academy and left to join the Thousand Worlds Space Forces, and Min is counting the years until she can do the same. Those plans are thrown into turmoil when an official investigator appears at their door claiming that Jun deserted to search for the Dragon Pearl. A series of impulsive fourteen-year-old decisions lead to Min heading for a spaceport alone, determined to find her brother and prove his innocence.

This would be a rather improbable quest for a young girl, but Min is a gumiho, one of the supernaturals who live in the Thousand Worlds alongside non-magical humans. Unlike the more respectable dragons, tigers, goblins, and shamans, gumiho are viewed with suspicion and distrust because their powers are useful for deception. They are natural shapeshifters who can copy the shapes of others, and their Charm ability lets them influence people's thoughts and create temporary illusions of objects such as ID cards. It will take all of Min's powers, and some rather lucky coincidences, to infiltrate the Space Forces and determine what happened to her brother.

It's common for reviews of this book to open with a caution that this is a middle-grade adventure novel and you should not expect a story like Ninefox Gambit. I will be boring and repeat that caution. Dragon Pearl has a single first-person viewpoint and a very linear and straightforward plot. Adult readers are unlikely to be surprised by plot twists; the fun is the world-building and seeing how Min manages to work around plot obstacles.

The world-building is enjoyable but not very rigorous. Min uses and abuses Charm with the creative intensity of a Dungeons & Dragons min-maxer. Each individual event makes sense given the implication that Min is unusually powerful, but I'm dubious about the surrounding society and lack of protections against Charm given what Min is able to do. Min does say that gumiho are rare and many people think they're extinct, which is a bit of a fig leaf, but you'll need to bring your urban fantasy suspension of disbelief skills to this one.

I did like that the world-building conceit went more than skin deep and influenced every part of the world. There are ghosts who are critical to the plot. Terraforming is done through magic, hence the quest for the Dragon Pearl and the miserable state of Min's home planet due to its loss. Medical treatment involves the body's meridians, as does engineering: The starships have meridians similar to those of humans, and engineers partly merge with those meridians to adjust them. This is not the sort of book that tries to build rigorous scientific theories or explain them to the reader, and I'm not sure everything would hang together if you poked at it too hard, but Min isn't interested in doing that poking and the story doesn't try to justify itself. It's mostly a vibe, but it's a vibe that I enjoyed and that is rather different than other space fantasy I've read.

The characters were okay but never quite clicked for me, in part because proper character exploration would have required Min take a detour from her quest to find her brother and that was not going to happen. The reader gets occasional glimpses of a military SF cadet story and a friendship on false premises story, but neither have time to breathe because Min drops any entanglement that gets in the way of her quest. She's almost amoral in a way that I found believable but not quite aligned with my reading mood. I also felt a bit wrong-footed by how her friendships developed; saying too much more would be a spoiler, but I was expecting more human connection than I got.

I think my primary disappointment with this book was something I knew going in, not in any way its fault, and part of the reason why I'd put off reading it: This is pitched at young teenagers and didn't have quite enough plot and characterization complexity to satisfy me. It's a linear, somewhat episodic adventure story with some neat world-building, and it therefore glides over the spots where an adult novel would have added political and factional complexity. That is exactly as advertised, so it's up to you whether that's the book you're in the mood for.

One warning: The text of this book opens with an introduction by Rick Riordan that is just fluff marketing and that spoils the first few chapters of the book. It is unmarked as such at the beginning and tricked me into thinking it was the start of the book proper, and then deeply annoyed me. If you do read this book, I recommend skipping the utterly pointless introduction and going straight to chapter one.

Followed by Tiger Honor.

Rating: 6 out of 10

31 January, 2026 05:26AM

January 30, 2026

Swiss JuristGate

MIT DEDP MicroMasters online learner's blog post about cover-up linked to resignation of Swiss financial regulator

MIT's DEDP MicroMasters is about Data, Economics and Development Policy. It was recently renamed Data, Economics & Design of Policy, although the focus remains on developing countries, not rich countries like Switzerland. From time to time, I've seen people asking what a MicroMasters certificate is really worth. Ironically, that would be a great question for an MIT economist to answer.

In one of the modules, 14.750x Political Economy, Prof Ben Olken begins by asking the online learners to read the papers Hit or Miss? The Effect of Assassinations on Institutions and War and Do leaders matter? National leadership and growth since world war II. The above-average rate of assassinations for people in this job category is used to give us insights. There is nothing in the paper about leaders kidnapped by President Trump. Trump himself was nearly assassinated before his own return to office. The paper is available online.

Benjamin Jones, Benjamin Olken, Hit or Miss? The Effect of Assassinations on Institutions and War

 

In earlier blogs, we were able to prove the anonymous document on the FINMA web site relates to Parreaux, Thiébaud & Partners, Switzerland's "Law Firm X". From there, we were able to prove that FINMA, the regulatory authority knew about it for some years and the Geneva bar association also seemed to know for a long time.

In the world of abuse, it seems that priests knew some of their colleagues were paedophiles but they have a code of conduct, the Crimen Sollicitationis which prevented priests from warning the public about their own colleagues. The Swiss jurists from the bar association and officials from FINMA appeared to be operating from the same playbook. Those who knew about the scandal didn't tell the clients. They put the reputation of their profession and the privacy of rogue colleagues ahead of the interests of public safety and justice.

We then went a step further, showing that FINMA participated in a cover-up. Reports on this web site meticulously reverse-engineer their cover-up tactics.

From the moment the illegal legal insurance launched in 2018, Birgit Rutishauser had been head of FINMA's surveillance of the insurance industry.

Rutishauser is a graduate of ETH Zurich. Many of the attacks on my family revolve around the death of Adrian von Bidder-Senn on our wedding day. He was also an ETH Zurich graduate like Rutishauser. Just as the priests and the jurists have sought to maintain strict silence about the wrongdoing of colleagues, it seems the ETH Zurich alumni are maintaining silence. After all, Adrian von Bidder-Senn's wife obtained a PhD in cybersecurity from the same institution but looking at the last email she sent to the Debianists, we can see that Dr Diana von Bidder-Senn failed to realize the extent to which her husband was a victim of social engineering. Dr von Bidder-Senn is now the mayor of Basel.

When Rutishauser was appointed in 2018, FINMA published the following comments about her background:

The 46-year-old mathematician and actuary has managed the Risk Management section of the Insurance division since June 2016. Prior to joining FINMA, Birgit Rutishauser spent many years working in a variety of management roles within the insurance sector, most recently as Chief Underwriting & Risk Management Officer at Nationale Suisse and, as such, a member of the Group Executive Management Board. Birgit Rutishauser is a Swiss citizen.

Rutishauser had also worked for Zurich insurance, the same company where Urban Angehrn had worked prior to joining FINMA.

By comparison, I am a holder of the MIT MicroMasters diploma in Data, Economics & Development Policy (DEDP). Well, I also worked for a few banks too. UBS in Zurich provided a reference letter:

Daniel Pocock, UBS, Zurich, cybersecurity

 

Competitors and cyberbullies have spent an enormous amount of effort trying to trick people to believe that I'm a mentally ill person who only pretends to be a developer:

Subject: Re: Open Letter to Debian election candidates about Debian vendettas
Date: Sun, 20 Mar 2022 21:00:02 +0900
From: Hideki Yamane <henrich@iijmio-mail.jp>
To: Daniel Pocock <daniel@pocock.pro>
CC: debian-devel@lists.debian.org

Hi,

 [ ... snip gaslighting ... ]

 Before talking, you should get counseling for a while since it seems that
 you have some cognitive troubles now. You'd be better to hear about your
 opinion and current your mind status from professional 3rd parties, not Debian.
 (If they say you're very healthy and good, then that's good. Don't you think so?)

 Without that, we cannot make a constructive conversation.
  As I said in my platform, "Be calm, stay cool, stay safe" - Hope you stay "cool"
 a bit with help from professionals, and you would to be able to a "contributor"
 to floss again.


 Life is short - to waste our time for fighting. Let's make more values for users.


-- 
Hideki Yamane <henrich@iijmio-mail.jp>

The rumours about mental illness were obviously falsified. Nonetheless, it is important to remember they started falsifying these things at a time when I lost two family members. Is it an example of cybertorture or is it simply an example of how rude these people are after spending too much time in social control media?

Is it possible that a mentally ill fake developer with a MicroMasters was able to write a blog post that brought down the Deputy CEO of the Swiss financial market regulator?

Rather than finding me to be insane, Switzerland granted me citizenship in the Canton of Vaud in November 2023:

Daniel Pocock, Switzerland

 

Early 2024, I came across a photo of Switzerland's attorney general visiting his peers at the Parquet National Financier in Paris, France to talk about cross-border crime. The illegal legal insurance was an example of cross-border crime because they recruited people from France to work for them and they promoted the insurance to French residents.

The meeting and photo have since been removed from the PNF web site but can be found in the Wayback Machine.

Meeting with Mr Stefan Blättler, Attorney General of the Swiss Confederation

On 13 February, the four heads of jurisdiction of the Paris judicial court received Mr Stefan Blättler, Attorney General of the Swiss Confederation.

These strategic exchanges made it possible to address questions of international mutual legal assistance and the fight against organised crime, terrorism, crimes against humanity and financial crime.

They were an opportunity to recall the shared determination of the French and Swiss judicial authorities to fight complex forms of crime, and to reaffirm the essential nature of a sustained and cordial dialogue between neighbouring countries that share a border and a long relationship.

Stefan Blättler, procureur général de la Confédération helvétique, PNF, Paris

 

I wrote to the PNF and sent them some of the documents. This was their reply:

PNF, Parreaux, Thiebaud & Partners, Matthieu Parreaux

 

The PNF, like me, suspects that a crime was committed under French law but for technical reasons, they require a French citizen to bring a formal complaint to a local prosecutor first. The law in France would see me as a witness to the crime.

Later in 2024, I acquired the domain name www.michaelmcgrath.ie, the former domain name of Michael McGrath, the EU Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection.

I used the EU Justice Commissioner's former domain name to publish information about the cross-border crime from the Swiss jurists.

Michael McGrath

 

The blogs published between January and March 2025 continue to build the case that FINMA not only knew about the illegal legal insurance but they also had a role in the cover-up.

The blog published on 8 March 2025, International Women's Day, considered the case of a French woman who was tricked to quit the job she had for seven years and come to work for the Swiss jurists in Geneva selling illegal legal insurance.

On 27 March 2025, I was discussing the case with an expert in Paris at the same time the BBC was interviewing former Archbishop of Canterbury Justin Welby about the cover-up of their jurist John Smyth QC. In the meeting in Paris, I was asked if there was a link between the paedophiles and the jurists. The BBC's report only appeared after the meeting and it prompted me to write a new blog post about the culture of cover-ups.

When famed whistleblower Trevor Kitchen exposed the Forex scandal, he didn't go as far as comparing anybody to a paedophile; nonetheless, they had him arrested in Portugal and tried to have him extradited back to Switzerland for criminal speech.

Finance chief who exposed currency scandal fights Swiss extradition bid for criminal defamation

...

Mr Kitchen, who worked as a financial controller for companies including Shell, Castrol and Black and Decker, lost his 700,000 franc pension due to the rigged currency fluctuations.

...

“Because I had specialised in finance all my life understanding policies and procedures in companies, I went and reported this to all the regulators,” he said.

Toby Cadman, head and co-founder of Guernica 37 International Justice Chambers, who is working pro bono for Mr Kitchen, said: “If he was in the UK, this would not even get past the issuing of a warrant, let alone the extradition process.

...

He was arrested by Portuguese police on January 19 and taken for questioning. “It was all maximum security stuff. Six policemen were around me and took me into a small room and told me to take all my clothes off. They threw me in prison for 48 hours all because of the words I used,” he said.

The Swiss embassy has been contacted for comment.

Yet Swiss authorities have made no such move against me. Without admitting or denying any of the allegations on this web site, Birgit Rutishauser's resignation was announced on 1 April 2025, barely two days after the strongest blog post about the complicity of the authorities in the cover-up.

First of all, FINMA published a notice about Rutishauser's resignation.

The same day, FINMA published a longer notice about their restructuring. The notice included a paragraph emphasizing that Birgit Rutishauser's departure was not part of the restructuring. In other words, there is some other specific reason she has chosen to tender her resignation and they are not going to give us any more detail about it.

FINMA adapts its organisation to meet future challenges

...

Independently of the introduction of the new organisational structure, Birgit Rutishauser, member of the Executive Board, Deputy CEO and Head of the Insurance division, has decided to leave FINMA. Vera Carspecken will assume the leadership of the Insurance division on an interim basis from 1 May 2025 (see separate press release of 1 April 2025).

Somebody with so many years of service would normally be entitled to a three-month notice period. The abrupt departure of Martin Senn from Zurich insurance was referred to as a factor in his suicide. However, it is also possible that Rutishauser was placed on an extended period of garden leave to prevent her from immediately being able to use confidential information acquired in the course of her duties.

Rutishauser's LinkedIn profile tells us she is now at StenFo, that is the fund for a Swiss nuclear dump.

All the waste from five decades of Swiss nuclear programs is currently stored in a temporary facility at Zwilag. If they ever try to build a permanent underground storage facility, there will be a referendum to stop it. For the time being, it just sits in the temporary facility:

Zwilag, Stenfo, Birgit Rutishauser

 

Birgit Rutishauser

 

President Putin frequently reminds the world about nuclear punishments. What we see here is the power of nuclear teamwork.

Fifty percent of Swiss nuclear fuel comes from Russia, twenty five percent from Canada and twenty five percent was mined in Australia, where I was born.

After leaving her post as second in command at FINMA, head of the insurance division, Rutishauser enrolled at IMD, a well known Swiss business school, where she was awarded a certificate in blockchain.

Birgit Rutishauser, IMD, Blockchain

 

While there are some companies doing completely legitimate projects with blockchain, it is ironic that the scammers who had been allowed to operate for years under FINMA have also pursued a new career path promoting cryptocurrency "investment" to victims in France.

I started the Software Freedom Institute in May 2021 in the middle of the pandemic. Many people were not working at all. I could have chosen not to work and claimed furlough payments from the government. Instead, by working, I was paying tax and contributing money to the social security system to support the rest of society.

I purchased and paid for various insurance services to protect the business, including legal services from Parreaux, Thiebaud & Partners (invoices on file). When IBM Red Hat attacked my business, the Swiss legal protection did nothing to help. The ADR Forum legal panel eventually gave a ruling against IBM Red Hat, declaring that I was a victim of harassment from a much larger company/competitor.

Likewise, after the conviction of Cardinal George Pell, rogue Debianists attacked my family with rumours about abuse and they attacked us again after I founded the Software Freedom Institute. The Swiss jurists did nothing to help. I went to Italy by myself to speak to the Carabinieri and after years of inquiries, the Cardinal died four hours after I filed a report about exploitation. Despite Switzerland's reputation for privacy, the Swiss jurists provided no help whatsoever to protect my family and I from these intrusions.

Justin Welby had to resign as Archbishop of Canterbury due to his failure to take action against the paedophile jurist John Smyth QC. On 30 March 2025, we published the comparison of Church of England, Crimen Sollicitationis and the cover-up at FINMA and barely two days later, Rutishauser's resignation was announced.

As it is Switzerland, nobody is holding their breath waiting for authorities to confirm the Deputy CEO of FINMA resigned because of the cover-up disclosed in the JuristGate reports. However, we can now try to answer the question we started with, what is the real value of the DEDP MicroMasters from MIT? Priceless.

Daniel Pocock, MIT, DEDP, MicroMasters

 

Read more of the JuristGate reports.

30 January, 2026 02:00PM

January 29, 2026

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Part 3: Building the Keystone – Dataproc Custom Images for Secure Boot & GPUs


In Part 1, we established a secure, proxy-only network. In Part 2, we
explored the enhanced install_gpu_driver.sh initialization
action. Now, in Part 3, we’ll focus on using the LLC-Technologies-Collier/custom-images
repository (branch proxy-exercise-2025-11) to build the
actual custom Dataproc images embedded with NVIDIA drivers signed for
Secure Boot, all within our proxied environment.

Why Custom Images?

To run NVIDIA GPUs on Shielded VMs with Secure Boot enabled, the
NVIDIA kernel modules must be signed with a key trusted by the VM’s EFI
firmware. Since standard Dataproc images don’t include these
custom-signed modules, we need to build our own. This process also
allows us to pre-install a full stack of GPU-accelerated software.

The custom-images Toolkit (examples/secure-boot)

The examples/secure-boot directory within the
custom-images repository contains the necessary scripts and
configurations, refined through significant development to handle proxy
and Secure Boot challenges.

Key Components & Development Insights:

  • env.json: The central configuration
    file (as used in Part 1) for project, network, proxy, and bucket
    details. This became the single source of truth to avoid configuration
    drift.
  • create-key-pair.sh: Manages the Secure
    Boot signing keys (PK, KEK, DB) in Google Secret Manager, essential for
    the module signing.
  • build-and-run-podman.sh: Orchestrates
    the image build process in an isolated Podman container. This was
    introduced to standardize the build environment and encapsulate
    dependencies, simplifying what the user needs to install locally.
  • pre-init.sh: Sets up the build
    environment within the container and calls
    generate_custom_image.py. It crucially passes metadata
    derived from env.json (like proxy settings and Secure Boot
    key secret names) to the temporary build VM.
  • generate_custom_image.py: The core
    Python script that automates GCE VM creation, runs the customization
    script, and creates the final GCE image.
  • gce-proxy-setup.sh: This script from
    startup_script/ is vital. It’s injected into the temporary
    build VM and runs first to configure the OS, package
    managers (apt, dnf), tools (curl, wget, GPG), Conda, and Java to use the
    proxy settings passed in the metadata. This ensures the entire build
    process is proxy-aware.
  • install_gpu_driver.sh: Used as the
    --customization-script within the build VM. As detailed in
    Part 2, this script handles the driver/CUDA/ML stack installation and
    signing, now able to function correctly due to the proxy setup by
    gce-proxy-setup.sh.

Layered Image Strategy:

The pre-init.sh script employs a layered approach:

  1. secure-boot Image: Base image with
    Secure Boot certificates injected.
  2. tf Image: Based on
    secure-boot, this image runs the full
    install_gpu_driver.sh within the proxy-configured build VM
    to install NVIDIA drivers, CUDA, ML libraries (TensorFlow, PyTorch,
    RAPIDS), and sign the modules. This is the primary target image for our
    use case.

(Note: secure-proxy and proxy-tf layers
were experiments, but the -tf image combined with runtime
metadata emerged as the most effective solution for 2.2-debian12).

Build Steps:

  1. Clone Repos & Configure
    env.json:
    Ensure you have the
    custom-images and cloud-dataproc repos and a
    complete env.json as described in Part 1.

  2. Run the Build:

    # Example: Build a 2.2-debian12 based image set
    # Run from the custom-images repository root
    bash examples/secure-boot/build-and-run-podman.sh 2.2-debian12

    This command will build the layered images, leveraging the proxy
    settings from env.json via the metadata injected into the
    build VM. Note the final image name produced (e.g.,
    dataproc-2-2-deb12-YYYYMMDD-HHMMSS-tf).
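
To sanity-check the result, you can list the images that were just created. The name filter below is only an illustration, based on the example image name above; adjust it to your own naming:

# Illustrative check only; the name pattern is an assumption.
gcloud compute images list \
  --filter="name~'dataproc-2-2-deb12.*-tf'" \
  --format="table(name,creationTimestamp,status)"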

Conclusion of Part 3

Through an iterative process, we’ve developed a robust workflow
within the custom-images repository to build Secure
Boot-compatible GPU images in a proxy-only environment. The key was
isolating the build in Podman, ensuring the build VM is fully
proxy-aware using gce-proxy-setup.sh, and leveraging the
enhanced install_gpu_driver.sh from Part 2.

In Part 4, we’ll bring it all together, deploying a Dataproc cluster
using this custom -tf image within the secure network, and
verifying the end-to-end functionality.

29 January, 2026 09:08AM by C.J. Collier

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (November and December 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Aquila Macedo (aquila)
  • Peter Blackman (peterb)
  • Kiran S Kunjumon (hacksk)
  • Ben Westover (bjw)

The following contributors were added as Debian Maintainers in the last two months:

  • Vladimir Petko
  • Antonin Delpeuch
  • Nadzeya Hutsko
  • Aryan Karamtoth
  • Carl Keinath
  • Richard Nelson

Congratulations!

29 January, 2026 09:00AM by Jean-Pierre Giraud

January 28, 2026

Antoine Beaupré

Qalculate hacks

This is going to be a controversial statement because some people are absolute nerds about this, but, I need to say it.

Qalculate is the best calculator that has ever been made.

I am not going to try to convince you of this, I just wanted to put out my bias out there before writing down those notes. I am a total fan.

This page will collect my notes of cool hacks I do with Qalculate. Most examples are copy-pasted from the command-line interface (qalc(1)), but I typically use the graphical interface as it's slightly better at displaying complex formulas. Discoverability is obviously also better for the cornucopia of features this fantastic application ships.

Qalc commandline primer

On Debian, Qalculate's CLI interface can be installed with:

apt install qalc

Then you start it with the qalc command, and end up on a prompt:

anarcat@angela:~$ qalc
> 

Then it's a normal calculator:

anarcat@angela:~$ qalc
> 1+1

  1 + 1 = 2

> 1/7

  1 / 7 ≈ 0.1429

> pi

  pi ≈ 3.142

> 

There's a bunch of variables to control display, approximation, and so on:

> set precision 6
> 1/7

  1 / 7 ≈ 0.142857
> set precision 20
> pi

  pi ≈ 3.1415926535897932385

When I need more, I typically browse around the menus. One big issue I have with Qalculate is there are a lot of menus and features. I had to fiddle quite a bit to figure out that set precision command above. I might add more examples here as I find them.

Bandwidth estimates

I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):

> 1 megabit/s * 30 day to gibibyte 

  (1 megabit/second) × (30 days) ≈ 301.7 GiB

Or, "how long will it take to download X", in this case, 1GiB over a 100 mbps link:

> 1GiB/(100 megabit/s)

  (1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s

Password entropy

To calculate how much entropy (in bits) a given password structure provides, you count the number of possibilities in each entry (say, [a-z] is 26 possibilities, "one word in an 8k dictionary" is 8000), take the base-2 logarithm, and multiply by the number of entries.

For example, a mixed-case alphabetic 14-character password is:

> log2(26*2)*14

  log₂(26 × 2) × 14 ≈ 79.81

... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:

> log2(8k)*x = 80

  (log₂(8 × 1000) × x) = 80 ≈

  x ≈ 6.170

... about 6 words, which gives you:

> log2(8k)*6

  log₂(8 × 1000) × 6 ≈ 77.79

78 bits of entropy.

Exchange rates

You can convert between currencies!

> 1 EUR to USD

  1 EUR ≈ 1.038 USD

Even fake ones!

> 1 BTC to USD

  1 BTC ≈ 96712 USD

This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:

It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y

As a reader pointed out, you can set the refresh rate for currencies, as some countries will require way more frequent exchange rates.

The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.

Other conversions

Here are other neat conversions extracted from my history

> teaspoon to ml

  teaspoon = 5 mL

> tablespoon to ml

  tablespoon = 15 mL

> 1 cup to ml 

  1 cup ≈ 236.6 mL

> 6 L/100km to mpg

  (6 liters) / (100 kilometers) ≈ 39.20 mpg

> 100 kph to mph

  100 kph ≈ 62.14 mph

> (108km - 72km) / 110km/h

  ((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
  19 min + 38.18 s

Completion time estimates

This is a more involved example I often do.

Background

Say you have started a long running copy job and you don't have the luxury of having a pipe you can insert pv(1) into to get a nice progress bar. For example, rsync or cp -R can have that problem (but not tar!).

(Yes, you can use --info=progress2 in rsync, but that estimate is incremental and therefore inaccurate unless you disable the incremental mode with --no-inc-recursive, but then you pay a huge up-front wait cost while the entire directory gets crawled.)

Extracting a process start time

First step is to gather data. Find the process start time. If you were unfortunate enough to forget to run date --iso-8601=seconds before starting, you can get a similar timestamp with stat(1) on the process tree in /proc with:

$ stat /proc/11232
  File: /proc/11232
  Size: 0               Blocks: 0          IO Block: 1024   directory
Device: 0,21    Inode: 57021       Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
 Birth: -

So our start time is 2025-02-07 15:50:25, we shave off the nanoseconds there, they're below our precision noise floor.

If you're not dealing with an actual UNIX process, you need to figure out a start time: this can be a SQL query, a network request, whatever, exercise for the reader.

Saving a variable

This is optional, but for the sake of demonstration, let's save this as a variable:

> start="2025-02-07 15:50:25"

  save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
  "2025-02-07T15:50:25"

Estimating data size

Next, estimate your data size. That will vary wildly with the job you're running: this can be anything: number of files, documents being processed, rows to be destroyed in a database, whatever. In this case, rsync tells me how many bytes it has transferred so far:

# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968  94%    7,63MB/s    6:04:58  xfr#464440, ir-chk=1000/982266) 

Strip off the weird dots in there, because that will confuse qalculate, which will count this as:

  2.968252503968 bytes ≈ 2.968 B

Or, essentially, three bytes. We actually transferred almost 3TB here:

  2968252503968 bytes ≈ 2.968 TB

So let's use that. If you had the misfortune of making rsync silent, but were lucky enough to transfer entire partitions, you can use df (without -h! we want to be more precise here), in my case:

Filesystem              1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036  179205040  98% /srv
tank/srv               7667173248 2870444032 4796729216  38% /srv-zfs

(Otherwise, of course, you use du -sh $DIRECTORY.)

Digression over bytes

Those are 1 K bytes which is actually (and rather unfortunately) Ki, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.

> 2870444032 KiB

  2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB

  2870444032 kilobytes ≈ 2.870 TB

At this scale, those details matter quite a bit, we're talking about a 69GB (64GiB) difference here:

> 2870444032 KiB - 2870444032 kB

  (2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB

Anyways. Let's take 2968252503968 bytes as our current progress.

Our entire dataset is 7258298064 KiB, as seen above.

Solving a cross-multiplication

We have 3 out of four variables for our equation here, so we can already solve:

> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h

  ((actual − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes))

  x ≈ 59.24 h

The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time.

To break this down step by step, we could calculate how long it has taken so far:

> now-start

  now − start ≈ 23 h + 53 min + 6.762 s

> now-start to s

  now − start ≈ 85987 s

... and do the cross-multiplication manually, it's basically:

x/(now-start) = (total/current)

so:

x = (total/current) * (now-start)

or, in Qalc:

> ((7258298064  kibibytes) / ( 2996538438607 bytes) ) *  85987 s

  ((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 secondes) ≈
  2 d + 11 h + 14 min + 38.81 s

It's interesting it gives us different units here! Not sure why.

Now and built-in variables

The now here is actually a built-in variable:

> now

  now ≈ "2025-02-08T22:25:25"

There is a bewildering list of such variables, for example:

> uptime

  uptime = 5 d + 6 h + 34 min + 12.11 s

> golden

  golden ≈ 1.618

> exact

  golden = (√(5) + 1) / 2

Computing dates

In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so, we have 36h left.

But I did that all in my head, we can ask more of Qalc yet!

Let's make another variable, for that total estimated time:

> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)

  save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes)); total; Temporary; ; 1) ≈
  2 d + 11 h + 14 min + 38.22 s

And we can plug that into another formula with our start time to figure out when we'll be done!

> start+total

  start + total ≈ "2025-02-10T03:28:52"

> start+total-now

  start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s

> start+total-now to h

  start + total − now ≈ 35 h + 34 min + 32.01 s

That transfer has ~1d left, or 35h34m32s, and should complete around 4 in the morning on February 10th.

But that's icing on top. I typically only do the cross-multiplication and calculate the remaining time in my head.

I mostly did the last bit to show Qalculate could compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).

Other functionality

Qalculate can:

  • Plot graphs;
  • Use RPN input;
  • Do all sorts of algebraic, calculus, matrix, statistics, trigonometry functions (and more!);
  • ... and so much more!

I have a hard time finding things it cannot do. When I get there, I typically resort to writing Python code or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica or R.

But for daily use, Qalculate is just fantastic.

And it's pink! Use it!

Gotchas

There are a couple of things that get me with Qalc, unfortunately.

Decimals precision

I mentioned set precision above:

> set precision 6
> 1/7

  1 / 7 ≈ 0.142857
> set precision 20
> pi

  pi ≈ 3.1415926535897932385

Fractional displays

But sometimes, I want fractional displays (obviously not for π because it is irrational). For example, sometimes I work in inches, and this would look better as a fraction:

> 8973/12

  8973 / 12 = 747.75

The trick here is to change the fraction setting, from the qalc(1) manual:

       fractions, fr (-1* = auto, 0 = off, 1 = exact, 2 = on, 3 = mixed, 4 =
       long, 5 = dual, 1/n)
               Determines how rational numbers are displayed (e.g. 5/4 = 1 + 1/4
               =  1.25).  'long' removes limits on the size of the numerator and
               denominator.

Normally, this should be set to auto, so if you've changed it, change it back:

set fractions auto

Then you get the nice mixed output:

> 8973/12

  8973 / 12 = 747 + 3/4 = 747.75

The dual setting is also nice:

> set fractions dual

  747.75 = 2991/4 = 747 + 3/4 = 747.75

Strangely, I couldn't figure out how to get the same output in the graphical interface. The closest menu item is Mode > Rational Number Form.

Further reading and installation

This is just scratching the surface, the fine manual has more information, including more examples. There is also of course a qalc(1) manual page which also ships an excellent EXAMPLES section.

Qalculate is packaged for over 30 Linux distributions, but also ships packages for Windows and MacOS. There are third-party derivatives as well including a web version and an Android app.

Updates

Colin Watson liked this blog post and was inspired to write his own hacks, similar to what's here, but with extras, check it out!

28 January, 2026 06:38PM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Part 2: Taming the Beast – Deep Dive into the Proxy-Aware GPU Initialization Action


In Part 1 of this series, we laid the network foundation for running
secure Dataproc clusters. Now, let’s zoom in on the core component
responsible for installing and configuring NVIDIA GPU drivers and the
associated ML stack in this restricted environment: the
install_gpu_driver.sh script from the LLC-Technologies-Collier/initialization-actions
repository (branch gpu-202601).

This isn’t just any installation script; it has been significantly
enhanced to handle the nuances of Secure Boot and to operate seamlessly
behind an HTTP/S proxy.

The Challenge: Installing GPU Drivers Without Direct Internet

Our goal was to create a Dataproc custom image with NVIDIA GPU
drivers, sign the kernel modules for Secure Boot, and ensure the entire
process works seamlessly when the build VM and the eventual cluster
nodes have no direct internet access, relying solely on an HTTP/S proxy.
This involved:

  1. Proxy-Aware Build: Ensuring all build steps within
    the custom image creation process (package downloads, driver downloads,
    GPG keys, etc.) correctly use the customer’s proxy.
  2. Secure Boot Signing: Integrating kernel module
    signing using keys managed in GCP Secret Manager, especially when
    drivers are built from source.
  3. Conda Environment: Reliably and speedily installing
    a complex Conda environment with PyTorch, TensorFlow, Rapids, and other
    GPU-accelerated libraries through the proxy.
  4. Dataproc Integration: Making sure the custom image
    works correctly with Dataproc’s own startup, agent processes, and
    cluster-specific configurations like YARN.

The Development Journey: Key Enhancements in install_gpu_driver.sh

To address these challenges, the script incorporates several key
features:

  • Robust Proxy Handling (set_proxy
    function):

    • Challenge: Initial script versions had spotty proxy
      support. Many tools like apt, curl,
      gpg, and even gsutil failed in proxy-only
      environments.
    • Enhancements: The set_proxy function
      (also used in gce-proxy-setup.sh) was completely overhauled
      to parse various proxy metadata (http-proxy,
      https-proxy, proxy-uri,
      no-proxy). Critically, environment variables
      (HTTP_PROXY, HTTPS_PROXY,
      NO_PROXY) are now set before any network
      operations. NO_PROXY is carefully set to include
      .google.com and .googleapis.com to allow
      direct access to Google APIs via Private Google Access. System-wide
      trust stores (OS, Java, Conda) are updated with the proxy’s CA
      certificate if provided via http-proxy-pem-uri.
      gcloud, apt, dnf, and
      dirmngr are also configured to use the proxy (a minimal sketch of
      this metadata-driven setup follows after this list).
  • Reliable GPG Key Fetching (import_gpg_keys
    function):

    • Challenge: Importing GPG keys for repositories
      often failed as keyservers use non-HTTP ports (e.g., 11371) blocked by
      firewalls, and gpg --recv-keys is not proxy-friendly.
    • Solution: A new import_gpg_keys
      function now fetches keys over HTTPS using curl, which
      respects the environment’s proxy settings. This replaced all direct
      gpg --recv-keys calls.
  • GCS Caching is King:
    • Challenge: Repeatedly downloading large files
      (drivers, CUDA, source code) through a proxy is slow and
      inefficient.
    • Solution: Implemented extensive GCS caching for
      NVIDIA drivers, CUDA runfiles, NVIDIA Open Kernel Module source
      tarballs, compiled kernel modules, and even packed Conda environments.
      Scripts now check a GCS bucket (dataproc-temp-bucket)
      before hitting the internet.
    • Impact: Dramatically speeds up subsequent runs and
      init action execution times on cluster nodes after the cache is
      warmed.
  • Conda Environment Stability & Speed:
    • Challenge: Large Conda environments are prone to
      solver conflicts and slow installation times.
    • Solution: Integrated Mamba for faster package
      solving. Refined package lists for better compatibility. Added logic to
      force-clean and rebuild the Conda environment cache on GCS and locally
      if inconsistencies are detected (e.g., driver installed but Conda env
      not fully set up).
  • Secure Boot & Kernel Module Signing:
    • Challenge: Custom-compiled kernel modules must be
      signed to load when Secure Boot is enabled.
    • Solution: The script integrates with GCP Secret
      Manager to fetch signing keys. The build_driver_from_github
      function now includes robust steps to compile, sign (using
      sign-file), install, and verify the signed modules.
  • Custom Image Workflow & Deferred Configuration:
    • Challenge: Cluster-specific settings (like YARN GPU
      configuration) should not be baked into the image.
    • Solution: The install_gpu_driver.sh
      script detects when it’s run during image creation
      (--metadata invocation-type=custom-images). In this mode,
      it defers cluster-specific setups to a systemd service
      (dataproc-gpu-config.service) that runs on the first boot
      of a cluster instance. This ensures that YARN and Spark configurations
      are applied in the context of the running cluster, not at image build
      time.
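
To make the proxy handling described above a little more concrete, here is a minimal sketch, not the actual set_proxy implementation, of how a script on the build VM can read the http-proxy attribute named above from the GCE metadata server and export it before any network operation runs; everything beyond that metadata key name is illustrative:

#!/bin/bash
# Minimal sketch only; the real set_proxy in install_gpu_driver.sh does much more.
get_attr() {
  curl -fs -H 'Metadata-Flavor: Google' \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/${1}" || true
}

PROXY_URI="$(get_attr http-proxy)"
if [ -n "${PROXY_URI}" ]; then
  # Keep Google APIs and the metadata server reachable via Private Google Access.
  export NO_PROXY=".google.com,.googleapis.com,metadata.google.internal"
  export HTTP_PROXY="${PROXY_URI}" HTTPS_PROXY="${PROXY_URI}"
  export http_proxy="${PROXY_URI}" https_proxy="${PROXY_URI}" no_proxy="${NO_PROXY}"
fi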

Conclusion of Part 2

The install_gpu_driver.sh initialization action is more
than just an installer; it’s a carefully crafted tool designed to handle
the complexities of secure, proxied environments. Its robust proxy
support, comprehensive GCS caching, refined Conda management, Secure
Boot signing capabilities, and awareness of the custom image build
lifecycle make it a critical enabler.

In Part 3, we’ll explore how the LLC-Technologies-Collier/custom-images
repository (branch proxy-exercise-2025-11) uses this
initialization action to build the complete, ready-to-deploy Secure Boot
GPU custom images.

28 January, 2026 10:45AM by C.J. Collier

Dataproc GPUs, Secure Boot, & Proxies

Part 1: Building a Secure Network Foundation for Dataproc with GPUs & SWP

Welcome to the first post in our series on running GPU-accelerated
Dataproc workloads in secure, enterprise-grade environments. Many
organizations need to operate within VPCs that have no direct internet
egress, instead routing all traffic through a Secure Web Proxy (SWP).
Additionally, security mandates often require the use of Shielded VMs
with Secure Boot enabled. This series will show you how to meet these
requirements for your Dataproc GPU clusters.

In this post, we’ll focus on laying the network foundation using
tools from the LLC-Technologies-Collier/cloud-dataproc
repository (branch proxy-sync-2026-01).

The Challenge: Network Isolation & Control

Before we can even think about custom images or GPU drivers, we need
a network environment that:

  1. Prevents direct internet access from Dataproc cluster nodes.
  2. Forces all egress traffic through a manageable and auditable
    SWP.
  3. Provides the necessary connectivity for Dataproc to function and for
    us to build images later.
  4. Supports Secure Boot for all VMs.

The Toolkit: LLC-Technologies-Collier/cloud-dataproc

To make setting up and tearing down these complex network
environments repeatable and consistent, we’ve developed a set of bash
scripts within the gcloud directory of the
cloud-dataproc repository. These scripts handle the
creation of VPCs, subnets, firewall rules, service accounts, and the
Secure Web Proxy itself.

Key Script: gcloud/bin/create-dpgce-private

This script is the cornerstone for creating the private, proxied
environment. It automates:

  • VPC and Subnet creation (for the cluster, SWP, and management).
  • Setup of Certificate Authority Service and Certificate Manager for
    SWP TLS interception.
  • Deployment of the SWP Gateway instance.
  • Configuration of a Gateway Security Policy to control egress.
  • Creation of necessary firewall rules.
  • Result: Cluster nodes in this VPC have NO default
    internet route and MUST use the SWP.
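
Once the script has finished, a quick sanity check is to confirm that the new VPC really has no default internet route. The network name below is just a placeholder for whatever your env.json configures:

# Expect empty output: a 0.0.0.0/0 route would mean direct egress is still possible.
# "my-private-vpc" is a placeholder network name.
gcloud compute routes list \
  --filter="network~my-private-vpc AND destRange=0.0.0.0/0"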

Configuration via env.json

We use a single env.json file to drive the
configuration. This file will also be used by the
custom-images scripts in Part 3. This env.json
should reside in your custom-images repository clone, and
you’ll symlink it into the cloud-dataproc/gcloud
directory.
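
The authoritative list of keys is defined by the scripts in these repositories. Purely as an illustration, an env.json might look roughly like the sketch below; only the BUCKET key is referenced verbatim by the gsutil command later in this post, all other key names and values are hypothetical placeholders:

# Hypothetical layout; only BUCKET is known from the commands in this post,
# the other key names and all values are placeholders.
cat > env.json <<'EOF'
{
  "PROJECT_ID": "my-gcp-project",
  "REGION": "us-central1",
  "BUCKET": "my-dataproc-deps-bucket",
  "HTTP_PROXY": "http://swp.internal.example:3128"
}
EOF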

Running the Setup:

# Assuming you have cloud-dataproc and custom-images cloned side-by-side
# And your env.json is in the custom-images root
cd cloud-dataproc/gcloud
# Symlink to the env.json in custom-images
ln -sf ../../custom-images/env.json env.json
# Run the creation script, but don't create a cluster yet
bash bin/create-dpgce-private --no-create-cluster
cd ../../custom-images

Node Configuration: The Metadata Startup Script for Runtime

For the Dataproc cluster nodes to function correctly in this proxied
environment, they need to be configured to use the SWP on boot. We
achieve this using a GCE metadata startup script.

The script startup_script/gce-proxy-setup.sh (from the
custom-images repository) is designed to be run on each
cluster node at boot. It reads metadata like http-proxy and
http-proxy-pem-uri (which our cluster creation scripts in
Part 4 will pass) to configure the OS environment, package managers, and
other tools to use the SWP.

Upload this script to your GCS bucket:

# Run from the custom-images repository root
gsutil cp startup_script/gce-proxy-setup.sh gs://$(jq -r .BUCKET env.json)/custom-image-deps/

This script is essential for the runtime behavior of the
cluster nodes.

Conclusion of Part 1

With the cloud-dataproc scripts, we’ve laid the
groundwork by provisioning a secure VPC with controlled egress through
an SWP. We’ve also prepared the essential node-level proxy configuration
script (gce-proxy-setup.sh) in GCS, ready to be used by our
clusters.

Stay tuned for Part 2, where we’ll dive into the
install_gpu_driver.sh initialization action from the
LLC-Technologies-Collier/initialization-actions repository
(branch gpu-202601) and how it’s been adapted to install
all GPU-related software through the proxy during the image build
process.

28 January, 2026 10:37AM by C.J. Collier

Sven Hoexter

Decrypt TLS Connection with wireshark and curl

With TLS 1.3 more parts of the handshake got encrypted (e.g. the certificate), but sometimes it's still helpful to look at the complete handshake.

curl uses the somewhat standardized env variable for the key log file, SSLKEYLOGFILE, which is also supported by Firefox and Chrome. wireshark hides the setting in the UI behind Edit -> Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename, which is uncomfortable to reach. Looking up the config setting in the Advanced settings, one can learn that it's internally called tls.keylog_file. Thus we can set it up with:

sudo wireshark -o "tls.keylog_file:/home/sven/curl.keylog"

SSLKEYLOGFILE=/home/sven/curl.keylog curl -v https://www.cloudflare.com/cdn-cgi/trace

Depending on the setup, root might be unable to access the Wayland session; that can be worked around by letting sudo keep the relevant env variables:

$ cat /etc/sudoers.d/wayland 
Defaults   env_keep += "XDG_RUNTIME_DIR"
Defaults   env_keep += "WAYLAND_DISPLAY"

Or set up wireshark properly and use the wireshark group to be able to dump traffic as a regular user. This might require a sudo dpkg-reconfigure wireshark-common.
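
For that route, the usual steps on Debian are roughly the following (assuming the stock wireshark-common packaging; you need to log out and back in for the group change to take effect):

# Answer "Yes" to allow members of the wireshark group to capture packets.
sudo dpkg-reconfigure wireshark-common
# Add your user to the group, then re-login and run wireshark without sudo.
sudo usermod -aG wireshark "$USER"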

Regarding curl: In some situations it can be desirable to force a specific older TLS version for testing, which requires setting both a minimum and a maximum version. E.g. to force TLS 1.2 only:

curl -v --tlsv1.2 --tls-max 1.2 https://www.cloudflare.com/cdn-cgi/trace

28 January, 2026 10:32AM

January 27, 2026

Elana Hashman

A beginner's guide to improving your digital security

In 2017, I led a series of workshops aimed at teaching beginners a better understanding of encryption, how the internet works, and their digital security. Nearly a decade later, there is still a great need to share reliable resources and guides on improving these skills.

I have worked professionally in computer security one way or another for well over a decade, at many major technology companies and in many open source software projects. There are many inaccurate and unreliable resources out there on this subject, put together by well-meaning people without a background in security, which can lead to sharing misinformation, exaggeration and fearmongering.

I hope that I can offer you a trusted, curated list of high impact things that you can do right now, using whichever vetted guide you prefer. In addition, I also include how long it should take, why you should do each task, and any limitations.

This guide is aimed at improving your personal security, and does not apply to your work-owned devices. Always assume your company can monitor all of your messages and activities on work devices.

What can I do to improve my security right away?

I put together this list in order of effort, easiest tasks first. You should be able to complete many of the low effort tasks in a single hour. The medium to high effort tasks are very much worth doing, but may take you a few days or even weeks to complete them.

Low effort (<15 minutes)

Upgrade your software to the latest versions

Why? I don't know anyone who hasn't complained about software updates breaking features, introducing bugs, and causing headaches. If it ain't broke, why upgrade, right? Well, alongside all of those annoying bugs and breaking changes, software updates also include security fixes, which will protect your device from being exploited by bad actors. Security issues can be found in software at any time, even software that's been available for many years and thought to be secure. You want to install these as soon as they are available.

Recommendation: Turn on automatic upgrades and always keep your devices as up-to-date as possible. If you have some software you know will not work if you upgrade it, at least be sure to upgrade your laptop and phone operating system (iOS, Android, Windows, etc.) and web browser (Chrome, Safari, Firefox, etc.). Do not use devices that do not receive security support (e.g. old Android or iPhones).

Guides:

Limitations: This will prevent someone from exploiting known security issues on your devices, but it won't help if your device was already compromised. If this is a concern, doing a factory reset, upgrade, and turning on automatic upgrades may help. This also won't protect against all types of attacks, but it is a necessary foundation.

Use Signal

Why? Signal is a trusted, vetted, secure messaging application that allows you to send end-to-end encrypted messages and make video/phone calls. This means that only you and your intended recipient can decrypt the messages and someone cannot intercept and read your messages, in contrast to texting (SMS) and other insecure forms of messaging. Other applications advertise themselves as end-to-end encrypted, but Signal provides the strongest protections.

Recommendation: I recommend installing the Signal app and using it! My mom loves that she can video call me on Wi-Fi on my Android phone. It also supports group chats. I use it as a secure alternative to texting (SMS) and other chat platforms. I also like Signal's "disappearing messages" feature which I enable by default because it automatically deletes messages after a certain period of time. This avoids your messages taking up too much storage.

Guides:

Limitations: Signal is only able to protect your messages in transit. If someone has access to your phone or the phone of the person you sent messages to, they will still be able to read them. As a rule of thumb, if you don't want someone to read something, don't write it down! Meet in person or make an encrypted phone call where you will not be overheard. If you are talking to someone you don't know, assume your messages are as public as posting on social media.

Set passwords and turn on device encryption

Why? Passwords ensure that someone else can't unlock your device without your consent or knowledge. They also are required to turn on device encryption, which protects your information on your device from being accessed when it is locked. Biometric (fingerprint or face ID) locking provides some privacy, but your fingerprint or face ID can be used against your wishes, whereas if you are the only person who knows your password, only you can use it.

Recommendation: Always set passwords and have device encryption enabled in order to protect your personal privacy. It may be convenient to allow kids or family members access to an unlocked device, but anyone else can access it, too! Use strong passwords that cannot be guessed—avoid using names, birthdays, phone numbers, addresses, or other public information. Using a password manager will make creating and managing passwords even easier. Disable biometric unlock, or at least know how to disable it. Most devices will enable disk encryption by default, but you should double-check.

Guides:

Limitations: If your device is unlocked, the password and encryption will provide no protections; the device must be locked for this to protect your privacy. It is possible, though unlikely, for someone to gain remote access to your device (for example through malware or stalkerware), which would bypass these protections. Some forensic tools are also sophisticated enough to work with physical access to a device that is turned on and locked, but not a device that is turned off/freshly powered on and encrypted. If you lose your password or disk encryption key, you may lose access to your device. For this reason, Windows and Apple laptops can make a cloud backup of your disk encryption key. However, a cloud backup can potentially be disclosed to law enforcement.

Install an ad blocker

Why? Online ad networks are often exploited to spread malware to unsuspecting visitors. If you've ever visited a regular website and suddenly seen an urgent, flashing pop-up claiming your device was hacked, it is often due to a bad ad. Blocking ads provides an additional layer of protection against these kinds of attacks.

Recommendation: I recommend everyone uses an ad blocker at all times. Not only are ads annoying and disruptive, but they can even result in your devices being compromised!

Guides:

Limitations: Sometimes the use of ad blockers can break functionality on websites, which can be annoying, but you can temporarily disable them to fix the problem. These may not be able to block all ads or all tracking, but they make browsing the web much more pleasant and lower risk! Some people might also be concerned that blocking ads might impact the revenue of their favourite websites or creators. In this case, I recommend either donating directly or sharing the site with a wider audience, but keep using the ad blocker for your safety.

Enable HTTPS-Only Mode

Why? The "S" in "HTTPS" stands for "secure". This feature, which can be enabled on your web browser, ensures that every time you visit a website, your connection is always end-to-end encrypted (just like when you use Signal!) This ensures that someone can't intercept what you search for, what pages on websites you visit, and any information you or the website share such as your banking details.

Recommendation: I recommend enabling this for everyone, though with improvements in web browser security and adoption of HTTPS over the years, your devices will often do this by default! There is a small risk you will encounter some websites that do not support HTTPS, usually older sites.

Guides:

Limitations: HTTPS protects the information on your connection to a website. It does not hide or protect the fact that you visited that website, only the information you accessed. If the website is malicious, HTTPS does not provide any protection. In certain settings, like when you use a work-managed computer that was set up for you, it can still be possible for your IT Department to see what you are browsing, even over an HTTPS connection, because they have administrator access to your computer and the network.

Medium to high effort (1+ hours)

These tasks require more effort but are worth the investment.

Set up a password manager

Why? It is not possible for a person to remember a unique password for every single website and app that they use. I have, as of writing, 556 passwords stored in my password manager. Password managers do three important things very well:

  1. They generate secure passwords with ease. You don't need to worry about getting your digits and special characters just right; the app will do it for you, and generate long, secure passwords.
  2. They remember all your passwords for you, and you just need to remember one password to access all of them. The most common reason people's accounts get hacked online is because they used the same password across multiple websites, and one of the websites had all their passwords leaked. When you use a unique password on every website, it doesn't matter if your password gets leaked!
  3. They autofill passwords based on the website you're visiting. This is important because it helps prevent you from getting phished. If you're tricked into visiting an evil lookalike site, your password manager will refuse to fill the password.

Recommendation: These benefits are extremely important, and setting up a password manager is often one of the most impactful things you can do for your digital security. However, they take time to get used to, and migrating all of your passwords into the app (and immediately changing them!) can take a few minutes at a time... over weeks. I recommend you prioritize the most important sites, such as your email accounts, banking/financial sites, and cellphone provider. This process will feel like a lot of work, but you will get to enjoy the benefits of never having to remember new passwords and the autofill functionality for websites. My recommended password manager is 1Password, but it stores passwords in the cloud and costs money. There are some good free options as well if cost is a concern. You can also use web browser- or OS-based password managers, but I do not prefer these.

Guides:

Limitations: Many people are concerned about the risk of using a password manager causing all of their passwords to be compromised. For this reason, it's very important to use a vetted, reputable password manager that has passed audits, such as 1Password or Bitwarden. It is also extremely important to choose a strong password to unlock your password manager. 1Password makes this easier by generating a secret to strengthen your unlock password, but I recommend using a long, memorable password in any case. Another risk is that if you forget your password manager's password, you will lose access to all your passwords. This is why I recommend 1Password, which has you set up an Emergency Kit to recover access to your account.

Set up two-factor authentication (2FA) for your accounts

Why? If your password is compromised in a website leak or due to a phishing attack, two-factor authentication will require a second piece of information to log in and potentially thwart the intruder. This provides you with an extra layer of security on your accounts.

Recommendation: You don't necessarily need to enable 2FA on every account, but prioritize enabling it on your most important accounts (email, banking, cellphone, etc.) There are typically a few different kinds: email-based (which is why your email account's security is so important), text message or SMS-based (which is why your cell phone account's security is so important), app-based, and hardware token-based. Email and text message 2FA are fine for most accounts. You may want to enable app- or hardware token-based 2FA for your most sensitive accounts.

Guides:

Limitations: The major limitation is that if you lose access to 2FA, you can be locked out of an account. This can happen if you're travelling abroad and can't access your usual cellphone number, if you break your phone and you don't have a backup of your authenticator app, or if you lose your hardware-based token. For this reason, many websites will provide you with "backup tokens"—you can print them out and store them in a secure location or use your password manager. I also recommend if you use an app, you choose one that will allow you to make secure backups, such as Ente. You are also limited by the types of 2FA a website supports; many don't support app- or hardware token-based 2FA.

Remove your information from data brokers

Why? This is a problem that mostly affects people in the US. It surprises many people that information from their credit reports and other public records is scraped and available (for free or at a low cost) online through "data broker" websites. I have shocked friends who didn't believe this was an issue by searching for their full names and within 5 minutes being able to show them their birthday, home address, and phone number. This is a serious privacy problem!

Recommendation: Opt out of any and all data broker websites to remove this information from the internet. This is especially important if you are at risk of being stalked or harassed.

Guides:

Limitations: It can take time for your information to be removed once you opt out, and unfortunately search engines may have cached your information for a while longer. This is also not a one-and-done process. New data brokers are constantly popping up and some may not properly honour your opt out, so you will need to check on a regular basis (perhaps once or twice a year) to make sure your data has been properly scrubbed. This also cannot prevent someone from directly searching public records to find your information, but that requires much more effort.

"Recommended security measures" I think beginners should avoid

We've covered a lot of tasks you should do, but I also think it's important to cover what not to do. I see many of these tools recommended to security beginners, and I think that's a mistake. For each tool, I will explain my reasoning around why I don't think you should use it, and the scenarios in which it might make sense to use.

"Secure email"

What is it? Many email providers, such as Proton Mail, advertise themselves as providing secure email. They are often recommended as a "more secure" alternative to typical email providers such as GMail.

What's the problem? Email is fundamentally insecure by design. The email specification (RFC 3207) states that any publicly available email server MUST NOT require the use of transport encryption (STARTTLS) to deliver mail. Email providers can of course provide additional security by encrypting their copies of your email, and providing you access to your email by HTTPS, but the messages themselves can always be sent without encryption. Some platforms such as Proton Mail advertise end-to-end encrypted emails so long as you email another Proton user. This is not truly email, but their own internal encrypted messaging platform that follows the email format.

What should I do instead? Use Signal to send encrypted messages. NEVER assume the contents of an email are secure.

Who should use it? I don't believe there are any major advantages to using a service such as this one. Even if you pay for a more "secure" email provider, the majority of your emails will still be delivered to people who don't. Additionally, while I don't use or necessarily recommend their service, Google offers an Advanced Protection Program for people who may be targeted by state-level actors.

PGP/GPG Encryption

What is it? PGP ("Pretty Good Privacy") and GPG ("GNU Privacy Guard") are encryption and cryptographic signing software. They are often recommended to encrypt messages or email.

What's the problem? GPG is decades old and its usability has always been terrible. It is extremely easy to accidentally send a message that you thought was encrypted without encryption! The problems with PGP/GPG have been extensively documented.

What should I do instead? Use Signal to send encrypted messages. Again, NEVER use email for sensitive information.

Who should use it? Software developers who contribute to projects where there is a requirement to use GPG should continue to use it until an adequate alternative is available. Everyone else should live their lives in PGP-free bliss.

Installing a "secure" operating system (OS) on your phone

What is it? There are a number of self-installed operating systems for Android phones, such as GrapheneOS, that advertise as being "more secure" than using the version of the Android operating system provided by your phone manufacturer. They often remove core Google APIs and services to allow you to "de-Google" your phone.

What's the problem? These projects are relatively niche, and don't have nearly enough resourcing to be able to respond to the high levels of security pressure Android experiences (such as against the forensic tools I mentioned earlier). You may suddenly lose security support with no notice, as with CalyxOS. You need a high level of technical know-how and a lot of spare time to maintain your device with a custom operating system, which is not a reasonable expectation for the average person. By stripping all Google APIs such as Google Play Services, some useful apps can no longer function. And some law enforcement organizations have gone as far as accusing people who install GrapheneOS on Pixel phones of engaging in criminal activity.

What should I do instead? For the best security on an Android device, use a phone manufactured by Google or Samsung (smaller manufacturers are more unreliable), or consider buying an iPhone. Make sure your device is receiving security updates and up-to-date.

Who should use it? These projects are great for tech enthusiasts who are interested in contributing to and developing them further. They can be used to give new life to old phones that are not receiving security or software updates. They are also great for people with an interest in free and open source software and digital autonomy. But these tools are not a good choice for a general audience, nor do they provide more practical security than using an up-to-date Google or Samsung Android phone.

Virtual Private Network (VPN) Services

What is it? A virtual private network or VPN service can provide you with a secure tunnel from your device to the location that the VPN operates. This means that if I am using my phone in Seattle connected to a VPN in Amsterdam, if I access a website, it appears to the website that my phone is located in Amsterdam.

What's the problem? VPN services are frequently advertised as providing security or protection from nefarious bad actors, or helping protect your privacy. These benefits are often far overstated, and there are predatory VPN providers that can actually be harmful. It costs money and resources to provide a VPN, so free VPN services are especially suspect. When you use a VPN, the VPN provider knows the websites you are visiting in order to provide you with the service. Free VPN providers may sell this data in order to cover the cost of providing the service, leaving you with less security and privacy. The average person does not have the knowledge to be able to determine if a VPN service is trustworthy or not. VPNs also don't provide any additional encryption benefits if you are already using HTTPS. They may provide a small amount of privacy benefit if you are connected to an untrusted network with an attacker.

What should I do instead? Always use HTTPS to access websites. Don't connect to untrusted internet providers—for example, use cellphone network data instead of a sketchy Wi-Fi access point. Your local neighbourhood coffee shop is probably fine.

Who should use it? There are three main use cases for VPNs. The first is to bypass geographic restrictions. A VPN will cause all of your web traffic to appear to be coming from another location. If you live in an area that has local internet censorship policies, you can use a VPN to access the internet from a location that lacks such policies. The second is if you know your internet service provider is actively hostile or malicious. A trusted VPN will protect the visibility of all your traffic, including which websites you visit, from your internet service provider, and the only thing they will be able to see is that you are accessing a VPN. The third use case is to access a network that isn't connected to the public internet, such as a corporate intranet. I strongly discourage the use of VPNs for "general-purpose security."

Tor

What is it? Tor, "The Onion Router", is a free and open source software project that provides anonymous networking. Unlike with a VPN, where the VPN provider knows who you are and what websites you are requesting, Tor's architecture makes it extremely difficult to determine who sent a request.

What's the problem? Tor is difficult to set up properly; similar to PGP-encrypted email, it is possible to accidentally not be connected to Tor and not know the difference. This usability has improved over the years, but Tor is still not a good tool for beginners to use. Due to the way Tor works, it is also extremely slow. If you have used cable or fiber internet, get ready to go back to dialup speeds. Tor also doesn't provide perfect privacy and without a strong understanding of its limitations, it can be possible to deanonymize someone despite using it. Additionally, many websites are able to detect connections from the Tor network and block them.

What should I do instead? If you want to use Tor to bypass censorship, it is often better to use a trusted VPN provider, particularly if you need high bandwidth (e.g. for streaming). If you want to use Tor to access a website anonymously, Tor itself might not be enough to protect you. For example, if you need to provide an email address or personal information, you can decline to provide accurate information and use a masked email address. A friend of mine once used the alias "Nunya Biznes" 🥸

Who should use it? Tor should only be used by people who are experienced users of security tools and understand its strengths and limitations. Tor also is best used on a purpose-built system, such as Tor Browser or Freedom of the Press Foundation's SecureDrop.

I want to learn more!

I hope you've found this guide to be a useful starting point. I always welcome folks reaching out to me with questions, though I might take a little bit of time to respond. You can always email me.

If there's enough interest, I might cover the following topics in a future post:

  • Threat modelling, which you can get started with by reading the EFF's or VCW's guides
  • Browser addons for privacy, which Consumer Reports has a tip for
  • Secure DNS, which you can read more about here

Stay safe out there! 🔒

27 January, 2026 02:00AM by Elana Hashman

January 24, 2026

hackergotchi for Gunnar Wolf

Gunnar Wolf

Finally some light for those who care about Debian on the Raspberry Pi

Finally, some light at the end of the tunnel!

As I have said in this blog and elsewhere, after putting quite a bit of work into generating the Debian Raspberry Pi images between late 2018 and 2023, I had to recognize I don’t have the time and energy to properly care for it.

I even registered a GSoC project for it. I mentored Kurva Prashanth, who did good work on the vmdb2 scripts we use for the image generation — but in the end, was unable to push them to be built in Debian infrastructure. Maybe a different approach was needed! While I adopted the images as they were conceived by Michael Stapelberg, sometimes it’s easier to start from scratch and build a fresh approach.

So, I’m not yet pointing at a stable, proven release, but to a good promise. And I hope I’m not being pushy by making this public: in the #debian-raspberrypi channel, waldi has shared the images he has created with the Debian Cloud Team’s infrastructure.

So, right now, the images built so far support Raspberry Pi families 4 and 5 (notably, not the 500 computer I have, due to a missing Device Tree, but I’ll try to help figure that bit out… Anyway, p400/500/500+ systems are not that usual). Work is underway to get the 3B+ to boot (some hackery is needed, as it only understands MBR partition schemes, so creating a hybrid image seems to be needed).

Debian Cloud images for Raspberries

Sadly, I don’t think the effort will be extended to cover older, 32-bit-only systems (RPi 0, 1 and 2).

Anyway, as this effort stabilizes, I will phase out my (stale!) work on raspi.debian.net, and will redirect it to point at the new images.

Comments

Andrea Pappacoda tachi@d.o 2026-01-26 17:39:14 GMT+1

Are there any particular caveats compared to using the regular Raspberry Pi OS?

Are they documented anywhere?

Gunnar Wolf gwolf.blog@gwolf.org 2026-01-26 11:02:29 GMT-6

Well, the Raspberry Pi OS includes quite a bit of software that’s not packaged in Debian for various reasons — some of it because it’s non-free demo-ware, some of it because it’s RPiOS-specific configuration, some of it… I don’t care, I like running Debian wherever possible 😉

Andrea Pappacoda tachi@d.o 2026-01-26 18:20:24 GMT+1

Thanks for the reply! Yeah, sorry, I should’ve been more specific. I also just care about the Debian part. But: are there any hardware issues or unsupported stuff, like booting from an SSD (which I’m currently doing)?

Gunnar Wolf gwolf.blog@gwolf.org 2026-01-26 12:16:29 GMT-6

That’s… beyond my knowledge 😉 Although I can tell you that:

  • Raspberry Pi OS has hardware support as soon as their new boards hit the market. The ability to even boot a board can take over a year for the mainline Linux kernel (at least, it has, both in the cases of the 4 and the 5 families).

  • Also, sometimes some bits of hardware are not discovered by the Linux kernels even if the general family boots because they are not declared in the right place of the Device Tree (i.e. the wireless network interface in the 02W is in a different address than in the 3B+, or the 500 does not fully boot while the 5B now does). Usually it is a matter of “just” declaring stuff in the right place, but it’s not a skill many of us have.

  • Also, many RPi “hats” ship with their own Device Tree overlays, and they cannot always be loaded on top of mainline kernels.

Andrea Pappacoda tachi@d.o 2026-01-26 19:31:55 GMT+1

That’s… beyond my knowledge 😉 Although I can tell you that:

Raspberry Pi OS has hardware support as soon as their new boards hit the market. The ability to even boot a board can take over a year for the mainline Linux kernel (at least, it has, both in the cases of the 4 and the 5 families).

Yeah, unfortunately I’m aware of that… I’ve also been trying to boot OpenBSD on my rpi5 out of curiosity, but been blocked by my somewhat unusual setup involving an NVMe SSD as the boot drive :/

Also, sometimes some bits of hardware are not discovered by the Linux kernels even if the general family boots because they are not declared in the right place of the Device Tree (i.e. the wireless network interface in the 02W is in a different address than in the 3B+, or the 500 does not fully boot while the 5B now does). Usually it is a matter of “just” declaring stuff in the right place, but it’s not a skill many of us have.

At some point in my life I had started reading a bit about device trees and stuff, but got distracted by other stuff before I could develop any familiarity with it. So I don’t have the skills either :)

Also, many RPi “hats” ship with their own Device Tree overlays, and they cannot always be loaded on top of mainline kernels.

I’m definitely not happy to hear this!

Guess I’ll have to try, and maybe report back once some page for these new builds materializes.

24 January, 2026 04:24PM

January 23, 2026

Nazi.Compare

Judgment: French army vanquishes German FSFE on Hitler's birthday, Microsoft contract dispute (1716711)

(1716711, 1804171) Free Software Foundation Europe e.V. vs. Le ministère des armées, Tribunal Administratif de Melun.

Hitler's birthday, 20 April, was a special occasion every year under the fascist dictatorship, on par with the King's Birthday in the Commonwealth. Incidentally, the British monarchy are really Germans.

We previously considered the fact that Debianists elected a German leader on Hitler's birthday and then they did exactly the same thing again the following year.

In 2017, the German FSFE misfits began a legal case against the French military. FSFE misfits disputed the French military's decision to sign a contract with Microsoft.

The case was eventually resolved in 2021 with a judgment against the Germans / FSFE misfits. For those who failed to notice at the time, the tribunal handed down the judgment on Hitler's birthday:

FSFE, judgment, France, military, microsoft

 

FSFE, judgment, France, military, microsoft

 

While the full name of the FSFE is Free Software Foundation Europe, internal statistics and mailing list traffic levels demonstrate that the majority of these people are Germans and they do not represent Europe at large, they only represent themselves.

Throughout the legal procedure and trial, an employee of the French government, Dr Amandine Jambert from CNIL was part of the internal mailing lists used for discussions at the FSFE. Was this a conflict of interest when the FSFE was taking legal action against her own employer?

In the email below, Jambert does not state her full name, she only uses the sockpuppet identity "Cryptie" and she does not have any email signature disclosing her position within an agency of the French state.

The FSFE misfits claim to be a peak body for Free Software activism in Europe. They claim to comply with Transparency International guidelines. As the saying goes, do as we say, not as we do.

In 2023, when Jambert resigned from the FSFE misfits, she finally admitted she has a conflict of interest, implicitly admitting that she lied about harassment to avoid facing a basic ethical question.

Subject: 	Re: Fwd: [April - Atelier] Open Bar : Une action en justice dans les tuyaux et DSI des Armées pro-microsoft
Date: 	Wed, 30 Aug 2017 13:15:58 +0200
From: 	Cryptie <cryptie@fsfe.org>
To: 	team@lists.fsfe.org, Hugo Roy <hugo@hugoroy.eu>, france@lists.fsfe.org



Hi all,

I think it would be a good idea to tell them what we did/ plan to do as, in France, they are those that could help us if we need.

Best,
Cryptie

Le 30 août 2017 13:12:00 GMT+02:00, Hugo Roy <hugo@hugoroy.eu> a écrit :



    Hi all,

    Important info: the company (Nexedi) will be represented by a lawyer
    that I know well, because he sucks.

    I think this makes it paramount that FSFE also sues, to ensure that
    things are done right and limit any damage that this guy may do.

    Matze: maybe you remember, I talked about this guy and how he miserably
    failed a lawsuit (on a similar topic against Microsoft) last year.


    On 2017-08-30 12:43, Hugo Roy wrote:

        Hi there,

        You will find attached an April internal mailing list relating
        to the
        Microsoft - French Ministry of Defense contract, about which
        FSFE sent
        a request for information (in the potentiality of a lawsuit).

        A company (Nexedi) is, apparently, going to sue over this too.

        Thinking about replying to April that FSFE is also thinking about a
        lawsuit and to keep April updated (and that we are happy to work
        with
        them, as usual). Thoughts?

        Best,
        Hugo

    ------------------------------------------------------------------------

    Team mailing list
    Team@lists.fsfe.org
    https://lists.fsfe.org/mailman/listinfo/team


-- 
Envoyé de mon appareil Android avec K-9 Mail. Veuillez excuser ma brièveté.

Remember, in May 2018, the FSFE staff had an Extraordinary General Meeting without the developers present and they voted to remove the elections from the constitution of the association. They have not had a full election since 2017.

In July 2018, a few weeks after removing elections from the constitution, the German FSFE misfits went to RMLL in Strasbourg and they recorded a video of Jambert talking about democracy in French.

Amandine Jambert, cryptie, FSFE, balloons, RMLL, EDPB, CNIL, Strasbourg

 

Amandine Jambert, cryptie, FSFE, balloons, RMLL, EDPB, CNIL, Strasbourg

 

Read more about Nazi comparisons.

23 January, 2026 06:00PM

Swiss JuristGate

Judge Richard Oulevey (Grandcour Choeur, Tribunal Vaud) & Debian shaming abuse victims and witnesses

In 2021, a court in Poland convicted a paedophile and sent him to prison. The paedophile was described as a regional activist for the Civic Platform (PO) political party. When convicting the paedophile, the court made orders that his identity and conviction could not be published as it would compromise the privacy of the victims. The victims were the children of a politician from the same party as the offender.

Radio Szczecin is a public broadcaster under control of the right-wing party ruling in Poland. In December 2022, Radio Szczecin made a broadcast alleging that a cover-up had taken place. In the broadcast, they informed the audience that the victims were "the children of a well-known lawmaker", the very fact the judge had sought to protect.

After that, it wasn't long before people were able to identify the victims.

A few weeks later, one of the victims committed suicide.

In 2017, when I discovered evidence that diversity funding from open source software organizations had enabled wrongdoing in Albania, I told some of the women that I was watching these matters carefully. To show some empathy, I mentioned to the women the fact one of my cousins had received the choir scholarship at St Kevin's College, Toorak at the same time the late Cardinal Pell was the Archbishop of Melbourne. The choir scholarships are worth approximately three hundred thousand Australian dollars. Like the Outreachy internships, the scholarship has significant financial value and being a part of the St Kevin's community opens doors for people.

This is the point where the legal and ethical competance diverged significantly.

Cardinal Pell was convicted on 11 December 2018 and a few days later, on Christmas Eve, rogue Debianists used the debian-private whisper network to spread rumours about abuse. In fact, I've never even specified whether one of my cousins is the same person who made the complaint. There were hundreds of boys in the choir over the years.

By creating hysteria, they were following in the footsteps of Radio Szczecin. In the case of Debianism, it is even worse. One of the people spreading rumours about abuse, Joerg Jaspert, was elected as a parent representative for Dalbergschule school council in Fulda, Germany. If he is unable to handle private information about children then he is totally unsuitable for that position in a school.

The hysteria has identified a number of possible abuse victims in open source software communities and as long as they persist, they are forcing people to point to possible evidence that may identify people. They already forced public attention on the sixteen-year-old victims from Albania. Instead of learning from that mistake, they demanded more public shaming. Here is one of their legal documents where they demand a public humiliation of my family:

Walder Wyss, abuse, privacy, Switzerland

 

Yet look at the irony. The judge assigned to manage the case, Richard Oulevey, is also the president of a choir. The Grandcour Men's Choir, based in the Canton of Vaud, celebrated their 125th anniversary recently.

Richard Oulevey, Grandcour Men's Choir

 

Richard Oulevey, Grandcour Men's Choir

 

After the Pell prosecution, media reports went to great lengths to conceal the identities of choir members. Photos were published with all the faces obfuscated:

Richard Oulevey, Cardinal George Pell, Grandcour Men's Choir, St Patrick's, St Kevin's, choir

 

In Poland, the judge realized that the prosecution could lead to the identification of abuse victims and he ordered total secrecy about the identity of the offender. Exactly the same rules are used to impose secrecy in cases of incest and cases involving small villages, whether they occur in Poland, in Australia or in Switzerland. A recent example was the prosecution of a bakery owner in South Australia. During the 1980s, the baker/father recruited underage girls to live and work in a cult-like arrangement leading to pregnancies. Three babies were born within a period of seven months.

The case created enormous media interest but none of the news reports are able to publish the name of the village, the name of the business or the names of the people convicted. In other words, they are hiding the name of the paedophile to protect the identities of his thirteen children.

We have seen echoes of this behaviour in the open source software communities. The cases in Albania were not the only ones.

Look at how Debianists and the Outreachy program offer financial incentives to young transgender people, even in a voluntary environment where nobody else is allowed to receive payment.

Its time to hold a mirror up to the rogue Debianists themselves. Look at the case of Pauline Pommeret, who came from an elite high school to be groomed/indoctrinated in the Debian love nest at ENS Cachan / CRANS.

Pommeret eventually came and asked for Outreachy money, which is approximately double the amount of money offered to an ordinary male intern of the same age and experience level.

If judges in other jurisdictions have gone to great lengths to protect the privacy of people in proximity to abuse cases, why do the jurists and judges in Switzerland & Debian want to publicly identify and humiliate people?

Another one of the protagonists in the case is Axel Beckert from the university ETH Zurich university. How can these people be competant to handle private complaints from students when their first reaction is to take out their mobile phone and make a public statement about everything?

Debianists spent over $120,000 on legal fees to censor blogs like this but getting a gay judge with a transgender sister would be priceless. Or just more conflicts of interest.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Please see the rest of the JuristGate reports.

23 January, 2026 06:00PM

Did judge with transgender sister & Debian conflict of interest help cover-up a death?

Most of the correspondance concerning the Debianism vendetta in the canton of Vaud was signed by Judge Richard Oulevey of the PLR Liberal Radical Party.

Did he have a conflict of interest?

One of the key issues in the Debianism dispute was the attempt to trick me to mentor Nicolas Dandrimont's partner. It appears that the partner of Dandrimont may be a transgender and the sister of Judge Oulevey is also a transgender, Zoé Frédérique Oulevey. By coincidence, Zoé is head of R&I Intellectual Property at Richemont.

The presence of multiple transgenders is significant. These people don't simply dress up as women. They want to know what it feels like to live like a woman and for some of them, they crave manipulating the protective instincts in other men. Just as they go to great effort with cosmetic surgery, they also spend a lot of effort practicing the victim routine. There are whole associations and web sites dedicated to the victimhood. They focus on how to be believed rather than how to be honest. It is hard and sometimes impossible to tell when these people are reporting a real case of harassment and when they are just role-playing victimhood.

One of the cyberbullies who started the legal harassment, which is called "abuse of process" if you'll excuse the pun, is Axel Beckert, who is the gay partner of a former colleague. It appears that Judge Richard Oulevey does not have a spouse. For those who know the extent to which LGBT vendettas have hurt healthy people around Debian, it is a disturbing thought but also quite rude to speculate without any other evidence that Debian somehow spent all that money to get a gay judge. The judge has not revealed anything about his love life or the extent to which all these people are mingling bodily fluids on the Swiss LGBT scene.

Axel Beckert, XTaran, Zoé Frédérique Oulevey, Pauline Pommeret, Maria Climent Pommeret, Alice Climent Pommeret, Alice Climent, Richard Oulevey, Martin Ebnoether, venty

 

In Australia, Melbourne newspaper The Age published a detailed report about groups like this who work together to humiliate their victims.

How a Melbourne seminary became the breeding ground for paedophile rings

Corpus Christi was where sexually repressed men could “act out” with each other, living double lives, then transfer their attentions to the most innocent in their flocks.

...

Fells alleges that he was the victim of a group of priests who formed part of a network of paedophiles coalescing around Corpus Christi in the mid-1970s. His first abuser was St Peter’s Clayton parish priest Ronald Pickering, who Fells claims molested him after a Sunday mass in the parish presbytery. A known paedophile, Pickering fled to England in 1993 after the Melbourne Archdiocese informed him about a victim’s complaint.

...

“Pickering got me first, then Vears, and then Ryan,” alleges Fells, who tried to take his own life after the seminary incident and still suffers from post-traumatic stress.

“They got me all right.”

Amnesty International published a report about the Swiss police using violence to "arrest" and sexually abuse Trevor Kitchen for his reports about financial corruption.

Trevor Kitchen, a 41-year-old British citizen resident in Switzerland, was arrested by police in Chiasso (canton of Ticino) on the morning of 25 December 1992 in connection with offences of defamation and insults against private individuals. In a letter addressed to the Head of the Federal Department of Justice and Police in Berne and to the Tribunal in Bellinzona (Ticino) on 3 June 1993 he alleged that two police officers arrested him in a bar in Chiasso and, after handcuffing him, accompanied him to their car in the street outside. They then bent him over the car and hit him around the head approximately seven times and carried out a body search during which his testicles were squeezed. He claimed he was then punched hard between the shoulder blades several times. He said he offered no resistance during the arrest.

He was then taken to a police station in Chiasso where he was questioned in Italian (a language he does not understand) and stated that during the questioning "The same policeman that arrested me came into the office to shout at me and hit me once again around the head. Another policeman forced me to remove all of my clothes. I was afraid that they would use physical force again; they continued to shout at me. The one policeman was pulling at my clothes and took my trouser belt off and removed my shoe laces. Now I stood in the middle of an office completely naked (for 10 minutes) with the door wide open and three policemen staring at me, one of the policemen put on a pair of rubber surgical gloves and instructed me to crouch into a position so that he could insert his fingers into my anus, I refused and they all became angry and started shouting and demonstrating to me the position which they wanted me to take, laughing, all were laughing, these police were having a good time. They pointed at my penis, making jokes, hurling abuse and insults at me, whilst I stood completely still and naked. Finally, when they finished laughing, one of the policemen threw my clothes onto the floor in front of me. I got dressed."

He was transferred to prison some hours later and in his letter claimed that during the night he started to experience severe pains in his chest, back and arms. He asked a prison guard if he could see a doctor but the request was refused and he claimed the guard kicked him. He was released on 30 December 1993. Medical reports indicated that since his release he had been experiencing recurrent pain in the area of his chest and right shoulder and had been receiving physiotherapy for an injury to the upper thoracic spine and his right shoulder girdle.

Debianist financial disclosures show they spent over $120,000 on legal fees. One overworked and unpaid volunteer, Abraham Raji, died at DebConf23 when they asked him to contribute his own money to the day trip.

Debian legal fees

 

Yet their Judge Oulevey, who got some money, admits trying to create an invalid judgment and then having to annul it:

Debian trademark, Richard Oulevey

 

Dare we say a group of gays, trannies and greens fathered a non-binary judgment? Or they simply fucked up?

Axel Beckert, XTaran, Zoé Frédérique Oulevey, Pauline Pommeret, Maria Climent Pommeret, Alice Climent Pommeret, Alice Climent, Richard Oulevey, Martin Ebnoether, venty

 

Fundamentally, while the LGBT community celebrates Pride, the Debianists celebrate shaming people. Look at their demand to publicly denounce my family after my father died:

Debian, denouncing people

 

Adolf Hitler did exactly the same thing. He insisted that every Jew had to wear a star on their chest, like a miniature judgment:

Israel, Jews, stars

 

One of the groundbreaking reports on this web site was the comparison of Debian and Swiss cover-ups to the barrister John Smyth QC in the Church of England. From a BBC report:

Smyth was confronted about his conduct after the report compiled by Rev Mark Ruston and Rev David Fletcher.

It found Smyth identified pupils from leading public schools including Winchester College and took them to his home near Winchester in Hampshire, where he carried out lashings with a garden cane in his shed.

Now look at the invalid document these German and Swiss nazis created together. 10 November is the anniversary of the Kristallnacht:

Debian, denouncing people

 

The report about John Smyth QC goes on:

Smyth is said to have subjected his victims to traumatic physical, sexual, psychological and spiritual attacks, permanently marking their lives.

If you look at the way the Debianists attacked my family, they inflicted this on us and sustained these attacks ever since the death of my father. That is a level of intrusion that puts some of the rogue Debianists right up there with the paedophiles.

By chance, Judge Oulevey's father also died, giving us a list of family members. We can see that Judge Oulevey does not have a wife. As well as having the transgender sister Zoé Frédérique Oulevey, he has a brother, Xavier Oulevey who is also a jurist in the canton of Vaud. Clearly, conflicts of interest will arise every time his brother brings a case to the tribunal.

Michel Oulevey, Raymonde Oulevey-Chenuz, Richard Oulevey, Zoé Frédérique Oulevey, Christine Mayencourt, Xavier Oulevey, Laurence Oulevey, Jacques Oulevey, Ulla Oulevey, Cuérel, Dr Ratnam, Prof. Dufour, Fondation Prérisa, Lucens, Grandcour

 

On 1 November 2023, I was granted citizenship of Switzerland in the Canton of Vaud. When I look at these dirty men attacking my family after my father died, I see the behaviour of animals, not humans. It makes me question what it means to be a citizen of Switzerland, because I can't lower myself to be an animal like these men.

John Pocock

 

Why do these dirty men go to all this trouble to humiliate my family and I? Adrian von Bidder-Senn died on our wedding day. They don't want my presence to remind them that they may have some responsibility for those deaths. They couldn't care less about pushing more people to suicide if that is the price of covering up the last suicides.

The Bishop Accountability web site has a page about the experiences of Catholic whistleblowers and it is remarkably similar to what we see in free software "family" cults today:

Many of the individuals profiled below have experienced retaliation and grief in some form – defamation, job loss, career derailment, ostracization, pressure by superiors to admit to mental illness, and in at least one case, suicide.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Please see the rest of the JuristGate reports.

23 January, 2026 06:00PM

January 22, 2026

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Rewriting Git merge history, part 2

In part 1, we discovered the problem of rewriting git history in the presence of nontrivial merges. Today, we'll discuss the workaround I chose.

As I previously mentioned, and as Julia Evans' excellent data model document explains, a git commit is just a snapshot of a tree (suitably deduplicated by means of content hashes), a commit message and a (possibly empty) set of parents. So fundamentally, we don't really need to mess with diffs; if we can make the changes we want directly to the tree (well, technically, make a new tree that looks like what we want, and a new commit using that tree), we're good. (Diffs in git are, generally, just git diff looking at two trees and trying to make sense of it. This has the unfortunate result that there is no solid way of representing a rename; there are heuristics, but if you rename a file and change it in the same commit, they may fail and stuff like git blame or git log may be broken, depending on flags. Gerrit doesn't even seem to understand a no-change copy.)

In earlier related cases, I've taken this to the extreme by simply hand-writing a commit using git commit-tree. Create exactly the state that you want by whatever means, commit it in some dummy commit and then use that commit's tree with some suitable commit message and parent(s); voila. But it doesn't help us with history; while we can fix up an older commit in exactly the way we'd like, we also need the latter commits to have our new fixed-up commit as parent.
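
As a rough illustration of the hand-written-commit trick described above (these are not the exact commands used for this repository; $parent1, $parent2 and the branch name are placeholders):

# Get the working tree into exactly the state the new commit should have,
# then record it in a throwaway commit just to obtain the tree object.
git add -A
git commit -m 'throwaway commit holding the desired tree'

# Re-commit that tree with a message and parent(s) of our choosing;
# for a merge, pass -p twice.  $parent1/$parent2 are placeholder hashes.
new=$(git commit-tree -p "$parent1" -p "$parent2" \
        -m 'Merge with the conflict resolved the way we want' 'HEAD^{tree}')
git branch fixed-merge "$new"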

Thus, enter git filter-branch. git filter-branch comes with a suitable set of warnings about eating your repository and being deprecated (I never really figured out its supposed replacement git filter-repo, so I won't talk much about it), but it's useful when all else fails.

In particular, git filter-branch allows you to do arbitrary changes to the tree of a series of commits, updating the parent commit IDs as rewrites happen. So if you can express your desired changes in a way that's better than “run the editor” (or if you're happy running the editor and making the same edit manually 300 times!), you can just run that command over all commits in the entire branch (forgive me for breaking lines a bit):

git filter-branch -f --tree-filter \
  '! [ -f src/cluster.cpp ] || sed -i "s/if (mi.rank != 0)/if (mi.rank != 0
    \&\& mi.rank == rank())/" src/cluster.cpp' \
  665155410753978998c8080c813da660fc64bbfe^..cluster-master

This is suitably terrible. Remember, if we only did this for one commit, the change wouldn't be there in the next one (git diff would show that it was immediately reverted), so filter-branch needs to do this over and over again, once for each commit (tree) in the branch. And I wanted multiple fixups, so I had a bunch of these; some of them were as simple as “copy this file from /tmp” and some were shell scripts that did things like running clang-format.

You can do similar things for commit messages; at some point, I figured I should write “cluster” (the official name for the branch) and not “cluster-master” (my local name) in the merge messages, so I could just do

git filter-branch \
  --commit-msg-filter 'sed s/cluster-master/cluster/g' \
  665155410753978998c8080c813da660fc64bbfe^..cluster-master

I also did a bunch of them to fix up my email address (GIT_COMMITTER_EMAIL wasn't properly set), although I cannot honestly remember whether I used --env-filter or something else. Perhaps that was actually with git rebase and `-r --exec 'git commit --amend --no-edit --author …'` or similar. There are many ways to do ugly things. :-)

Eventually, I had the branch mostly in a state where I thought it would be ready for review, but after uploading to GitHub, one reviewer commented that some of my merges against master were commits that didn't exist in master. Huh? That's… surprising.

It took a fair bit of digging to figure out what had happened: git filter-branch had rewritten some commits that it didn't actually have to; the merge sources from upstream. This is normally harmless, since git hashes are deterministic, but these commits were signed by the author! And filter-branch (or perhaps fast-export, upon which it builds?) generally assumes that it can't sign stuff with other people's keys, so it just strips the signatures, deeming that better than having invalid ones sitting around. Now, of course, these commit signatures would still be valid since we didn't change anything, but evidently, filter-branch doesn't have any special code for that.

Removing an object like this (a “gpgsig” attribute, it seems) changes the commit hash, which is where the phantom commits came from. I couldn't get filter-branch to turn it off… but again, parents can be freely changed, diffs don't exist anyway. So I wrote a little script that took in parameters suitable for git commit-tree (mostly the parent list), rewrote known-bad parents to known-good parents, gave the script to git filter-branch --commit-filter, and that solved the problem. (I guess --parent-filter would also have worked; I don't think I saw it in the man page at the time.)
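
A sketch of what such a commit filter can look like (a reconstruction rather than the actual script; the all-zeros and all-ones IDs are placeholders for the unwanted re-hashed parent and the original signed upstream commit):

# With --commit-filter, filter-branch calls us with arguments of the form
# "<tree> -p <parent> -p <parent> ..." and the commit message on stdin,
# so we can swap a known-bad parent for a known-good one and commit as usual.
# (The 000.../111... object IDs below are placeholders.)
git filter-branch -f --commit-filter '
    args=""
    for a in "$@"; do
        if [ "$a" = "0000000000000000000000000000000000000000" ]; then
            a="1111111111111111111111111111111111111111"
        fi
        args="$args $a"
    done
    git commit-tree $args
' 665155410753978998c8080c813da660fc64bbfe^..cluster-master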

So, well, I won't claim this is an exercise in elegance. (Perhaps my next adventure will be figuring out how this works in jj, which supposedly has conflicts as more of a first-class concept.) But it got the job done in a couple of hours after fighting with rebase for a long time, the PR was reviewed, and now the Stockfish cluster branch is a little bit more alive.

22 January, 2026 07:45AM

January 21, 2026

hackergotchi for Evgeni Golov

Evgeni Golov

Validating cloud-init configs without being root

Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.

And today we're gonna generate YAML from ERB, what could possibly go wrong?!

Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.

The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.

Enter cloud-init schema, or so I thought. Turns out running cloud-init schema is rather broken without root privileges, as it tries to load a ton of information from the running system. This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself. I've not found a way to disable that behavior.
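
For reference, the sort of invocation I had in mind looks roughly like this (the file name is made up, and the exact flags may differ between cloud-init versions):

# Validate a rendered cloud-config against cloud-init's schema.
# rendered-cloud-config.yaml is a stand-in for whatever the template produced.
# As described above, without root this trips over reading state from the
# running system before it ever gets to validating the file.
cloud-init schema --config-file rendered-cloud-config.yaml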

Luckily, I know Python.

Enter evgeni-knows-better-and-can-write-python:

#!/usr/bin/env python3

import sys
from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)

The canonical version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.

The hardest part was to understand the validate_cloudconfig_file API, as it will sometimes raise a SchemaValidationError, sometimes a RuntimeError and sometimes just return False. No idea why. But the above just turns it into a couple of printed lines and a non-zero exit code, unless of course there are no problems, in which case you get peaceful silence.
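
A minimal usage sketch, assuming the script above is saved as validate-cloud-config.py (a name picked purely for illustration):

# Silence and exit code 0 mean the cloud-config passed schema validation;
# otherwise the error gets printed and the exit code is non-zero.
python3 validate-cloud-config.py rendered-cloud-config.yaml \
    && echo "cloud-config looks valid"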

21 January, 2026 07:42PM by evgeni

January 19, 2026

hackergotchi for Jonathan Dowland

Jonathan Dowland

FOSDEM 2026

I'm going to FOSDEM 2026!

I'm presenting in the Containers dev room. My talk is Java Memory Management in Containers and it's scheduled as the first talk on the first day. I'm the warm-up act!

The Java devroom has been a stalwart at FOSDEM since 2004 (sometimes in other forms), but sadly there's no Java devroom this year. There's a story about that, but it's not mine to tell.

Please recommend to me any interesting talks! Here's a few that caught my eye:

Debian/related:

Containers:

Research:

Other:

19 January, 2026 02:12PM

Craig Small

WordPress 6.9 for Debian

The Debian packages for WordPress version 6.9 were uploaded today. The upstream website says you can add notes and move graphics around, making for an easier and nicer editing experience.

I’m not personally sure about that; generally, if things change too much people get annoyed, but it seems at least the initial simple stuff has stayed the same.

19 January, 2026 09:57AM by dropbear

Russell Coker

Furilabs FLX1s

The Aim

I have just got a Furilabs FLX1s [1] which is a phone running a modified version of Debian. I want to have a phone that runs all apps that I control and can observe and debug. Android is very good for what it does and there are security focused forks of Android which have a lot of potential, but for my use a Debian phone is what I want.

The FLX1s is not going to be my ideal phone; I am evaluating it for use as a daily-driver until a phone that meets my ideal criteria is built. In this post I aim to provide information to potential users about what it can do, how it does it, and how to get the basic functions working. I also evaluate how well it meets my usage criteria.

I am not anywhere near an average user. I don’t think an average user would ever even see one unless a more technical relative showed one to them. So while this phone could be used by an average user I am not evaluating it on that basis. But of course the features of the GUI that make a phone usable for an average user will allow a developer to rapidly get past the beginning stages and into more complex stuff.

Features

The Furilabs FLX1s [1] is a phone that is designed to run FuriOS, which is a slightly modified version of Debian. The purpose of this is to run Debian instead of Android on a phone. It has switches to disable the camera, phone communication, and microphone (similar to the Librem 5), but the one to disable phone communication doesn’t turn off Wifi; the only other phone I know of with such switches is the Purism Librem 5.

It has a 720*1600 display which is only slightly better than the 720*1440 display in the Librem 5 and PinePhone Pro. This doesn’t compare well to the OnePlus 6 from early 2018 with 2280*1080 or the Note9 from late 2018 with 2960*1440 – which are both phones that I’ve run Debian on. The current price is $US499 which isn’t that good when compared to the latest Google Pixel series: a Pixel 10 costs $US649, has a 2424*1080 display, and has 12G of RAM while the FLX1s only has 8G. Another annoying thing is how rounded the corners are; it seems that round corners that cut off the content are standard practice nowadays. In my collection of phones the latest one I found with hard right angles on the display was a Huawei Mate 10 Pro, which was released in 2017. The corners are rounder than the Note 9’s, which annoys me because the screen is not high resolution by today’s standards, so losing the corners matters.

The default installation is Phosh (the GNOME shell for phones) and it is very well configured. Based on my experience with older phone users I think I could give a phone with this configuration to a relative in the 70+ age range who has minimal computer knowledge and they would be happy with it. Additionally I could set it up to allow ssh login, and instead of going through the phone support thing of trying to describe every GUI setting to click on (based on a web page describing menus for the version of Android they are running), I could just ssh in and run diff on the .config directory to find out what they changed. Furilabs have done a very good job of setting up the default configuration; while Debian developers deserve a lot of credit for packaging the apps, the Furilabs people have chosen a good set of default apps to install to get it going and appear to have made some noteworthy changes to some of them.

Droidian

The OS is based on Android drivers (using the same techniques as Droidian [2]) and the storage device has the huge number of partitions you expect from Android as well as a 110G Ext4 filesystem for the main OS.

The first issue with the Droidian approach of using an Android kernel and containers for user space code to deal with drivers is that it doesn’t work that well. There are 3 D state processes (uninterruptible sleep – which usually means a kernel bug if the process remains in that state) after booting and doing nothing special. My tests running Droidian on the Note 9 also had D state processes; in this case they are D state kernel threads (I can’t remember whether the Note 9 had regular processes or kernel threads stuck in D state). It is possible for a system to have full functionality in spite of some kernel threads in D state, but generally it’s a symptom of things not working as well as you would hope.
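
If you want to check for this on your own device, one way to list D state tasks is something like the following (standard procps tooling, nothing FuriOS-specific):

# Show all tasks currently in uninterruptible sleep (state D);
# kernel threads show up with their names in square brackets.
ps -eo state,pid,comm | awk '$1 == "D"'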

The design of Droidian is inherently fragile. You use a kernel and user space code from Android and then use Debian for the rest. You can’t do everything the Android way (with the full OS updates etc) and you also can’t do everything the Debian way. The TOW Boot functionality in the PinePhone Pro is really handy for recovery [3]; it allows the internal storage to be accessed as a USB mass storage device. The full Android setup with ADB has some OK options for recovery, but part Android and part Debian has fewer options. While it probably is technically possible to do the same things in regard to OS repair and reinstall, the fact that it’s different from most other devices means that fixes can’t be done in the same way.

Applications

GUI

The system uses Phosh and Phoc, the GNOME system for handheld devices. It’s a very different UI from Android; I prefer Android but it is usable with Phosh.

IM

Chatty works well for Jabber (XMPP) in my tests. It supports Matrix which I didn’t test because I don’t desire the same program doing Matrix and Jabber and because Matrix is a heavy protocol which establishes new security keys for each login so I don’t want to keep logging in on new applications.

Chatty also does SMS but I couldn’t test that without the SIM caddy.

I use Nheko for Matrix which has worked very well for me on desktops and laptops running Debian.

Email

I am currently using Geary for email. It works reasonably well but is lacking proper management of folders, so I can’t just subscribe to the important email on my phone so that bandwidth isn’t wasted on less important email (there is a GNOME gitlab issue about this – see the Debian Wiki page about Mobile apps [4]).

Music

Music playing isn’t a noteworthy thing for a desktop or laptop, but a good music player is important for phone use. The Lollypop music player generally does everything you expect along with support for all the encoding formats including FLAC – a major limitation of most Android music players seems to be lack of support for some of the common encoding formats. Lollypop has its controls for pause/play and going forward and backward one track on the lock screen.

Maps

The installed map program is gnome-maps which works reasonably well. It gets directions via the Graphhopper API [5]. One thing we really need is a FOSS replacement for Graphhopper in GNOME Maps.

Delivery and Unboxing

I received my FLX1s on the 13th of Jan [1]. I had paid for it on the 16th of Oct but hadn’t received the email with the confirmation link, so the order had been put on hold. But after I contacted support about that on the 5th of Jan they rapidly got it to me, which was good. They also gave me a free case and screen protector to apologise. I don’t usually use screen protectors, but in this case one might be useful as the edges of the case don’t even extend 0.5mm above the screen, so if the phone falls face down the case won’t help much.

When I got it there was an open space at the bottom where the caddy for SIMs is supposed to be. So I couldn’t immediately test VoLTE functionality. The contact form on their web site wasn’t working when I tried to report that and the email for support was bouncing.

Bluetooth

As a test of Bluetooth I connected it to my Nissan LEAF, which worked well for playing music, and I connected it to several Bluetooth headphones. My Thinkpad running Debian/Trixie doesn’t connect to the LEAF or to headphones that have worked on previous laptops running Debian and Ubuntu. A friend’s laptop running Debian/Trixie also wouldn’t connect to the LEAF, so I suspect a bug in Trixie; I need to spend more time investigating this.

Wifi

Currently 5GHz wifi doesn’t work, this is a software bug that the Furilabs people are working on. 2.4GHz wifi works fine. I haven’t tested running a hotspot due to being unable to get 4G working as they haven’t yet shipped me the SIM caddy.

Docking

This phone doesn’t support DP Alt-mode or Thunderbolt docking so it can’t drive an external monitor. This is disappointing, Samsung phones and tablets have supported such things since long before USB-C was invented. Samsung DeX is quite handy for Android devices and that type feature is much more useful on a device running Debian than on an Android device.

Camera

The camera works reasonably well on the FLX1s. Until recently for the Librem 5 the camera didn’t work and the camera on my PinePhone Pro currently doesn’t work. Here are samples of the regular camera and the selfie camera on the FLX1s and the Note 9. I think this shows that the camera is pretty decent. The selfie looks better and the front camera is worse for the relatively close photo of a laptop screen – taking photos of computer screens is an important part of my work but I can probably work around that.

I wasn’t assessing this camera to find out if it’s great, just to find out whether I would have the sorts of problems I had before, and it just worked. The Samsung Galaxy Note series of phones has always had decent specs including good cameras, so even though the Note 9 is old, comparing well against it is a respectable result. The lighting was poor for all photos.

FLX1s


Note 9


Power Use

In 93 minutes of having the PinePhone Pro, Librem 5, and FLX1s online with open ssh sessions from my workstation, the PinePhone Pro went from 100% battery to 26%, the Librem 5 went from 95% to 69%, and the FLX1s went from 100% to 99%. The battery discharge rates were reported as 3.0W, 2.6W, and 0.39W respectively. Based on a 16.7Wh battery and the reported 0.39W drain, 93 minutes of use should have consumed close to 4% of the battery (0.39W over 1.55 hours is about 0.6Wh, roughly 3.6% of 16.7Wh), but in any case all measurements make it clear that the FLX1s will have a much longer battery life. This matches the crude measurement of just putting my fingers on the phones and feeling the temperature: the FLX1s felt cool and the others felt hot.

The PinePhone Pro and the Librem 5 have an optional “Caffeine mode” which I enabled for this test; without it enabled the phone goes into a sleep state and disconnects from Wifi. So those phones would use much less power with caffeine mode disabled, but they also couldn’t give fast responses to notifications etc. I found the option to enable a Caffeine mode switch on the FLX1s, but the power use was reported as being the same both with and without it.

Charging

One problem I found with my phone is that in every case it takes 22 seconds to negotiate power. Even when using straight USB charging (no BC or PD) it doesn’t draw any current for 22 seconds. When I connect it, it will stay at 5V, varying between 0W and 0.1W (current rounded off to zero), for 22 seconds or so and then start charging. After the 22 second delay the phone will make the tick sound indicating that it’s charging and the power meter will show that it’s drawing some current.

I added the table from my previous post about phone charging speed [6] with an extra row for the FLX1s. For charging from my PC USB ports the results were the worst ever: the port that does BC did not work at all, it kept looping trying to negotiate, and after each 22 second negotiation delay the port would turn off. The non-BC port gave only 2.4W, which matches the 2.5W given by the spec for a “High-power device”, which is what that port is designed to give. In a discussion on the Purism forum about the Librem 5 charging speed one of their engineers told me that the reason their phone would draw 2A from that port was that the cable was identifying itself as a USB-C port rather than a “High-power device” port. But for some reason, out of the 7 phones I tested, the FLX1s and the One Plus 6 are the only ones to limit themselves to what the port is apparently supposed to provide. Also the One Plus 6 charges slowly on every power supply so I don’t know if it is obeying the spec or just sucking.

On a cheap AliExpress charger the FLX1s gets 5.9V and on a USB battery it gets 5.8V. Out of all 42 combinations of device and charger I tested these were the only ones to involve more than 5.1V but less than 9V. I welcome comments suggesting an explanation.

The case that I received has a hole for the USB-C connector that isn’t wide enough for the plastic surrounds on most of my USB-C cables (including the Dell dock). Also to make a connection requires a fairly deep insertion (deeper than the One Plus 6 or the Note 9). So without adjustment I have to take the case off to charge it. It’s no big deal to adjust the hole (I have done it with other cases) but it’s an annoyance.

Phone       | Top z640        | Bottom Z640     | Monitor         | Ali Charger     | Dell Dock       | Battery         | Best            | Worst
FLX1s       | FAIL            | 5.0V 0.49A 2.4W | 4.8V 1.9A 9.0W  | 5.9V 1.8A 11W   | 4.8V 2.1A 10W   | 5.8V 2.1A 12W   | 5.8V 2.1A 12W   | 5.0V 0.49A 2.4W
Note9       | 4.8V 1.0A 5.2W  | 4.8V 1.6A 7.5W  | 4.9V 2.0A 9.5W  | 5.1V 1.9A 9.7W  | 4.8V 2.1A 10W   | 5.1V 2.1A 10W   | 5.1V 2.1A 10W   | 4.8V 1.0A 5.2W
Pixel 7 pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W  | 9.1V 1.3A 12W   | 9.1V 1.2A 11W   | 4.9V 1.8A 8.7W  | 9.0V 1.3A 12W   | 9.1V 1.3A 12W   | 4.9V 0.80A 4.2W
Pixel 8     | 4.7V 1.2A 5.4W  | 4.7V 1.5A 7.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   | 9.1V 2.7A 24W   | 4.7V 1.2A 5.4W
PPP         | 4.7V 1.2A 6.0W  | 4.8V 1.3A 6.8W  | 4.9V 1.4A 6.6W  | 5.0V 1.2A 5.8W  | 4.9V 1.4A 5.9W  | 5.1V 1.2A 6.3W  | 4.8V 1.3A 6.8W  | 5.0V 1.2A 5.8W
Librem 5    | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W  | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W   | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus6    | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best        | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   |                 |

Conclusion

The Furilabs support people are friendly and enthusiastic but my customer experience wasn’t ideal. It was good that they could quickly respond to my missing order status and the missing SIM caddy (which I still haven’t received but believe is in the mail) but it would be better if such things just didn’t happen.

The phone is quite user friendly and could be used by a novice.

I paid $US577 for the FLX1s which is $AU863 by today’s exchange rates. For comparison I could get a refurbished Pixel 9 Pro Fold for $891 from Kogan (the major Australian mail-order company for technology) or a refurbished Pixel 9 Pro XL for $842. The Pixel 9 series has security support until 2031 which is probably longer than you can expect a phone to be used without being broken. So a phone with a much higher resolution screen that’s only one generation behind the latest high end phones and is refurbished will cost less. For a brand new phone a Pixel 8 Pro which has security updates until 2030 costs $874 and a Pixel 9A which has security updates until 2032 costs $861.

Doing what the Furilabs people have done is not a small project. It’s a significant amount of work and the prices of their products need to cover that. I’m not saying that the prices are bad, just that economies of scale and the large quantity of older stock make the older Google products quite good value for money. The new Pixel phones of the latest models are unreasonably expensive. The Pixel 10 is selling new from Google for $AU1,149 which I consider a ridiculous price that I would not pay given the market for used phones etc. If my only choice was between paying $1,149 and using a “feature phone” I’d pay the $1,149. But the FLX1s for $863 is a much better option for me. If all I had to choose from was a new Pixel 10 or a FLX1s for my parents I’d get them the FLX1s.

For a FOSS developer a FLX1s could be a mobile test and development system which could be lent to a relative when their main phone breaks and the replacement is on order. It seems to be fit for use as a commodity phone. Note that I give this review on the assumption that SMS and VoLTE will just work, I haven’t tested them yet.

The UI on the FLX1s is functional and easy enough for a new user while allowing an advanced user to do the things they desire. I prefer the Android style and the Plasma Mobile style is closer to Android than Phosh is, but changing it is something I can do later. Generally I think that the differences between UIs matter more when on a desktop environment that could be used for more complex tasks than on a phone which limits what can be done by the size of the screen.

I am comparing the FLX1s to Android phones on the basis of what technology is available. But most people who would consider buying this phone will compare it to the PinePhone Pro and the Librem 5 as they have similar uses. The FLX1s beats both those phones handily in terms of battery life and of having everything just work. But it has the most non free software of the three and the people who want the $2000 Librem 5 that’s entirely made in the US won’t want the FLX1s.

This isn’t the destination for Debian based phones, but it’s a good step on the way to it and I don’t think I’ll regret this purchase.

19 January, 2026 06:43AM by etbe

Vincent Bernat

RAID 5 with mixed-capacity disks on Linux

Standard RAID solutions waste space when disks have different sizes. Linux software RAID with LVM uses the full capacity of each disk and lets you grow storage by replacing one or two disks at a time.1

We start with four disks of equal size:

$ lsblk -Mo NAME,TYPE,SIZE
NAME TYPE  SIZE
vda  disk  101M
vdb  disk  101M
vdc  disk  101M
vdd  disk  101M

We create one partition on each of them:

$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdb
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdc
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdd
$ lsblk -Mo NAME,TYPE,SIZE
NAME   TYPE  SIZE
vda    disk  101M
└─vda1 part  100M
vdb    disk  101M
└─vdb1 part  100M
vdc    disk  101M
└─vdc1 part  100M
vdd    disk  101M
└─vdd1 part  100M

We set up a RAID 5 device by assembling the four partitions:2

$ mdadm --create /dev/md0 --level=raid5 --bitmap=internal --raid-devices=4 \
>   /dev/vda1 /dev/vdb1 /dev/vdc1 /dev/vdd1
$ lsblk -Mo NAME,TYPE,SIZE
    NAME          TYPE    SIZE
    vda           disk    101M
┌┈▶ └─vda1        part    100M
┆   vdb           disk    101M
├┈▶ └─vdb1        part    100M
┆   vdc           disk    101M
├┈▶ └─vdc1        part    100M
┆   vdd           disk    101M
└┬▶ └─vdd1        part    100M
 └┈┈md0           raid5 292.5M
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vdc1[2] vdb1[1] vda1[0]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

We use LVM to create logical volumes on top of the RAID 5 device.

$ pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created.
$ vgcreate data /dev/md0
  Volume group "data" successfully created
$ lvcreate -L 100m -n bits data
  Logical volume "bits" created.
$ lvcreate -L 100m -n pieces data
  Logical volume "pieces" created.
$ mkfs.ext4 -q /dev/data/bits
$ mkfs.ext4 -q /dev/data/pieces
$ lsblk -Mo NAME,TYPE,SIZE
    NAME          TYPE    SIZE
    vda           disk    101M
┌┈▶ └─vda1        part    100M
┆   vdb           disk    101M
├┈▶ └─vdb1        part    100M
┆   vdc           disk    101M
├┈▶ └─vdc1        part    100M
┆   vdd           disk    101M
└┬▶ └─vdd1        part    100M
 └┈┈md0           raid5 292.5M
    ├─data-bits   lvm     100M
    └─data-pieces lvm     100M
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   1   2   0 wz--n- 288.00m 88.00m

This gives us the following setup:

One RAID 5 device built from four partitions from four disks of equal capacity. The RAID device is part of an LVM volume group with two logical volumes.
RAID 5 setup with disks of equal capacity

We replace /dev/vda with a bigger disk. We add it back to the RAID 5 array after copying the partitions from /dev/vdb:

$ cat /proc/mdstat
md0 : active (auto-read-only) raid5 vdb1[1] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vda /dev/vdb
$ sgdisk --randomize-guids /dev/vda
$ mdadm --manage /dev/md0 --add /dev/vda1
$ cat /proc/mdstat
md0 : active raid5 vda1[5] vdb1[1] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
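
One step the transcript glosses over: the array must finish rebuilding onto the new /dev/vda1 before the next disk is pulled, because a degraded RAID 5 cannot lose a second member. A minimal sketch of how one might check that with standard mdadm facilities (device names match the example above):

# Watch recovery progress until the array shows [4/4] [UUUU] again
$ watch -n 5 cat /proc/mdstat
# Or block until any resync, recovery, or reshape on md0 has finished
$ mdadm --wait /dev/md0

The same waiting applies after every later --add and --grow step, before touching another disk.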

We cannot use the additional capacity yet: data placed on the extra space of /dev/vda alone would have no redundancy and would not survive the loss of that disk. We need a second larger disk, so we also replace /dev/vdb:

$ cat /proc/mdstat
md0 : active (auto-read-only) raid5 vda1[5] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdb /dev/vdc
$ sgdisk --randomize-guids /dev/vdb
$ mdadm --manage /dev/md0 --add /dev/vdb1
$ cat /proc/mdstat
md0 : active raid5 vdb1[6] vda1[5] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

We create a new RAID 1 array by using the free space on /dev/vda and /dev/vdb:

$ sgdisk --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --new=0:0:0 -t 0:fd00 /dev/vdb
$ mdadm --create /dev/md1 --level=raid1 --bitmap=internal --raid-devices=2 \
>   /dev/vda2 /dev/vdb2
$ cat /proc/mdstat
md1 : active raid1 vdb2[1] vda2[0]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid5 vdb1[6] vda1[5] vdd1[4] vdc1[2]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

We add /dev/md1 to the volume group:

$ pvcreate /dev/md1
  Physical volume "/dev/md1" successfully created.
$ vgextend data /dev/md1
  Volume group "data" successfully extended
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 384.00m 184.00m
$  lsblk -Mo NAME,TYPE,SIZE
       NAME          TYPE    SIZE
       vda           disk    201M
   ┌┈▶ ├─vda1        part    100M
┌┈▶┆   └─vda2        part    100M
┆  ┆   vdb           disk    201M
┆  ├┈▶ ├─vdb1        part    100M
└┬▶┆   └─vdb2        part    100M
 └┈┆┈┈┈md1           raid1  98.9M
   ┆   vdc           disk    101M
   ├┈▶ └─vdc1        part    100M
   ┆   vdd           disk    101M
   └┬▶ └─vdd1        part    100M
    └┈┈md0           raid5 292.5M
       ├─data-bits   lvm     100M
       └─data-pieces lvm     100M

This gives us the following setup:3

One RAID 5 device built from four partitions and one RAID 1 device built from two partitions. The two last disks are smaller. The two RAID devices are part of a single LVM volume group.
Setup mixing both RAID 1 and RAID 5

We extend our capacity further by replacing /dev/vdc:

$ cat /proc/mdstat
md1 : active (auto-read-only) raid1 vda2[0] vdb2[1]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active (auto-read-only) raid5 vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdc /dev/vdb
$ sgdisk --randomize-guids /dev/vdc
$ mdadm --manage /dev/md0 --add /dev/vdc1
$ cat /proc/mdstat
md1 : active (auto-read-only) raid1 vda2[0] vdb2[1]
      101312 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid5 vdc1[7] vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

Then, we convert /dev/md1 from RAID 1 to RAID 5:

$ mdadm --grow /dev/md1 --level=5 --raid-devices=3 --add /dev/vdc2
mdadm: level of /dev/md1 changed to raid5
mdadm: added /dev/vdc2
$ cat /proc/mdstat
md1 : active raid5 vdc2[2] vda2[0] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid5 vdc1[7] vda1[5] vdd1[4] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ pvresize /dev/md1
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 482.00m 282.00m

This gives us the following layout:

Two RAID 5 devices built from four disks of different sizes. The last disk is smaller and contains only one partition, while the others have two partitions: one for /dev/md0 and one for /dev/md1. The two RAID devices are part of a single LVM volume group.
RAID 5 setup with mixed-capacity disks using partitions and LVM

We further extend our capacity by replacing /dev/vdd:

$ cat /proc/mdstat
md0 : active (auto-read-only) raid5 vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid5 vda2[0] vdc2[2] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ sgdisk --replicate=/dev/vdd /dev/vdc
$ sgdisk --randomize-guids /dev/vdd
$ mdadm --manage /dev/md0 --add /dev/vdd1
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid5 vda2[0] vdc2[2] vdb2[1]
      202624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

We grow the second RAID 5 array:

$ mdadm --grow /dev/md1 --raid-devices=4 --add /dev/vdd2
mdadm: added /dev/vdd2
$ cat /proc/mdstat
md0 : active raid5 vdd1[4] vda1[5] vdc1[7] vdb1[6]
      299520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid5 vdd2[3] vda2[0] vdc2[2] vdb2[1]
      303936 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
$ pvresize /dev/md1
$ vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   2   0 wz--n- 580.00m 380.00m
$ lsblk -Mo NAME,TYPE,SIZE
       NAME          TYPE    SIZE
       vda           disk    201M
   ┌┈▶ ├─vda1        part    100M
┌┈▶┆   └─vda2        part    100M
┆  ┆   vdb           disk    201M
┆  ├┈▶ ├─vdb1        part    100M
├┈▶┆   └─vdb2        part    100M
┆  ┆   vdc           disk    201M
┆  ├┈▶ ├─vdc1        part    100M
├┈▶┆   └─vdc2        part    100M
┆  ┆   vdd           disk    301M
┆  └┬▶ ├─vdd1        part    100M
└┬▶ ┆  └─vdd2        part    100M
 ┆  └┈┈md0           raid5 292.5M
 ┆     ├─data-bits   lvm     100M
 ┆     └─data-pieces lvm     100M
 └┈┈┈┈┈md1           raid5 296.8M

You can continue by replacing each disk one by one using the same steps. ♾️
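
The free space in the volume group only becomes useful once the logical volumes grow into it. As a hedged sketch using the bits and pieces volumes created earlier (the sizes are illustrative, adjust them for a real system), lvextend with -r grows both the volume and its ext4 filesystem:

# Grow the "bits" logical volume by 100 MiB and resize its filesystem in one step
$ lvextend -r -L +100m /dev/data/bits
# Or hand all remaining free extents in the volume group to "pieces"
$ lvextend -r -l +100%FREE /dev/data/pieces

On a Debian system one would typically also record the new arrays in /etc/mdadm/mdadm.conf (from the output of mdadm --detail --scan) and run update-initramfs -u so that both md0 and md1 are assembled at boot.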


  1. This is the same approach Synology uses for its Hybrid RAID and what the OpenMediaVault community calls a “Sliced Hybrid RAID.” ↩︎

  2. Write-intent bitmaps speed up recovery of the RAID array after a power failure by marking unsynchronized regions as dirty. They have an impact on performance, but I did not measure it myself. An alternative is to use a partial parity log with --consistency-policy=ppl. ↩︎

  3. In the lsblk output, /dev/md1 appears unused because the logical volumes do not use any space from it yet. Once you create more logical volumes or extend them, lsblk will reflect the usage. ↩︎

19 January, 2026 05:49AM by Vincent Bernat

Dima Kogan

mrcal 2.5 released!

mrcal 2.5 is out: the release notes. Once again, this is mostly a bug-fix release en route to the big new features coming in 3.0.

One cool thing is that these tools have now matured enough to no longer be considered experimental. They have been used with great success in lots of contexts across many different projects and organizations. Some highlights:

  • I've calibrated extremely wide lenses
  • and extremely narrow lenses
  • and joint systems containing many different kinds of lenses
  • with lots of cameras at the same time. The biggest single joint calibration I've done to date had 10 cameras, but I'll almost certainly encounter bigger systems in the future
  • mrcal has been used to process both visible and thermal cameras
  • The new triangulated-feature capability has been used in a structure-from-motion context to compute the world geometry on-line.
  • mrcal has been used with weird experimental setups employing custom calibration objects and single-view solves
  • mrcal has calibrated joint camera-LIDAR systems
  • and joint camera-IMU systems
  • Lots of students use mrcal as part of PhotonVision, the toolkit used by teams in the FIRST Robotics Competition

Some of the above is new, and not yet fully polished and documented and tested, but it works.

In mrcal 2.5, most of the implementation of some new big features is written and committed, but it's still incomplete. The new stuff is there, but is lightly tested and documented. This will be completed eventually in mrcal 3.0:

  • Cross-reprojection uncertainty, to be able to perform full calibrations with a splined model and without a chessboard. mrcal-show-projection-uncertainty --method cross-reprojection-rrp-Jfp is available today, and works in the usual moving-chessboard-stationary camera case. Fully boardless coming later.
  • More general view of uncertainty and diffs. I want to support extrinsics-only and/or intrinsics computations-only in lots of scenarios. Uncertainty in point solves is already available in some conditions, for instance if the points are fixed. New mrcal-show-stereo-pair-diff tool reports an extrinsics+intrinsics diff between two calibrations of a stereo pair; experimental analyses/extrinsics-stability.py tool reports an extrinsics-only diff. These are in contrast to the intrinsics-only uncertainty and diffs in the existing mrcal-show-projection-diff and mrcal-show-projection-uncertainty tools. Some documentation in the uncertainty and differencing pages.
  • Implicit point solves, using the triangulation routines in the optimization cost function. Should produce much more efficient structure-from-motion solves. This is all the "triangulated-features" stuff. The cost function is primarily built around _mrcal_triangulated_error(). This is demoed in test/test-sfm-triangulated-points.py. And I've been using _mrcal_triangulated_error() in structure-from-motion implementations within other optimization routines.

mrcal is quite good already, and will be even better in the future. Try it today!

19 January, 2026 12:00AM by Dima Kogan