Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."


January 14, 2026

Swiss JuristGate

Milenium Crypto, Atreide Ventures SAS, Badra Fettache linked to Mathieu Elias Parreaux

Financial markets are undergoing rapid change at the moment and mum-and-dad investors are looking for advice and inspiration.

A YouTube channel has appeared using the name MileniumCrypto (YouTube link) to help people. The channel publishes French-language videos recorded by an anonymous male voice.

The channel has a link to a French web site, https://www.milenium-crypto.fr, which links to another web site, https://www.atreide-ventures.fr. That web site, in turn, is associated with a French company using the name Atreide Ventures SAS (SIREN 931 427 561).

The company was founded in July 2024 using another French company, Hamilton Invest (SIREN 908 634 793) as a stooge director.

Hamilton Invest itself has a single director, Marlon (Sean) Hamilton-Bekhti, born in January 1990. Mr Hamilton-Bekhti is associated with at least seven French companies and two British companies, Light Keys Conciergerie Limited and Lowlands Inc Limited. The British records at Companies House tell us he is a French citizen domiciled in Switzerland.

There is another investment company with exactly the same name operating in Hong Kong and the Philippines. The web site https://atreidesventures.co/ has been registered since 5 October 2023. Their web site has a picture of their directors and they appear to be a completely different group of people. It looks like the French scam company has chosen to use the name of an existing business to hide itself behind the MileniumCrypto brand.

For the first nine months the business was trading, nobody sending investments to the company had any way of knowing it was connected to the Parreaux community.

Approximately nine months later, in March 2025, the French Atreide Ventures SAS company removed the first stooge director, Hamilton Invest, and replaced it with Didier Simon René Parreaux, born January 1960.

We found him living in Thonex, Switzerland and linked to the company Hach Lange Sàrl. Here we found a photo of him. We presume he is related to Mathieu Elias Parreaux (bankrupt and banned), Hodan Parreaux-Hussein, Marla Sofia Parreaux and Arnaud Parreaux.

Didier Parreaux only remained as a director for three months. On 16 July 2025, he was removed and replaced by two new directors, Marla Sofia Parreaux and Badra Fettache, born in November 1973.

On 10 October 2025, the scam company Atreide Ventures SAS changed its business address from the French village of Gaillard, on the Swiss border, to the address 7 impasse Paquet-Merel, 69009, Lyon. Lyon is the global headquarters of Interpol.

The new address appears to be a residential building and the French residence of Madame Badra Fettache.

Badra Fettache has also operated a Swiss business, ASC Consulting, in Geneva. More remarkably, when Mathieu Elias Parreaux became bankrupt, Badra took over control of one of his other companies, Polytek SA.

We found that Badra Fettache is also listed as a former employee of the city of Lyon.

Compare the man's voice in the milenium-crypto videos on YouTube to the recordings of Mathieu Parreaux on Radio Lac.

milenium-crypto is really Mathieu Parreaux asking more French victims to advance money into another Ponzi scheme.

On 24 October 2025, the French Autorité des Marchés Financiers published an official warning about the web site atreide-ventures.fr.

Keen observers of the scandal watched all the videos and found a moment where Parreaux's desktop is visible, with his name on the screen too.

Cryptocurrency is a great concept for specialists but for everybody else, it is simply a new way to get scammed. Beware.

Remember, Mathieu Elias Parreaux is a Swiss jurist and he started his previous business, Parreaux, Thiébaud & Partners, while working as the greffier (court clerk) in the Tribunal of Monthey, Canton Valais.

The Swiss financial regulator knew about his activities for five years before they shut him down.

The French authorities have published a warning about his business but they did not shut the business down. The business is still trading and publishing videos every few days.

Please see the rest of the JuristGate reports.

14 January, 2026 10:30PM

Fancy Concept, Marla Sofia Parreaux & cryptocurrency scandals

The bankruptcy of Mathieu Parreaux made public his date of birth (11 November 1992) and his home address at Allée des Roseaux 1, Châtelaine 1219, Switzerland. He is a Swiss citizen originating from the commune of La Brévine, Canton Neuchatel.

Searching the address in public business records, we find a sole trader using the business name Fancy Concept. The business is at the same address and the proprietor is Marla Sofia Parreaux. Marla's date of birth appears in another public business register as August 1997. Marla is also linked to addresses in France, in both Gaillard and Lyon Vaise, for a business Atreide Ventures SAS (931 427 561) which is a front for Milenium Crypto.

Based on these dates of birth, people have speculated whether Marla is the wife, sister or cousin of ringleader Mathieu Parreaux.

We could not find any public web site for Fancy Concept in Switzerland. We found various other businesses around Europe using exactly the same name, including Fancy Concept in Bari, Italy and Fancy Concept in Sterrebeek, Belgium.

What was the purpose of creating this business? Did the Parreaux community hope to benefit from the brand created by other businesses using the same generic name? Or did they hope to be obfuscated by the very generic name and the presence of many similar names in search results?

With a very generic business name like this, it could just as easily be selling women's products, cryptocurrencies or even both at the same time.

Mathieu Parreaux was subject to a bankruptcy procedure in 2023. Marla Parreaux began the new business Fancy Concept on 26 September 2024. Did this business provide a way for Mathieu Parreaux to continue trading under a business registered to his wife?

Atreide Ventures was created in France at almost exactly the same time but using other directors. Marla Parreaux was only added as a director in 2025. When Atreide Ventures was created, they stated the business objective:

Activités auxiliaires de services financiers et d'assurance

Translated to English, they state that Atreide Ventures provides auxiliary services for the finance and insurance industries. Yet FINMA's order from April 2023 bans Mathieu Parreaux from working in insurance.

Did Marla Parreaux even know that a business had been created in her name? Remember, Mathieu Parreaux and some of his jurist colleagues had also worked as greffier in the tribunal. He may have been able to use some mechanism to complete the business registration without Marla Parreaux being aware her name was used as a director.

While that seems far-fetched, remember they had fooled a woman in France into quitting her old job and coming to work for them immediately before FINMA shut down the whole scam.

Please see the rest of the JuristGate reports.

14 January, 2026 10:00PM

hackergotchi for Daniel Pocock

Daniel Pocock

Proton Mail compromise, Hannah Natanson (Washington Post) police raid & Debian

A few days ago, the JuristGate web site published a few words about how Debian has tried to use Swiss police against Proton Mail.

Proton Mail was founded by scientists who met at CERN and is based in Switzerland.

In recent years, activists have tried to reassure journalists and NGOs that Proton Mail is somehow safer than other web mail and cloud services.

When I saw news reports today about a police raid on the home of Hannah Natanson, a Washington Post journalist, I thought about the possibility that more journalists are going to be prompted to migrate from Google Gmail to Proton Mail. They may simply be jumping from the frying pan into the fire (but not the Large Hadron Collider).

Anybody who has any doubts about where this is going only has to read the evidence published in the JuristGate report.

For those thinking about using the Tor network to protect their privacy, it is a good moment to read about the falsification of "harassment" claims against Dr Jacob Appelbaum.

Remember the Marion County Record. I was very fortunate to visit the community in Marion, Kansas on the first anniversary of the raid.

For decades, the American and German spy agencies secretly owned a security company called Crypto AG in Switzerland. They used the Swiss company domicile to obfuscate the real ownership of the cryptography products. Many other companies and countries were fooled and put their trust in it. Could Proton Mail be another example of the same trick?

Daniel Pocock, Marion County Record, Kansas

14 January, 2026 07:30PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.1.15 on CRAN: New Upstream, Some Maintenance

A brand new release 0.1.15 of the RcppSimdJson package is now on CRAN.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it manages to parse gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.
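For anyone who has not tried the package, parsing is done via fparse() for strings and fload() for files. A quick sketch:

library(RcppSimdJson)

## parse a JSON string directly into native R objects
js <- '{"pkg": "RcppSimdJson", "versions": [0.1, 0.15], "onCRAN": true}'
str(fparse(js))

## fload() does the same for JSON files on disk, e.g.
## fload("data.json")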

This version updates to the current 4.2.4 upstream release. It also updates the RcppExports.cpp file with the ‘glue’ between C++ and R. We want to move away from using Rf_error() (as Rcpp::stop() is generally preferable). Packages (such as this one) that declare an interface have an actual Rf_error() call generated in RcppExports.cpp, so regenerating the file picks up what current Rcpp code generation does. Long story short, a minor internal reason.

The short NEWS entry for this release follows.

Changes in version 0.1.15 (2026-01-14)

  • simdjson was upgraded to version 4.2.4 (Dirk in #97)

  • RcppExports.cpp was regenerated to aid an Rcpp transition

  • Standard maintenance updates for continuous integration and URLs

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

14 January, 2026 04:11PM

gunsales 0.1.3 on CRAN: Maintenance

An update to the gunsales package is now on CRAN. As in the last update nine years ago (!!), the changes are mostly internal. An upcoming dplyr change requires a switch from the old and soon-to-be-removed ‘underscored’ verb form; that was kindly addressed in an incoming pull request. We also updated the CI scripts a few times during this period as needed, switched to using Authors@R, and refreshed and updated a number of URL references.
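For readers who have not followed the dplyr deprecations, the change replaces the old standard-evaluation verbs such as summarise_() with their tidy-evaluation counterparts. A minimal sketch of the kind of migration involved (the data frame and column names are invented for illustration, not taken from the package):

library(dplyr)

sales <- data.frame(year = c(2024, 2024, 2025), guns = c(10, 20, 30))

## old, soon-to-be-removed form:
##   sales %>% group_by_("year") %>% summarise_(total = ~sum(guns))

## current form using bare column names:
sales %>%
  group_by(year) %>%
  summarise(total = sum(guns), .groups = "drop")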

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

14 January, 2026 11:21AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

The Innovation Engine • Government-funded Academic Research

This post is an unpublished review for The Innovation Engine • Government-funded Academic Research

David Patterson does not need an introduction. He has been the brain behind many of the inventions that (repeatedly) shaped the computing industry over the past 40 years, so when he put forward an opinion article in Communications of the ACM targeting the current political waves in the USA, I could not avoid choosing it for this review.

Patterson worked for a public university (University of California at Berkeley) between 1976 and 2016, and in this article he argues that government-funded academic research (GoFAR) allows for faster, more effective and freer development than private sector-funded research would, offering his own career milestones as an example of how the public money that went into his research has easily been amplified by a factor of 10,000:1 for the country’s economy, and 1,000:1 for the government in particular.

Patterson illustrates this by describing five of the “home-run” research projects he started and pursued with government funding, eventually spinning them off as successful startups:

  • RISC (Reduced Instruction Set Computing): Microprocessor architecture that reduces the complexity and power consumption of CPUs, yielding much smaller and more efficient processors.
  • RAID (Redundant Array of Inexpensive Disks): Patterson experimented with a way to present a series of independent hard drive units as if they were a single, larger one, leading to increases in capacity and reliability beyond what the industry could provide in single drives, for a fraction of the price.
  • NOW (Network Of Workstations): Introduced what we now know as computer clusters (in contrast to the large-scale, massively multiprocessed cache-coherent systems known as “supercomputers”), which nowadays power over 80% of the Top500 supercomputer list and are the computing platform of choice for practically all data centers.
  • RAD Lab (Reliable Adaptive Distributed Systems Lab): Pursued the technology for data centers to be self-healing and self-managing, testing and pushing early cloud-scalability limits.
  • ParLab (Parallel Computing Lab): Given the development of massively parallel processing inside even simple microprocessors, this lab explored how to improve designs of parallel software and hardware, laying the groundwork that proved that inherently parallel GPUs were better than CPUs at machine learning tasks. It also developed the RISC-V open instruction set architecture.

Patterson identifies principles for the projects he has led that are especially compatible with the way research works in university systems: multidisciplinary teams, demonstrative usable artifacts, seven- to ten-year impact horizons, five-year sunset clauses (to create urgency and to lower opportunity costs), physical proximity of collaborators, and leadership focused on team success rather than individual recognition.

While it could be argued that it is easy to point to Patterson’s work as a success story since he is far from the average academic, the points he makes on how GoFAR has been fundamental to the advance of science and technology, not only in computing but also in biology, medicine, and several other fields, are very clear.

14 January, 2026 12:29AM

January 13, 2026

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

plocate 1.1.24 released

I've released version 1.1.24 of plocate, as usual dominated by small patches from external contributors. The changelog is below:

plocate 1.1.24, January 13th, 2026

  - Improve error handling on synchronous reads. Reported by
    Björn Försterling.

  - Remove ConditionACPower=true from the systemd unit file,
    to fix an issue where certain charging patterns prevent
    updatedb from ever running on laptops. Patch by Manfred Schwarb.

  - Add a new option --config-file for changing the path of
    updatedb.conf. Patch by Yehuda Bernáth.
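The new --config-file option can presumably be invoked along these lines (the configuration path below is illustrative, not a plocate default):

    # point plocate's updatedb at an alternate configuration file
    sudo updatedb --config-file /etc/updatedb-alternate.conf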

As always, you can get it from the plocate page or your favourite Linux distribution (packages to Debian unstable are on their way up, others will surely follow soon).

13 January, 2026 10:57PM

hackergotchi for Thomas Lange

Thomas Lange

30.000 FAIme jobs created in 7 years

The number of FAIme jobs has reached 30.000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old. It reached 10.000 jobs in March 2021 and 20.000 jobs in June 2023. A nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs

3%     cloud image
11%     live ISO
86%     install ISO

Distribution

2%     bullseye
8%     trixie
12%     ubuntu 24.04
78%     bookworm

Misc

  • 18%   used a custom postinst script
  • 11%   provided their ssh pub key for passwordless root login
  • 50%   of the jobs didn't include a desktop environment at all; the others mostly used GNOME, XFCE, KDE or the Ubuntu desktop.
  • The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 min to finish and the resulting ISO was 18G in size.

Execution Times

The cloud and live ISOs need more time for their creation because the FAIme server needs to unpack and install all packages. For the install ISO the packages are only downloaded. The number of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. The times given are calculated from the jobs of the past two weeks.

Job type     Avg     Max
install no desktop     1 min     2 min
install GNOME     2 min     5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type     Avg     Max
live no desktop     4 min     6 min
live GNOME     8 min     11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year the FAI project will be 25 years old. If you have a success story about your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.

13 January, 2026 02:23PM

Simon Josefsson

Debian Libre Live 13.3.0 is released!

Following up on my initial announcement about Debian Libre Live I am happy to report on continued progress and the release of Debian Libre Live version 13.3.0.

Since both this and the previous 13.2.0 release are based on the stable Debian trixie release, there really isn’t a lot of major change, but instead incremental minor progress on the installation process. Repeated installations have a tendency to reveal bugs, and we have resolved the apt sources list confusion for Calamares-based installations and a couple of other nits. This release is more polished and we are not aware of any remaining issues (unlike earlier versions, which were released with known problems), although we conservatively regard the project as still in beta. A Debian Libre Live logo is needed before marking this as stable; any graphically talented takers? (Please base it on the Debian SVG upstream logo image.)

We provide GNOME, KDE, and XFCE desktop images, as well as a text-only “standard” image, which match the regular Debian Live images (the ones with non-free software on them), but we also provide a “slim” variant which is merely 750MB compared to the 1.9GB “standard” image. The slim image can still start a Debian installer, and can still boot into a minimal live text-based system.

The GNOME, KDE and XFCE desktop images feature the Calamares installer, and we have performed testing on a variety of machines. The standard and slim images do not have an installer available from the running live system, but all images support a boot menu entry to start the installer.

With this release we also extend our arm64 support to two tested platforms. The current list of successfully installed and supported systems now includes the following hardware:

This is a very limited set of machines, but the diversity in CPUs and architecture should hopefully reflect well on a wide variety of commonly available machines. Several of these machines are crippled (usually GPU or WiFi) without adding non-free software; complain to your hardware vendor and adapt your use-cases and future purchases.

The images are as follows, with SHA256SUM checksums and GnuPG signature on the 13.3.0 release page.

Curious how the images were made? Fear not, for the Debian Libre Live project README has documentation, the run.sh script is short and the .gitlab-ci.yml CI/CD Pipeline definition file brief.

Happy Libre OS hacking!

13 January, 2026 01:53PM by simon

January 12, 2026

Swiss JuristGate

PLR & Le Centre incest: deputy public safety chief, mayor, prosecutor & Canton Valais minister all same party

In the first JuristGate report about a cover-up after the tragic fire at Le Constellation, Crans-Montana, we noted that two of the four key officials present at the media briefing, the public safety minister (Stéphane Ganzer) and the chief prosecutor for Canton Valais (Béatrice Pilloud), are both members of the same political party, that is, the Liberal Radical Party (PLR).

In fact, looking at the list of employees in the commune, we can find another even more disturbing connection to the PLR.

The mayor of the Crans-Montana municipal council is Nicolas Féraud. He is a member of the PLR, along with one other council member, Sébastien Rey.

After the fire, the council hid the page about their public safety team. Fortunately, one snapshot was captured by the Wayback Machine.

Bryan Frily, Stecy Florey, Christophe Balet, Benjamin Charpiot, Baptiste Cotter

 

The people in the photo are Bryan Frily; Stecy Florey; Christophe Balet, the head of the public safety team, who is responsible for the building safety inspections; Benjamin Charpiot, the deputy head of the public safety team; and Baptiste Cotter.

Searching the history of all team members, we quickly found that Benjamin Charpiot, the deputy head of public safety and inspections, was also a previous election candidate for the Liberal Radical Party (PLR).

Here is a news report from 2016 where they interview Charpiot about his candidacy with the PLR.

Here is a link to the PLR page with his candidacy.

Le Centre (The Center party) is at it too

If there is a conflict of interest, surely the prosecutor, Béatrice Pilloud can simply step aside and allow her deputy to handle the investigation?

Pilloud's deputy is Catherine Seppey from another political party, Le Centre (The Center).

In the municipal council of Crans-Montana, three out of seven elected council members are also members of the same party as the deputy prosecutor: François Berclaz, Marielle Clivaz and Patrick Clivaz.

Of those three, Patrick Clivaz is also the councilor responsible for public safety.

Therefore, the deputy prosecutor also has a conflict of interest with respect to members of the municipal council.

Risks of nepotism and "copinage"

The Swiss media and social media have been quick to warn about the risk of "copinage" (cronyism). Everybody in Switzerland knows what it's like to live in a little village like this.

Notice how five members of the municipal council share two surnames, Clivaz (3) and Rey (2).

Better for prosecutors to step aside early

Other countries have now assigned their own prosecutors to supervise the investigation. In Paris, Rome and Brussels, senior prosecutors have been assigned to open their own cases and interview suspects and witnesses on their own territory.

European leaders have expressed strong support for Switzerland in recent days. But just in case that support is misplaced, they are going to make sure their own prosecutors are watching every step very carefully.

To save face, it would be better for the Canton Valais officials to admit these conflicts of interest are serious and step aside now rather than waiting for a situation where they lose public support.

Remember the case of the greffier Mathieu Parreaux in the same canton who was accepting payments from legal protection insurance clients at the same time he was working inside the tribunal as a greffier.

Please see the rest of the JuristGate reports.

12 January, 2026 08:30PM

Awaiting trial: RTS reports Lens public safety inspector prosecuted, facing negligence charges over 2022 fire

On Sunday, Swiss broadcaster RTS reported that there was another fire in the same district, Lens VS, in 2022.

In circumstances similar to the tragic fire at Le Constellation on New Year's Eve, the fire spread quickly to other parts of the building. Nobody was killed but at least one neighbour required treatment in intensive care.

The Canton Valais prosecutors felt that the rapid spread of the fire was preventable. They began a prosecution against the public safety officer for the Commune of Lens (VS). RTS has not published his name but it is easy enough to find that Nicolas Grange is the suspect awaiting trial.

In a local magazine from April 2020, the authorities tell us:

Chargé de sécurité La Commune s’est dotée d’un nouveau chargé de sécurité et délégué BPA (Bureau de prévention des accidents) en la personne de Nicolas Granges. Il est joignable soit par téléphone au 027 484 25 16, soit par e-mail: nicolas.grange@lens.ch

In English: "Safety officer: The Commune has appointed a new safety officer and BPA (accident prevention bureau) delegate in the person of Nicolas Granges. He can be reached by telephone on 027 484 25 16, or by e-mail: nicolas.grange@lens.ch"

The same magazine has an article about local schoolchildren visiting a display of aboriginal Australian art at the Opal Foundation. The Opals themselves are found in Australia. The children in the photo from 2020 would now be between 15 and 18 years of age. Maybe some of them were in Le Constellation at the time of the fire?

Fondation Opale, Australian Aboriginal art, Crans, Lens

 

We found the name and job title of Mr Grange in this report about improvements to the council offices.

We don't want to speculate about whether Grange is innocent or not. The more pressing question for us: if the prosecutors were aware that the standard of inspections had been inadequate since at least 2022, why didn't they escalate concerns about the matter to the Canton's executive?

Nicolas Grange, Lens VS, prosecution

 

Please see the rest of the JuristGate reports.

12 January, 2026 08:00PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Python Workout 2nd edition

This post is an unpublished review for Python Workout 2nd edition

Note: While I often post the reviews I write for Computing Reviews, this is a shorter review that Manning requested from me. They kindly invited me several months ago to be a reviewer for Python Workout, 2nd edition; after giving them my opinions, I am happy to widely recommend this book to interested readers.

Python is a relatively easy programming language to learn, allowing you to start coding pretty quickly. However, there’s a significant gap between being able to “throw code” in Python and truly mastering the language. To write efficient, maintainable code that’s easy for others to understand, practice is essential. And that’s often where many of us get stuck. This book begins by stating that it “is not designed to teach you Python (…) but rather to improve your understanding of Python and how to use it to solve problems.”

The author’s structure and writing style are very didactic. Each chapter addresses a different aspect of the language: from the simplest (numbers, strings, lists) to the most challenging for beginners (iterators and generators), Lerner presents several problems for us to solve as examples, emphasizing the less obvious details of each aspect.

I was invited as a reviewer of the preprint version of the book. I am now very pleased to recommend it to all interested readers. The author presents a pleasant and easy-to-read text, with a wealth of content that I am sure will improve the Python skills of all its readers.

12 January, 2026 07:23PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Reducing the size of initramfs kernel images

In the past few years, the size of the kernel images in Debian has been steadily growing. I don't see this as a problem per se, but it has been causing me trouble, as my /boot partition has become too small to accommodate two kernel images at the same time.

Since I'm running Debian Unstable on my personal systems and keep them updated with unattended-upgrade, this meant each (frequent) kernel upgrade triggered an error like this one:

update-initramfs: failed for /boot/initrd.img-6.17.11+deb14-amd64 with 1.
dpkg: error processing package initramfs-tools (--configure):
 installed initramfs-tools package post-installation script subprocess returned
 error exit status 1
Errors were encountered while processing:
 initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)

This would in turn break the automated upgrade process and require me to manually delete the currently running kernel (which works, but isn't great) to complete the upgrade.

The "obvious" solution would have been to increase the size of my /boot partition to something larger than the default 456M. Since my systems use full-disk encryption and LVM, this isn't trivial and would have required me to play Tetris and swap files back and forth using another drive.

Another solution proposed by anarcat was to migrate to systemd-boot (I'm still using grub), use Unified Kernel Images (UKI) and merge the /boot and /boot/efi partitions. Since I already have a bunch of configurations using grub and I am not too keen on systemd taking over all the things on my computer, I was somewhat reluctant.

As my computers are all configured by Puppet, I could of course have done a complete system reinstallation, but again, this was somewhat more involved than what I wanted it to be.

After looking online for a while, I finally stumbled on this blog post by Neil Brown detailing how to shrink the size of the initramfs images. With MODULES=dep my images shrank from 188M to 41M, fixing my issue. Thanks Neil!

I was somewhat worried removing kernel modules would break something on my systems, but so far, I only had to manually load the i2c_dev module, as I need it to manage my home monitor's brightness using ddcutil.
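For anyone wanting to replicate this, the change amounts to a one-line edit plus a rebuild; the sketch below assumes the stock Debian initramfs-tools paths:

# /etc/initramfs-tools/initramfs.conf: include only the modules the
# running system actually needs, instead of the default MODULES=most
MODULES=dep

# regenerate the initramfs for all installed kernels
sudo update-initramfs -u -k all

# modules that MODULES=dep misses (like i2c_dev here) can be
# force-loaded at boot by listing them in /etc/modules
echo i2c_dev | sudo tee -a /etc/modules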

12 January, 2026 07:13PM by Louis-Philippe Véronneau

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppAnnoy 0.0.23 on CRAN: Several Updates


A new release, now at version 0.0.23, of RcppAnnoy has arrived on CRAN, just a little short of two years since the previous release.

RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the Spotify music discovery algorithm. It had all the buzzwords already a decade ago: it is one of the algorithms behind (drum roll …) vector search, as it finds approximate matches very quickly and also allows the index to be persisted.
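For the curious, a quick sketch of what using the package looks like, with toy data and the angular (cosine) metric:

library(RcppAnnoy)

d <- 16                      # dimensionality of the vectors
a <- new(AnnoyAngular, d)    # index using angular (cosine) distance

set.seed(42)
for (i in 0:99) a$addItem(i, rnorm(d))   # items are indexed from zero

a$build(10)                  # build a forest of 10 trees
a$getNNsByItem(0, 5)         # five nearest neighbours of item 0

a$save("index.annoy")        # persist the index for later memory-mapping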

This release contains three contributed pull requests covering a new metric, a new demo and quieter compilation; some changes to documentation; and, last but not least, general polish, including letting the vignette now use the Rcpp::asis builder.

Details of the release follow based on the NEWS file.

Changes in version 0.0.23 (2026-01-12)

  • Add dot product distance metrics (Benjamin James in #78)

  • Apply small polish to the documentation (Dirk closing #79)

  • A new demo() has been added (Samuel Granjeaud in #79)

  • Switch to Authors@R in DESCRIPTION

  • Several updates to continuous integration and README.md

  • Small enhancements to package help files

  • Updates to vignettes and references

  • Vignette now uses Rcpp::asis builder (Dirk in #80)

  • Switch one macro to a function to avoid a compiler nag (Amos Elberg in #81)

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

12 January, 2026 02:55PM

January 11, 2026

RApiDatetime 0.0.10 on CRAN: Maintenance

A new maintenance release of our RApiDatetime package is now on CRAN, coming just about two years after the previous maintenance release.

RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages. This package aims to change that.

This release avoids use of entry points that are now outlawed under R-devel, and makes a number of other smaller maintenance updates. Just like the previous release, we are at OS_type: unix, meaning there will not be any Windows builds at CRAN. If you would like that to change, and ideally can work on the Windows portion, do not hesitate to get in touch.

Details of the release follow based on the NEWS file.

Changes in RApiDatetime version 0.0.10 (2026-01-11)

  • Minor maintenance for continuous integration files, README.md

  • Switch to Authors@R in DESCRIPTION

  • Use Rf_setAttrib with R 4.5.0 or later

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 January, 2026 10:57PM

RProtoBuf 0.4.25 on CRAN: Mostly Maintenance

A new maintenance release 0.4.25 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.
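As a refresher, a minimal sketch of the package in action, using the tutorial addressbook schema that ships with RProtoBuf:

library(RProtoBuf)   # loads the bundled tutorial (addressbook) proto definitions

# create a message of the tutorial.Person type defined in that schema
p <- new(tutorial.Person, name = "Ada Lovelace", id = 1815)
p$email <- "ada@example.org"

raw_bytes <- p$serialize(NULL)        # serialize to a raw vector
q <- tutorial.Person$read(raw_bytes)  # and parse it back
cat(q$name, "\n")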

This release brings an update to a header use forced by R-devel, the usual set of continuous integration updates, and a large overhaul of URLs as CRAN is now running more powerful checks. As a benefit, the three vignettes have all been refreshed; they are now also delivered via the new Rcpp::asis() vignette builder that permits pre-made PDF files to be used easily.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.25 (2026-01-11)

  • Several routine updates to continuous integration script

  • Include ObjectTable.h instead of Callback.h to accommodate R 4.6.0

  • Switch vignettes to Rcpp::asis driver, update references

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 January, 2026 08:36PM

hackergotchi for Daniel Pocock

Daniel Pocock

Three most controversial Australian authors linked to St Paul’s, Coburg

Australia's three most controversial authors have now been convincingly linked to each other and the parish of St Paul's, Coburg.

Mr Edward Kelly, more commonly known as Ned, was the author of the celebrated Jerilderie letter. On top of that, it is often said and easily proven that more has been written about Mr Kelly, who only lived to be twenty-five, than any other Australian, living or deceased. Mr Kelly was baptised in St Paul's parish in 1854 by none other than Monsignor Charles O'Hea. By coincidence, my father and I used to live on O'Hea Street.

The former St Paul's school building at 653 Sydney Road burnt down in 1980 and the site was taken over by the King Khalid Islamic College of Victoria (KKCV) in 1983. Randa Abdel-Fattah attended the St Clement of Rome Catholic Primary school and then moved to KKCV for high school. In 2025, the Bendigo Writer's Festival tried to un-invite Randa, all the other speakers pulled out and the festival had to be canceled. The Adelaide Writer's Festival has just tried to pull the same dirty tricks on Randa, most of the speakers and board members quit and the festival may not go ahead.

In 1978, I was baptised in St Paul's Coburg and we lived there at various points during my childhood. I never attended the local school, nonetheless, we lived about two hundred meters away from Randa's school.

If you believe the nonsense people write on social control media, you might have been fooled into thinking I am now Switzerland's most wanted Australian. Many people ignored the JuristGate reports in the early days of the web site. After the fire at Le Constellation in Crans-Montana, the web site has experienced an astronomical increase in traffic. It is one of the few web sites publishing the names, photos and political party affiliations of the people involved in the cover-up. When I created the Software Freedom Institute in 2021, IBM Red Hat almost immediately sued me to try and stop me publishing inconvenient truths about FSFE child labour and the Debian discussion about a possible suicide on our wedding day. It is now almost nine years since the Fellowship elected me as their representative and asked me to simply write the truth.

Ironically, Mr Kelly shot three police officers while I used to row with one of the most distinguished police officers in the state. Go figure.

The bounty offered for Mr Kelly was eight thousand pounds. To put that in perspective, they only spent fifty thousand pounds building the Upfield railway line. The amount they spent to stop the Kelly community robbing the train, and others, was sixteen percent of the budget.

In 2022, the Debianists spent over $120,000 in legal fees to try and censor my blog and have me molested by secret police. In 2026, I'm still here but don't forget Abraham Raji. After they gave all the Debian money to lawyers, volunteers like Mr Raji were asked to contribute their own money at DebConf23. Mr Raji didn't have the money to go on the day trip kayak excursion, they left him to swim alone and he drowned.

Australia's three most censored writers

Ned Kelly, Reward Randa Abdel-Fattah, Banned Daniel Pocock, censored

 

Daniel Pocock, Randa Abdel-Fattah, St Paul's, King Khalid Islamic College Victoria, Coburg, Ned Kelly, baptism

11 January, 2026 06:30PM

Jerilderie letter, full text / transcript

This is the famous letter by Mr Ned Kelly.

Dear Sir,

I wish to acquaint you with some of the occurrences of the present past and future. In or about the spring of 1870 the ground was very soft a hawker named Mr Gould got his waggon bogged between Greta and my mother's house on the eleven mile creek, the ground was that rotten it would bog a duck in places so Mr. Gould had abandon his waggon for fear of loosing his horses in the spewy ground. he was stopping at my Mother's awaiting finer or dryer weather Mr. McCormack and his wife. hawkers also were camped in Greta the mosquitoes were very bad which they generally are in a wet spring and to help them Mr. Johns had a horse called Ruita Cruta although a gelding was as clever as old Wombat or any other Stallion at running horses away and taking them on his beat which was from Greta swamp to the seven mile creek consequently he enticed McCormack's horse away from Greta.

Mr. Gould was up early feeding his horses heard a bell and seen McCormack horses for he knew the horse well he sent his boy to take him back to Greta. When McCormack's got the horse they came straight out to Goold and accused him of working the horse; this was false, and Goold was amazed at the idea I could not help laughing to hear Mrs. McCormack accusing him of using the horse after him being so kind as to send his boy to take him from the Ruta Cruta and take him back to them.

I pleaded Goulds innocence and Mrs McCormack turned on me and accused me of bringing the horse from Greta to Goolds waggon to pull him out of the bog I did not say much to the woman as my Mother was present but that same day me and my uncle was cutting calves Gould wrapped up a note and a pair of the calves testicles and gave them to me to give them to Mrs McCormack. I did not see her and I gave the parcel to a boy to give to her when she would come instead of giving it to her he gave it to her husband consequently McCormack said he would summons me I told him neither me or Gould used their horse.

he said I was a liar & he could welt me or any of my breed I was about 14 years of age but accepted the challenge and dismounting when Mrs McCormack struck my horse in the flank with a bullock's skin it jumped forward and my fist came in collision with McCormack's nose and caused him to loose his equillibrium and fall postrate I tied up my horse to finish the battle but McCormack got up and ran to the Police camp. Constable Hall asked me what the row was about I told him they accused me and Gould of using their horse and I hit him and I would do the same to him if he challenged me McCormack pulled me and swore their lies against me I was sentenced to three months for hitting him and three months for the parcel and bound to keep the peace for 12 months.

Mrs McCormack gave good substantial evidence as she is well acquainted with that place called Tasmania better known as the Dervon or Vandiemans land and McCormack being a Police man over the convicts and women being scarce released her from that land of bondage and tyranny, and they came to Victoria and are at present residents of Greta and on the 29th of March I was released from prison and came home Wild Wright came to the Eleven Mile to see Mr Gunn stopped all night and lost his mare both him and me looked all day for her and could not get her Wright who was a stranger to me was in a hurry to get back to Mansfield and I gave him another mare and he told me if I found his mare to keep her until he brought mine back I was going to Wangaratta and seen the mare and I caught her and took her with me all the Police and Detective Berrill seen her as Martains girls used to ride her about the town during several days that I stopped at Petre Martains Star Hotel in Wangaratta.

She was a chestnut mare white face docked tail very remarkable branded (M) as plain as the hands on a town clock. the property of a Telegraph Master in Mansfield he lost her on the 6th gazetted her on the 12th of March and I was a prisoner in Beechworth Gaol until the 29 of March therefore I could not have Stole the mare. I was riding the mare through Greta Constable Hall came to me and said he wanted me to sign some papers that I did not sign at Beechworth concerning my bail bonds I thought it was the truth he said the papers was at the Barracks and I had no idea he wanted to arrest me or I would have quietly rode away instead of going to the Barracks.

I was getting off when Hall caught hold of me and thought to throw me but made a mistake and came on the broad of his back himself in the dust the mare galloped away and instead of me putting my foot on Halls neck and taking his revolver and putting him in the lock up. I tried to catch the mare. Hall got up and snapped three or four caps at me and would have shot me but the colts patent refused. This is well known in Greta Hall never told me he wanted to arrest me until after he tried to shoot me when I heard the caps snapping I stood until Hall came close he had me covered and was shaking with fear and I knew he would pull the trigger before he would be game to put his hand on me so I duped, and jumped at him caught the revolver with one hand and Hall by the collar with the other.

I dare not strike him or my sureties would loose the bond money I used to trip him and let him take a mouth ful of dust now and again as he was as helpless as a big guano after leaving a dead bullock or a horse. I kept throwing him in the dust until I got him across the street the very spot where Mrs O'Briens Hotel stands now the cellar was just dug then there was some brush fencing where the post and rail was taking down and on this I threw big cowardly Hall on his belly I straddled him and rooted both spurs onto his thighs he roared like a big calf attacked by dogs and shifted several yards of the fence I got his hands at the back of his neck and trid to make him let the revolver go but he stuck to it like grim death to a dead volunteer he called for assistance to a man named Cohen and Barnett, Lewis, Thompson, Jewitt two blacksmiths who was looking on I dare not strike any of there as I was bound to keep the peace or I could have spread those curs like dung in a paddock they got ropes tied my hands and feet and Hall beat me over the head with his six chambered colts revolver nine stitches were put in some of the cuts by Dr Hastings And when Wild Wright and my mother came they could trace us across the street by the blood in the dust and which spoiled the lustre of the paint on the gate-post of the Barracks Hall sent for more Police and Doctor Hastings.

Next morning I was handcuffed a rope tied from them to my legs and to the seat of the cart and taken to Wangaratta Hall was frightened I would throw him out of the cart so he tied me whilst Constable Arthur laughed at his cowardice for it was he who escorted me and Hall to Wangaratta. I was tried and committed as Hall swore I claimed the mare the Doctor died or he would have proved Hall a perjurer Hall has been tried several times for perjury but got clear as this is no crime in the Police force it is a credit to a Policeman to convict an innocent man but any muff can pot a guilty one Halls character is well known about El Dorado and Snowy Creek and Hall was considerably in debt to Mr L.O. Brien and he was going to leave Greta Mr O. Brien seen no other chance of getting his money so there was a subscription collected for Hall and with the aid of this money he got James Murdock who was recently hung in Wagga Wagga to give false evidence against me but I was acquitted on the charge of horsestealing and on Halls and Murdocks evidence I was found guilty of receiving and got 3 years experience in Beechworth Pentridges dungeons.

this is the only charge ever proved against me Therefore I can say I never was convicted of horse or cattle stealing My Brother Dan was never charged with assaulting a woman but he was sentenced to three months without the option of a fine and one month and two pounds fine for damaging property by Mr. Butler P.M. a sentence that there is no law to uphold therefore the Minister of Justice neglected his duty in that case, but there never was such a thing as Justice in the English laws but any amount of injustice to be had. Out of over thirty head of the very best horses the land could produce I could only find one when I got my liberty. Constable Flood stole and sold the most of them to the navvies on the railway line one bay cob he stole and sold four different times the line was completed and the men all gone when I came out and Flood was shifted to Oxley. he carried on the same game there all the stray horses that was any time without an owner and not in the Police Gazette Flood used to claim He was doing a good trade at Oxley until Mr Brown of the Laceby Station got him shifted as he was always running his horses about.

Flood is different to Sergeant Steel, Strachan, Hall and the most of Police a they have got to hire cads and if they fail the Police are quite helpless. But Flood can make a cheque single-handed he is the greatest horsestealer with the exception of myself and George King I know of. I never worked on a farm a horse and saddle was never traced to me after leaving employment since February 1873 I worked as a faller at Mr J. Saunders and R Rules sawmills then for Heach and Dockendorf I never worked for less than two pound ten a week since I left Pentridge and in 1875 or 1876 I was overseer for Saunders and Rule.

Bourke's water-holes sawmills in Victoria since then I was on the King River, during my stay there I ran in a wild bull which I gave to Lydicher a farmer he sold him to Carr a Publican and Butcher who killed him for beef, sometime afterwards I was blamed for stealing this bull from James Whitty Boggy Creek I asked Whitty Oxley racecourse why he blamed me for stealing his bull he said he had found his bull and never blamed me but his son-in-law Farrell told him he heard I sold the bull to Carr not long afterwards I heard again I was blamed for stealing a mob of calves from Whitty and Farrell which I knew nothing about. I began to think they wanted me to give them something to talk about.

Therefore I started wholesale and retail horse and cattle dealing Whitty and Burns not being satisfied with all the picked land on the Boggy Creek and King River and the run of their stock on the certificate ground free and no one interfering with them paid heavy rent to the banks for all the open ground so as a poor man could keep no stock, and impounded every beast they could get, even off Government roads. If a poor man happened to leave his horse or bit of a poddy calf outside his paddock they would be impounded. I have known over 60 head of horses impounded in one day by Whitty and Burns all belonging to poor farmers they would have to leave their ploughing or harvest or other employment to go to Oxley.

When they would get there perhaps not have money enough to release them and have to give a bill of sale or borrow the money which is no easy matter. And along with this sort of work, Farrell the Policeman stole a horse from George King and had him in Whitty and Farrells Paddocks until he left the force. And all this was the cause of me and my step-father George King taking their horses and selling them to Baumgarten and Kennedy. the pick of them was taken to a good market and the culls were kept in Petersons paddock and their brands altered by me two was sold to Kennedy and the rest to Baumgarten who were strangers to me and I believe honest men.

They paid me full value for the horses and could not have known they were stolen. no person had anything to do with the stealing and selling of the horses but me and George King. William Cooke who was convicted for Whittys horses was innocent he was not in my company at Petersons. But it is not the place of the Police to convict guilty men as it is by them they get their living had the right parties been convicted it would have been a bad job for the Police as Berry would have sacked a great many of them only I came to their aid and kept them in their bilits and good employment and got them double pay and yet the ungrateful articles convicted my mother and an infant my brother-in-law and another man who was innocent and still annoy my brothers and sisters and the ignorant unicorns even threaten to shoot myself But as soon as I am dead they will be heels up in the muroo.

there will be no more police required they will be sacked and supplanted by soldiers on low pay in the towns and special constables made of some of the farmers to make up for this double pay and expence. It will pay Government to give those people who are suffering innocence, justice and liberty. if not I will be compelled to show some colonial stratagem which will open the eyes of not only the Victoria Police and inhabitants but also the whole British army and now doubt they will acknowledge their hounds were barking at the wrong stump.

And that Fitzpatrick will be the cause of greater slaughter to the Union Jack than Saint Patrick was to the snakes and toads in Ireland. The Queen of England was as guilty as Baumgarten and Kennedy Williamson and Skillion of what they were convicted for When the horses were found on the Murray River I wrote a letter to Mr Swanhill of Lake Rowan to acquaint the Auctioneer and to advertize my horses for sale I brought some of them to that place but did not sell I sold some of them in Benalla Melbourne and other places and left the colony and became a rambling gambler soon after I left there was a warrant for me and the Police searched the place and watched night and day for two or three weeks and when they could not snare me they got a warrant against my brother Dan And on the 15 of April Fitzpatrick came to the Eleven Mile Creek to arrest him he had some conversation with a horse dealer whom he swore was William Skillion this man was not called in Beechworth, besides several other Witnesses, who alone could have proved Fitzpatricks falsehood after leaving this man he went to the house asked was Dan in Dan came out.

I hear previous to this Fitzpatrick had some conversation with Williamson on the hill. he asked Dan to come to Greta with him as he had a warrant for him for stealing Whitty's horses Dan said all right they both went inside Dan was having something to eat his mother asked Fitzpatrick what he wanted Dan for. the trooper said he had a warrant for him Dan then asked him to produce it he said it was only a telegram sent from Chiltren but Sergeant Whelan ordered him to releive Steel at Greta and call and arrest Dan and take him into Wangaratta next morning and get him remanded Dans mother said Dan need not go without a warrant unless he liked and that the trooper had no business on her premises without some Authority besides his own word. The trooper pulled out his revolver and said he would blow her brains out if she interfered.

In the arrest she told him it was a good job for him Ned was not there or he would ram the revolver down his throat Dan looked out and said Ned is coming now, the trooper being off his guard looked out and when Dan got his attention drawn he dropped the knife and fork which showed he had no murderous intent and slapped heenans hug on him took his revolver and kept him there until Skillion and Ryan came with horses which Dan sold that night. The trooper left and invented some scheme to say that he got shot which any man can see is false, he told Dan to clear out that Sergeant Steel and Detective Brown and Strachan would be there before morning Strachan had been over the Murray trying to get up a case against him and they would convict him if they caught him as the stock society offored an enticement for witnesses to swear anything and the germans over the Murray would swear to the wrong man as well as the right.

Next day Williamson and my mother was arrested and Skillion the day after who was not there at all at the time of the row which can be proved by 8 or 9 witnesses And the Police got great credit and praise in the papers for arresting the mother of 12 children one an infant on her breast and those two quiet hard working innocent men who would not know the difference a revolver and a saucepan handle and kept them six months awaiting trial and then convicted them on the evidence of the meanest article that ever the sun shone on it seems that the jury was well chosen by the Police as there was a discharged Sergeant amongst them which is contrary to law they thought it impossible for a Policeman to swear a lie but I can assure them it is by that means and hiring cads they get promoted I have heard from a trooper that he never knew Fitzpatrick to be one night sober and that he sold his sister to a chinaman but he looks a young strapping rather genteel more fit to be a starcher to a laundress than a Policeman.

For to a keen observer he has the wrong appearance or a manly heart the deceit and cowardice is too plain to be seen in the puny cabbage hearted looking face. I heard nothing of this transaction until very close on the trial I being then over 400 miles from Greta when I heard I was outlawed and a hundred pound reward for me for shooting at a trooper in Victoria and a hundred pound for any man that could prove a conviction of horse-stealing against me so I came back to Victoria knew I would get no justice if I gave myself up I enquired after my brother Dan and found him digging on Bullock Creek heard how the Police used to be blowing that they would not ask me to stand they would shoot me first and then cry surrender and how they used to rush into the house upset all the milk dishes break tins of eggs empty the flour out of the bags on to the ground and even the meat out of the cask and destroy all the provisions and shove the girls in front of them into the rooms like dogs so as if anyone was there they would shoot the girls first but they knew well I was not there or I would have scattered their blood and brains like rain I would manure the Eleven mile with their bloated carcasses and yet remember there is not one drop of murderous blood in my Veins.

Superintendent Smith used to say to my sisters, see all the men I have out today I will have as many more tomorrow and we will blow him into pieces as small as paper that is in our guns Detective Ward and Constable Hayes took out their revolvers and threatened to shoot the girls and children in Mrs Skillions absence the greatest ruffians and murderers no matter how deprived would not be guilty of such a cowardly action, and this sort of cruelty and disgraceful and cowardly conduct to my brothers and sisters who had no protection coupled with the conviction of my mother and those men certainly made my blood boil as I dont think there is a man born could have the patience to suffer it as long as I did or ever allow his blood to get cold while such insults as these were unavenged and yet in every paper that is printed I am called the blackest and coldest blooded murderer ever on record.

But if I hear any more of it I will not exactly show them what cold blooded murder is but wholesale and retail slaughter something different to shooting three troopers in self defence and robbing a bank. I would have been rather hot-blooded to throw down my rifle and let them shoot me and my innocent brother, they were not satisfied with frightening my sisters night and day and destroying their provisions and lagging my mother and infant and those innocent men but should follow me and my brother into the wilds where he had been quietly digging neither molesting or interfering with anyone he was making good wages as the creek is very rich within half a mile from where I shot Kennedy.

I was not there long and on the 25 of October I came on Police tracks between Table top and the bogs. I crossed them and returning in the evening I came on a different lot of tracks making for the shingle hut I went to our camp and told my brother and his two mates me and my brother went and found their camp at the shingle hut about a mile from my brothers house saw they carried long firearms and we knew our doom was sealed if we could not beat those before the others would come As I knew the other party of Police would soon join them and if they came on us at our camp they would shoot us down like dogs at our work as we had only two guns. we thought it best to try and bail those up take their firearms and ammunition and horses and we could stand a chance with the rest We approached the spring as close as we could get to the camp as the intervening space being clear ground and no battery We saw two men at the logs they got up and one took a double barreled fowling-piece and fetched a horse down and hobbled him at the tent we thought there were more men in the tent asleep those being on sentry we could have shot those two men without speaking but not wishing to take their lives we waited McIntyre laid the gun against a stump and Lonigan sat on the log I advanced, my brother Dan keepin McIntyre covered which he took to be constable Flood and had he not obeyed my orders, or attempted to reach for the gun or draw his revolver he would have been shot dead but when I called on them to throw up their hands McIntyre obeyed and Lonigan ran some six or seven yards to a battery of logs insted of dropping behind the one he was sitting on, he had just got to the logs and put his head up to take aim when I shot him that instant or he would have shot me as I took him to be Strachan the man who said he would not ask me to stand he would shoot me first like a dog.

But it happened to be Lonigan the man who in company with Sergeant Whelan Fitzpatrick and King the Boot maker and constable O. Day that tried to put a pair of handcuffs on me in Benalla but could not and had to allow McInnis the miller to put them on, previous to Fitzpatrick swearing he was shot, I was fined two pounds for hitting Fitzpatrick and two pounds for not allowing five curs like Sergeant Whelan O. Day Fitzpatrick King and Lonigan who caught me by the privates and would have sent me to Kingdom come only I was not ready and he is the man that blowed before he left Violet Town if Ned Kelly was to be shot he was the man would shoot him and no doubt he would shoot me even if I threw up my arms and laid down as he knew four of them could not arrest me single-handed not to talk of the rest of my mates, also either me or him would have to die, this he knew well therefore he had a right to keep out of my road, Fitzpatrick is the only one I hit out of the five in Benalla this shows my feeling towards him as he said we were good friends & even swore it but he was the biggest enemy I had in the country with the exception of Lonigan and he can be thankful I was not there when he took a revolver and threatened to shoot my mother in her own house it is not fire three shots and miss him at a yard and a half I dont think I would use a revolver to shoot a man like him when I was within a yard and a half of him or attempt to fire into a house where my mother brothers and sisters was. and according to Fitzpatricks statement all around him a man that is such a bad shot as to miss a man three times at a yard and a half would never attempt to fire into a house among a house full of women and children while I had a pair of arms and bunch of fives on the end of them that never failed to peg out anything they came in contact with and Fitzpatrick knew the weight of one of them only too well, as it run against him once in Benalla, and cost me two pound odd as he is very subject to fainting.

As soon as I shot Lonigan he jumped up and staggered some distance from the logs with his hands raised and then fell he surrendered but too late I asked McIntyre who was in the tent he replied no one. I advanced and took possession of their two revolvers and fowling-piece which I loaded with bullets instead of shot. I asked McIntyre where his mates was he said they had gone down the creek, and he did not expect them that night he asked me was I going to shoot him and his mates. I told him no.

I would shoot no man if he gave up his arms and leave the force he said the police all knew Fitzpatrick had wronged us. and he intended to leave the force, as he had bad health, and his life was insured, he told me he intended going home and that Kennedy and Scanlan were out looking for our camp and also about the other Police he told me the N.S.W Police had shot a man for shooting Sergeant Walling I told him if they did, they had shot the wrong man And I expect your gang came to do the same with me he said no they did not come to shoot me they came to apprehend me I asked him what they carried spenceir rifles and breech loading fowling pieces and so much ammunition for as the Police was only supposed to carry one revolver and 6 cartridges in the revolver but they had eighteen rounds of revolver cartridges each three dozen for the fowling piece and twenty one spenceir-rifle cartridges and God knows how many they had away with the rifle this looked as if they meant not only to shoot me only to riddle me but I dont know either Kennedy Scanlan or him and had nothing against them, he said he would get them to give up their arms if I would not shoot them as I could not blame them, they had to do their duty I said I did not blame them for doing honest duty but I could not suffer them blowing me to pieces in my own native land and they knew Fitzpatrick wronged us and why not make it public and convict him but no they would rather riddle poor unfortunate creoles.

but they will rue the day ever Fitzpatrick got among them, Our two mates came over when they heard the shot fired but went back again for fear the Police might come to our camp while we were all away and manure bullock flat with us on our arrival. I stopped at the logs and Dan went back to the spring for fear the troopers would come in that way but I soon heard them coming up the creek. I told McIntyre to tell them to give up their arms, he spoke to Kennedy who was some distance in front of Scanlan he reached for his revolver and jumped off, on the off side of his horse and got behind a tree when I called on them to throw up their arms and Scanlan who carried the rifle slewed his horse around to gallop away but the horse would not go and as quick as thought fired at me with the rifle without unslinging it and was in the act of firing again when I had to shoot him and he fell from his horse.

I could have shot them without speaking but their lives was no good to me. McIntyre jumped on Kennedys horse and I allowed him to go as I did not like to shoot him after he surrendered or I would have shot him as he was between me and Kennedy therefore I could not shoot Kennedy without shooting him first. Kennedy kept firing from behind the tree my brother Dan advanced and Kennedy ran I followed him he stopped behind another tree and fired again.

I shot him in the arm pit and he dropped his revolver and ran I fired again with the gun as he slewed around to surrender I did not know he had dropped his revolver. the bullet passed through the right side of his chest & he could not live or I would have let him go had they been my own brother I could not help shooting there or else let them shoot me which they would have done had their bullets been directed as they intended them. But as for handcuffing Kennedy to a tree or cutting his ear off or brutally treating any of them, is a falsehood, if Kennedys ear was cut off it was not done by me and none of my mates was near him after he was shot I put his cloak over him and left him as well as I could and were they my own brothers I could not have been more sorry for them this cannot be called wilful murder for I was compelled to shoot them, or lie down and let them shoot me it would not be wilful murder if they packed our remains in, shattered into a mass of animated gore to Mansfield, they would have got great praise and credit as well as promotion but I am reconed a horrid brute because I had not been cowardly enough to lie down for them under such trying circumstances and insults to my people certainly their wives and children are to be pitied but they must remember those men came into the bush with the intention of scattering pieces of me and my brother all over the bush and yet they know and acknowledge I have been wronged and my mother and four or five men lagged innocent and is my brothers and sisters and my mother not to be pitied also who has no alternative only to put up with the brutal and cowardly conduct of a parcel of big ugly fat-necked wombat headed big bellied magpie legged narrow hipped splaw-footed sons of Irish Bailiffs or english landlords which is better known as Officers of Justice or Victorian Police who some calls honest gentlemen but I would like to know what business an honest man would have in the Police as it is an old saying It takes a rogue to catch a rogue and a man that knows nothing about roguery would never enter the force an take an oath to arrest brother sister father or mother if required and to have a case and conviction if possible.

Any man knows it is possible to swear a lie and if a policeman looses a conviction for the sake of swearing a lie he has broke his oath therefore he is a perjurer either ways. A Policeman is a disgrace to his country, not alone to the mother that suckled him, in the first place he is a rogue in his heart but too cowardly to follow it up without having the force to disguise it. next he is traitor to his country ancestors and religion as they were all catholics before the Saxons and Cranmore yoke held sway since then they were persecuted massacreed thrown into martrydom and tortured beyond the ideas of the present generation What would people say if they saw a strapping big lump of an Irishman shepherding sheep for fifteen bob a week or tailing turkeys in Tallarook ranges for a smile from Julia or even begging his tucker, they would say he ought to be ashamed of himself and tar-and-feather him.

But he would be a king to a policeman who for a lazy loafing cowardly bilit left the ash corner deserted the shamrock, the emblem of true wit and beauty to serve under a flag and nation that has destroyed massacreed and murdered their forefathers by the greatest of torture as rolling them down hill in spiked barrels pulling their toe and finger nails and on the wheel. and every torture imaginable more was transported to Van Diemand's Land to pine their young lives away in starvation and misery among tyrants worse than the promised hell itself all of true blood bone and beauty, that was not murdered on their own soil, or had fled to America or other countries to bloom again another day, were doomed to Port Mcquarie Toweringabbie norfolk island and Emu plains and in those places of tyrany and condemnation many a blooming Irishman rather than subdue to the Saxon yoke Were flogged to death and bravely died in servile chains but true to the shamrock and a credit to Paddys land What would people say if I became a policeman and took an oath to arrest my brothers and sisters & relations and convict them by fair or foul means after the conviction of my mother and the persecutions and insults offered to myself and people Would they say I was a decent gentleman, and yet a policeman is still in worse and guilty of meaner actions than that The Queen must surely be proud of such herioc men as the Police and Irish soldiers as It takes eight or eleven of the biggest mud crushers in Melbourne to take one poor little half starved larrakin to a watch house.

I have seen as many as eleven, big & ugly enough to lift Mount Macedon out of a crab hole more like the species of a baboon or Guerilla than a man. actually come into a court house and swear they could not arrest one eight stone larrakin and them armed with battens and neddies without some civilians assistance and some of them going to the hospital from the affects of hits from the fists of the larrakin and the Magistrate would send the poor little Larrakin into a dungeon for being a better man than such a parcel of armed curs. What would England do if America declared war and hoisted a green flag as its all Irishmen that has got command of her armies forts and batteries even her very life guards and beef tasters are Irish would they not slew around and fight her with their own arms for the sake of the colour they dare not wear for years. and to reinstate it and rise old Erins isle once more, from the pressure and tyrannism of the English yoke, which has kept it in poverty and starvation, and caused them to wear the enemys coats.

What else can England expect. Is there not big fat-necked Unicorns enough paid to torment and drive me to do thing which I dont wish to do, without the public assisting them I have never interefered with any person unless they deserved it, and yet there are civilians who take firearms against me, for what reason I do not know, unless they want me to turn on them and exterminate them without medicine. I shall be compelled to make an example of some of them if they cannot find no other employment If I had robbed and plundered ravished and murdered everything I met young and old rich and poor. the public could not do any more than take firearms and Assisting the police as they have done, but by the light that shines pegged on an ant-bed with their bellies opened their fat taken out rendered and poured down their throat boiling hot will be fool to what pleasure I will give some of them and any person aiding or harbouring or assisting the Police in any way whatever or employing any person whom they know to be a detective or cad or those who would be so deprived as to take blood money will be outlawed and declared unfit to be allowed human buriel their property either consumed or confiscated and them theirs and all belonging to them exterminated off the face of the earth, the enemy I cannot catch myself I shall give a payable reward for.

I would like to know who put that article that reminds me of a poodle dog half clipped in the lion fashion, called Brooke E. Smith Superintendent of Police he knows as much about commanding Police as Captain Standish does about mustering mosquitoes and boiling them down for their fat on the back blocks of the Lachlan for he has a head like a turnip a stiff neck as big as his shoulders narrow hipped and pointed towards the feet like a vine stake and if there is any one to be called a murderer regarding Kennedy, Scanlan and Lonigan it is that misplaced poodle he gets as much pay as a dozen good troopers, if there is any good in them, and what does he do for it he cannot look behind him without turning his whole frame it takes three or four police to keep sentry while he sleeps in Wangaratta, for fear of body snatchers do they think he is a superior animal to the men that has to guard him if so why not send the men that gets big pay and reconed superior to the common police after me and you shall soon save the country of high salaries to men that is fit for nothing else but getting better men than him self shot and sending orphan children to the industrial school to make prostitutes and cads of them for the Detectives and other evil disposed persons.

Send the high paid and men that received big salaries for years in a gang by themselves after me, As it makes no difference to them but it will give them a chance of showing whether they are worth more pay than a common trooper or not and I think the Public will soon find they are only in the road of good men and obtaining money under false pretences, I do not call McIntyre a coward for I reckon he is as game a man as wears the jacket as he had the presence of mind to know his position, directly as he was spoken to, and only foolishness to disobey, it was cowardice that made Lonigan and the others fight it is only foolhardiness to disobey an outlaw as any Policeman or other man who do not throw up their arms directly as I call on them knows the consequence which is a speedy dispatch to Kingdom Come, I wish those men who joined the stock protection society to withdraw their money and give it and as much more to the widows and orphans and poor of Greta district wher I spent and will again spend many a happy day fearless free and bold as it only aids the police to procure false witnesses and go whacks with men to steal horses and lag innocent men it would suit them far better to subscribe a sum and give it to the poor of their district and there is no fear of anyone stealing their property for no man could steal their horses without the knowledge of the poor if any man was mean enough to steal their property the poor would rise out to a man and find them if they were on the face of the earth it will always pay a rich man to be liberal with the poor and make as little enemies as he can as he shall find if the poor is on his side he shall loose nothing by it, If they depend in the police they shall be drove to destruction.

As they can not and will not protect them if duffing and bushranging were abolished the police would have to cadge for their living I speak from experience as I have sold horses and cattle innumerable and yet eight head of the culls is all ever was found I never was interfered with whilst I kept up this successful trade. I give fair warning to all those who has reason to fear me to sell out and give £10 out of every hundred towards the widow and orphan fund and do not attempt to reside in Victoria but as short a time as possible after reading this notice, neglect this and abide by the consequences, which shall be worse than the rust in the wheat in Victoria or the druth of a dry season to the grasshoppers in New South Wales I do not wish to give the order full force without giving timely warning. but I am a widows son outlawed and my orders must be obeyed.

11 January, 2026 06:00PM

British citizens, Palestine Action suspects: claiming political asylum in Australia

On 5 July 2025, the British Government proscribed Palestine Action as a terrorist group.

On 2 August 2025, an Australian judge in Sydney authorized the Palestine Action Group to march across one of our most iconic landmarks, the Sydney Harbour Bridge.

Thousands of brave police had to be mobilized at the last minute to protect the public from what the British perceive to be terrorism.

In the UK, over 2,500 members of the public have been arrested so far for supporting the terrorist group. The Australian police were able to minimize the cost to taxpayers by shutting down the march with a text message.

In UK jails, supporters of the terrorist group are now on hunger strike, with an imminent risk of death.

This creates an interesting opportunity for an arbitrage trade: UK and Australian justice officials have diverged so dramatically, banning a group in the UK while explicitly authorizing it in an Australian judgment, that there is a legal basis for people to claim political asylum.

While the first British settlers arrived on boats in 1788, arriving by boat is now effectively banned: asylum seekers arriving by boat are typically placed in detention.

Asylum seekers who arrive on a tourist visa are not subject to the mandatory detention scheme. Any British citizen who arrives on a tourist visa and decides to file their asylum claim after being allowed through passport control will be allowed to stay in Australia for as long as it takes to evaluate their case. Obviously, Australia will not be able to send people back to the UK as long as the hunger strike is in progress.

Whether we like it or not, the fact that an Australian judge has authorized Palestine Action sends a green light to anybody who feels they are persecuted for supporting the group in the UK.

If people disagree with that, Australia will need to ban all British visitors or make some other changes to asylum law to prevent people from exploiting the situation.

With the UK in a big freeze right now, the Palestine Action message has found new ways to get out. British authorities are warming up a jail cell for whoever keeps doing this:

Palestine Action, snow

Read more about political asylum.

11 January, 2026 01:00PM

Russell Coker

Terminal Emulator Security

I just read this informative article on ANSI terminal security [1]. The author has written a tool named vt-houdini for testing for these issues [2]. They used to host an instance on their server but appear to have stopped it. When you run that tool, you ssh to the system in question; you are connected without needing a password and the server then probes your terminal emulator for vulnerabilities. The versions of Kitty and Konsole in Debian/Trixie have just passed those tests on my system.

This will always be a potential security problem due to the purpose of a terminal emulator. A terminal emulator will often display untrusted data, and often data which is known to come from hostile sources (e.g. logs of attempted attacks). So what could be done in this regard?
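
As an aside (not from the article), there are ways to inspect untrusted data without letting raw escape sequences reach the terminal; the sed pattern below is a rough sketch and far from exhaustive:

# less renders control characters as highlighted ^X notation by default
less untrusted.log
# cat -v makes non-printing characters visible instead of interpreting them
cat -v untrusted.log
# strip CSI escape sequences before display (rough pattern, not exhaustive)
sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' untrusted.log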

Memory Protection

Due to the complexity of terminal emulation there is the possibility of buffer overflows and other memory management issues that could be used to compromise the emulator.

The Fil-C compiler is an interesting project [3]: it compiles existing C/C++ code with memory checks. It is reported to have no noticeable impact on the performance of the bash shell, which makes it sound like a useful option for addressing some of these issues, as shell security issues are connected to terminal security issues. The performance impact on a terminal emulator would likely be more noticeable. Also note that Fil-C compilation apparently requires compiling all libraries with it. This isn't a problem for bash, as the only libraries it uses nowadays are libtinfo and libc. The kitty terminal emulator doesn't have many libraries, but libpython is one of them; it's an essential part of Kitty and it is a complex library to compile in a different way. Konsole has about 160 libraries and it isn't plausible to recompile so many libraries at this time.

Choosing a terminal emulator that has a simpler design might help in this regard. Emulators that call libraries for 3D effects etc and native support for displaying in-line graphics have a much greater attack surface.

Access Control

A terminal emulator could be run in a container to prevent it from doing any damage if it is compromised. But the terminal emulator will have full control over the shell it runs, and if the shell has the access needed to allow commands like scp/rsync to do what is expected of them, then no useful level of containment is possible.

It would be possible to run a terminal emulator in a container for the purpose of connecting to an insecure or hostile system and not allow scp/rsync to/from any directory other than /tmp (or other directories to use for sharing files). You could run “exec ssh $SERVER” so the terminal emulator session ends when the ssh connection ends.
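
A minimal sketch of that idea, assuming firejail and xterm are installed (the sandboxing options shown are illustrative, not a vetted profile):

# start a throwaway terminal whose only job is a single ssh session;
# --private gives it an empty home directory, so a compromised emulator
# can't read or write the real one
firejail --private xterm -e ssh untrusted-server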

Conclusion

There aren’t good solutions to the problems of terminal emulation security. But testing every terminal emulator with vt-houdini and fuzzing the popular ones would be a good start.

Qubes-level isolation will help in some situations, but if you need to connect to a server with privileged access to read log files containing potentially hostile data (which is a common sysadmin use case) then there aren't good options.

11 January, 2026 03:46AM by etbe

January 09, 2026

Simon Josefsson

Debian Taco – Towards a GitSecDevOps Debian

One of my holiday projects was to understand and gain more trust in how Debian binaries are built, and as the holidays are coming to an end, I’d like to introduce a new research project called Debian Taco. I apparently need more holidays, because there is still more work to be done here, so at the end I’ll summarize some pending work.

Debian Taco, or TacOS, is a GitSecDevOps rebuild of Debian GNU/Linux.

The Debian Taco project publishes rebuilt binary packages, package repository metadata (InRelease, Packages, etc), container images, cloud images and live images.

All packages are built from pristine source packages in the Debian archive. Debian Taco does not modify any Debian source code nor add or remove any packages found in Debian.

No servers are involved! Everything is built in GitLab pipelines and results are published through modern GitDevOps mechanisms like GitLab Pages and S3 object storage. You can fork the individual projects below on GitLab.com and you will have your own Debian-derived OS available for tweaking. (Of course, at some level, servers are always involved, so this claim is a bit of hyperbole.)

Goals

The goal of TacOS is to be bit-by-bit identical with official Debian GNU/Linux and, until that has been achieved, to publish diffoscope output showing the differences.

The idea is to further categorize all artifact differences into one of the following categories:

1) An obvious bug in Debian. For example, if a package does not build reproducibly.

2) An obvious bug in TacOS. For example, if our build environment does not manage to build a package.

3) Something else. This would be input for further research and consideration. This category also includes things where it isn’t obvious whether it is a bug in Debian or in TacOS. Known examples:

3A) Packages in TacOS are rebuilt from the latest available source code, not from the (potentially) older source packages that were used to build the official Debian binaries. This could lead to differences in the packages. These differences may be useful to analyze to identify supply-chain attacks. See some discussion about idempotent rebuilds.

Our packages are all built from source code, except where we have not yet managed to build something. In that situation, Debian Taco falls back to using the official Debian artifact. This allows an incremental publication of Debian Taco that is still 100% complete without requiring that everything is rebuilt instantly. The goal is that everything should be rebuilt and, until that has been completed, to publish a list of artifacts that we use verbatim from Debian.

Debian Taco Archive

The Debian Taco Archive project generates and publishes the package archive (dists/tacos-trixie/InRelease, dists/tacos-trixie/main/binary-amd64/Packages.gz, pool/* etc), similar to what is published at https://deb.debian.org/debian/.

The output of the Debian Taco Archive is available from https://debdistutils.gitlab.io/tacos/archive/.
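
Using the archive from apt should then just need a sources.list entry along these lines (a sketch that matches the dists/tacos-trixie layout above; setting up signature verification is left out):

deb https://debdistutils.gitlab.io/tacos/archive/ tacos-trixie main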

Debian Taco Container Images

The Debian Taco Container Images project provides container images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

These images allow quick and simple interactive use of Debian Taco, and also make it easy to deploy it in container orchestration frameworks.
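
A hypothetical way to try them, assuming the images are published to the project’s GitLab container registry (the exact image path is an assumption, check the project page for the real one):

podman run --rm -it registry.gitlab.com/debdistutils/tacos/container-images:trixie /bin/bash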

Debian Taco Cloud Images

The Debian Taco Cloud Images project provides cloud images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

Launch and install Debian Taco for your cloud environment!

Debian Taco Live Images

The Debian Taco Live Images project provides live images of Debian Taco for trixie, forky and sid on the amd64 and arm64 architectures.

These images allow running Debian Taco on physical hardware (or virtual machines), and even installing it for permanent use.

Debian Taco Build Images and Packages

Packages are built using debdistbuild, which was introduced in the blog post Build Debian in a GitLab Pipeline.

The first step is to prepare build images, which is done by the Debian Taco Build Images project. They are similar to the Debian Taco containers but have build-essential and debdistbuild installed on them.

Debdistbuild is launched in a per-architecture, per-suite CI/CD project. Currently only trixie-amd64 is available. That project has built some essential early packages like base-files, debian-archive-keyring and hostname. They are stored in Git LFS backed by an S3 object storage. These packages were all built reproducibly, so Debian Taco is still 100% bit-by-bit identical to Debian, except for the renaming.
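
Verifying that claim for any single package is straightforward (a sketch; the package name, version and pool paths are illustrative):

curl -so official.deb https://deb.debian.org/debian/pool/main/h/hostname/hostname_3.25_amd64.deb
curl -so rebuilt.deb https://debdistutils.gitlab.io/tacos/archive/pool/main/h/hostname/hostname_3.25_amd64.deb
sha256sum official.deb rebuilt.deb
# diffoscope is only interesting when the checksums differ
diffoscope official.deb rebuilt.deb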

I’ve yet to launch a more massive, wide-scale package rebuild; some outstanding issues need to be resolved first. I earlier rebuilt around 7000 packages from Trixie on amd64, so I know that the method easily scales.

Remaining work

Where are the diffoscope outputs and the list of package differences? That’s for another holiday! Clearly this is an important remaining work item.

Another important outstanding issue is how to orchestrate launching the build of all packages. Clearly a list of packages is needed, and some trigger mechanism to understand when new packages are added to Debian.

One goal was to build packages from the tag2upload browse.dgit.debian.org archive, before checking the Debian Archive. This ought to be really simple to implement, but other matters came first.

GitLab or Codeberg?

Everything is written using basic POSIX /bin/sh shell scripts. Debian Taco uses the GitLab CI/CD Pipeline mechanism together with a Hetzner S3 object storage to serve packages. The scripts have only a weak reliance on GitLab-specific features and were designed with the intention to support other platforms. I believe reliance on a particular CI/CD platform is a limitation, so I’d like to explore shipping Debian Taco through a Forgejo-based architecture, possibly via Codeberg, as soon as I manage to deploy reliable Forgejo runners.

The important aspects that are required are:

1) Pipelines that can build and publish web sites similar to GitLab Pages. Codeberg has a pipeline mechanism. I’ve successfully used Codeberg Pages to publish the OATH Toolkit homepage. Glueing this together seems feasible.

2) Container Registry. It seems Forgejo supports a Container Registry but I’ve not worked with it at Codeberg to understand if there are any limitations.

3) Package Registry. The Debian Taco live images are uploaded into a package registry, because they are too big to be served through GitLab Pages. It may be converted to using a Pages mechanism, or possibly Release Artifacts if multi-GB artifacts are supported on other platforms.

I hope to continue this work and explain more details in a series of posts. Stay tuned!

09 January, 2026 04:33PM by simon

Russell Coker

LEAF ZE1 After 6 Months

About 6 months ago I got a Nissan LEAF ZE1 (2019 model) [1]. Generally it’s going well and I’m happy with most things about it.

One issue is that, as there isn’t a lot of weight in the front (the batteries are in the centre of the car), the front wheels slip easily when accelerating. It’s a minor thing but a good reason for wanting AWD in an electric car.

When I got the car I got two charging devices: one to charge from a regular 240V 10A power point (often referred to as a “granny charger”) and a cable with a special EV charging connector on each end. The cable with an EV connector on each end is designed for charging that’s faster than the “granny charger” but not as fast as the rapid chargers, which have the cable connected to the supply so the cable temperature can be monitored and/or controlled. That cable can be used if you get a fast charger set up at your home (which I never plan to do) and apparently at some small hotels and other places with home-style EV charging. I’m considering just selling that cable on ebay as I don’t think I have any need to personally own a cable other than the “granny charger”.

The key fob for the LEAF has a battery installed, it’s either CR2032 or CR2025 – mine has CR2025. Some reports on the Internet suggest that you can stuff a CR2032 battery in anyway but that didn’t work for me as the thickness of the battery stopped some of the contacts from making a good connection. I think I could have got it going by putting some metal in between but the batteries aren’t expensive enough to make it worth the effort and risk. It would be nice if I could use batteries from my stockpile of CR2032 batteries that came from old PCs but I can afford to spend a few dollars on it.

My driveway is short and if I left the charger out it would be visible from the street and at risk of being stolen. I’m thinking of chaining the charger to a tree and having some sort of waterproof enclosure for it so I don’t have to go to the effort of taking it out of the boot every time I use it. Then I could also configure the car to only charge during the peak sunlight hours when the solar power my home feeds into the grid has a negative price (we have so much solar power that it’s causing grid problems).

The cruise control is a pain to use, so much so that I have never yet got it to work usefully. The features look good in the documentation but in practice it’s not as good as the Kia one I’ve used previously, where I could just press one button to turn it on, another button to set the current speed as the cruise control speed, and then just have it work.

The electronic compass built in to the dash turned out to be surprisingly useful. I regret not gluing a compass to the dash of previous cars. One example is when I start google navigation for a journey and it says “go South on street X” and I need to know which direction is South so I don’t start in the wrong direction. Another example is when I know that I’m North of a major road that I need to take to get to my destination so I just need to go roughly South and that is enough to get me to a road I recognise.

In the past when there was a bird in the way I didn’t do anything different, I kept driving at the same speed and relied on the bird to see me and move out of the way. Birds have faster reactions than humans and have evolved to move at the speeds cars travel at on all roads other than freeways; also, birds that are on roads usually have an eye on each side of their head, so they can’t fail to see my car approaching. For decades this has worked, but recently a bird just stood on the road and got squashed. So I guess that I should honk when there are birds on the road.

Generally everything about the car is fine and I’m happy to keep driving it.

09 January, 2026 03:32AM by etbe

January 08, 2026

Dima Kogan

Meshroom packaged for Debian

Like the title says, I just packaged Meshroom (and all the adjacent dependencies) for Debian! This is a fancy photogrammetry toolkit that uses modern software development methods. "Modern" meaning that it has a multitude of dependencies that come from lots of disparate places, which make it impossible for a mere mortal to build the thing. The Linux "installer" is 13GB and probably is some sort of container, or something.

But now, if you have a Debian/sid box with the non-free repos enabled, you can

sudo apt install meshroom

And then you can generate and 3D-print a life-size, geometrically-accurate statue of your cat. The colmap package does a similar thing, and has been in Debian for a while. I think it can't do as many things, but it's good to have both tools easily available.
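
A minimal run looks something like this (a sketch: meshroom_batch is the upstream command-line entry point, though the exact flags in the Debian packaging may differ):

# feed a directory of overlapping photos, get a textured 3D mesh out
meshroom_batch --input ~/photos/cat/ --output ~/models/cat/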

These packages are all in contrib, because they depend on a number of non-free things, most notably CUDA.

This is currently in Debian/sid, but should be picked up by the downstream distros as they're released. The next noteworthy one is Ubuntu 26.04. Testing and feedback welcome.

08 January, 2026 11:34PM by Dima Kogan

Reproducible Builds

Reproducible Builds in December 2025

Welcome to the December 2025 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. New orig-check service to validate Debian upstream tarballs
  2. Distribution work
  3. disorderfs updated to FUSE 3
  4. Mailing list updates
  5. Three new academic papers published
  6. Website updates
  7. Upstream patches

New orig-check service to validate Debian upstream tarballs

This month, Debian Developer Lucas Nussbaum announced the orig-check service, which attempts to automatically reproduce the generation of upstream tarballs (i.e. the “original source” component of a Debian source package), comparing the result to the upstream tarball actually shipped with Debian.

As of the time of writing, it is possible for a Debian developer to upload a source archive that does not actually correspond to upstream’s version. Whilst this is not inherently malicious (it typically indicates some tooling/process issue), the very possibility that a maintainer’s version may differ potentially permits a maintainer to make (malicious) changes that would be misattributed to upstream.

This service therefore nicely complements the whatsrc.org service, which was reported in our reports for both April and August. The orig-check service is dedicated to Lunar, who sadly passed away a year ago.
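
The underlying comparison can be approximated by hand for a single package (a sketch; it assumes the package uses an unmodified upstream tarball and has a working debian/watch file):

# fetch the Debian source package, including the .orig tarball
apt-get source hello
# re-download the upstream release that debian/watch points at
(cd hello-*/ && uscan --download-current-version --destdir ../upstream)
# the checksums should match if the orig tarball is pristine
sha256sum hello_*.orig.tar.* upstream/*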


Distribution work

In Arch Linux this month, Robin Candau and Mark Hegreberg worked on making the Arch Linux WSL image bit-for-bit reproducible. Robin also shared some implementation details and future related work on our mailing list.

Continuing a series reported in these reports for March, April and July 2025 (etc.), Simon Josefsson has published another interesting article this month, itself a followup to a post Simon published in December 2024 regarding GNU Guix Container Images that are hosted on GitLab.

In Debian this month, Micha Lenk posted to the debian-backports-announce mailing list with the news that the Backports archive will now discard binaries generated and uploaded by maintainers: “The benefit is that all binary packages [will] get built by the Debian buildds before we distribute them within the archive.”

Felix Moessbauer of Siemens then filed a bug in the Debian bug tracker to signal their intention to package debsbom, a software bill of materials (SBOM) generator for distributions based on Debian. This generated a discussion on the bug inquiring about the output format as well as a question about how these SBOMs might be distributed.

Holger Levsen merged a number of significant changes written by Alper Nebi Yasak to the Debian Installer in order to improve its reproducibility. As noted in Alper’s merge request, “These are the reproducibility fixes I looked into before bookworm release, but was a bit afraid to send as it’s just before the release, because the things like the xorriso conversion changes the content of the files to try to make them reproducible.”

In addition, 76 reviews of Debian packages were added, 8 were updated and 27 were removed this month, adding to our knowledge about identified issues. A new different_package_content_when_built_with_nocheck issue type was added by Holger Levsen. []

Arnout Engelen posted to our mailing list reporting that they successfully reproduced the NixOS minimal installation ISO for the 25.11 release without relying on a pre-compiled package archive, with more details on their blog.

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.


disorderfs updated to FUSE 3

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues.

This month, however, Roland Clobus upgraded disorderfs from FUSE 2 to FUSE 3 after its package automatically got removed from Debian testing. Some tests in Debian currently require disorderfs to make the Debian live images reproducible, although disorderfs is not a Debian-specific tool.


Mailing list updates

On our mailing list this month:

  • Luca Di Maio announced stampdalf, a “filesystem timestamp preservation” tool that wraps “arbitrary commands and ensures filesystem timestamp reproducibility”:

    stampdalf allows you to run any command that modifies files in a directory tree, then automatically resets all timestamps back to their original values. Any new files created during command execution are set to [the UNIX epoch] or a custom timestamp via SOURCE_DATE_EPOCH.

    The project’s GitHub page helpfully reveals that the project is “pronounced: stamp-dalf (stamp like time-stamp, dalf like Gandalf the wizard)” as “it’s a wizard of time and stamps”. (A hand-rolled sketch of this kind of timestamp normalization follows after this list.)

  • Lastly, Reproducible Builds developer cen1 posted to our list announcing that “early/experimental/alpha” support for FreeBSD was added to rebuilderd. In their post, cen1 reports that the “initial builds are in progress and look quite decent”. cen1 also interestingly notes that “since the upstream is currently not technically reproducible I had to relax the bit-for-bit identical requirement of rebuilderd [—] I consider the pkg to be reproducible if the tar is content-identical (via diffoscope), ignoring timestamps and some of the manifest files.”.
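
For readers unfamiliar with timestamp normalization, the effect such wrappers automate can be approximated by hand with plain GNU coreutils (a sketch, and not stampdalf’s actual interface):

# pick a deterministic reference time, e.g. the last commit (assumes a git checkout)
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
# build, then reset every file timestamp in the output tree to that time
make -C build
find build -exec touch -d "@${SOURCE_DATE_EPOCH}" {} +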


Three new academic papers published

Yogya Gamage and Benoit Baudry of Université de Montréal, Canada together with Deepika Tiwari and Martin Monperrus of KTH Royal Institute of Technology, Sweden published a paper on The Design Space of Lockfiles Across Package Managers:

Most package managers also generate a lockfile, which records the exact set of resolved dependency versions. Lockfiles are used to reduce build times; to verify the integrity of resolved packages; and to support build reproducibility across environments and time. Despite these beneficial features, developers often struggle with their maintenance, usage, and interpretation. In this study, we unveil the major challenges related to lockfiles, such that future researchers and engineers can address them. […]

A PDF of their paper is available online.

Benoit Baudry also posted an announcement to our mailing list, which generated a number of replies.


Betul Gokkaya, Leonardo Aniello and Basel Halak of the University of Southampton then published a paper, A taxonomy of attacks, mitigations and risk assessment strategies within the software supply chain:

While existing studies primarily focus on software supply chain attacks’ prevention and detection methods, there is a need for a broad overview of attacks and comprehensive risk assessment for software supply chain security. This study conducts a systematic literature review to fill this gap. By analyzing 96 papers published between 2015-2023, we identified 19 distinct SSC attacks, including 6 novel attacks highlighted in recent studies. Additionally, we developed 25 specific security controls and established a precisely mapped taxonomy that transparently links each control to one or more specific attacks. […]

A PDF of the paper is available online via the article’s canonical page.


Aman Sharma and Martin Monperrus of the KTH Royal Institute of Technology, Sweden along with Benoit Baudry of Université de Montréal, Canada published a paper this month on Causes and Canonicalization of Unreproducible Builds in Java. The abstract of the paper is as follows:

[Achieving] reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central, and we develop a novel taxonomy of six root causes of unreproducibility. […]

A PDF of the paper is available online.


Website updates

Once again, there were a number of improvements made to our website this month including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:

08 January, 2026 10:51PM

Sven Hoexter

Moving from hexchat to Halloy

I'm not hanging around on IRC a lot these days, but when I do I use hexchat (and xchat before that). Probably a bad habit of clinging to what I got used to over the past 25 years. But in the light of the planned removal of GTK2, it felt like it was time to look for an alternative.

Halloy looked interesting, albeit not packaged for Debian. But upstream references a flatpak (another party I had not joined so far), which was good enough to give it a try.

$ sudo apt install flatpak
$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
$ flatpak install org.squidowl.halloy
$ flatpak run org.squidowl.halloy

Configuration ends up at ~/.var/app/org.squidowl.halloy/config/halloy/config.toml, which I linked for convenience to ~/.halloy.toml.
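
The symlink is just:

ln -s ~/.var/app/org.squidowl.halloy/config/halloy/config.toml ~/.halloy.toml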

Since I connect via ZNC in an odd old setup without those virtual networks but with several accounts, and of course never bothered to replace the self-signed certificate, some additional configuration is required to be able to connect. Each account gets its own servers.<foo> block like this:

[servers.bnc-oftc]
nickname = "my-znc-user-for-this-network"
server = "sven.stormbind.net"
dangerously_accept_invalid_certs = true
password = "mypassword"
port = 4711
use_tls = true

Halloy has also a small ZNC guide.

I'm growing old, so a bigger font size is useful. Be aware that font changes require an application restart to take effect.

[font]
size = 16
family = "Noto Mono"

I also prefer the single-pane mode, which can be set up by copy & pasting from the documentation.

Works well enough for now. hexchat was also the last non-Wayland application I had been using (the xlsclients output is finally empty).

08 January, 2026 10:35AM

Swiss JuristGate

Burn victims treated worse than suspects: Crans-Montana double-whammy, years of arguments over medical bills and compensation

The Nazis claimed they were humane killers. Victims of Zyklon B in the gas chambers were supposed to die within two minutes.

Those who survived the fire at Le Constellation, Crans-Montana, Canton Valais are set to face years of arguments about their medical bills and compensation.

Authorities have revealed that the vast majority of victims inhaled hot, poisonous fumes, burning their airways and lungs and requiring them to be placed into an induced coma and intubated.

We don't know if they are having dreams or nightmares in the coma. What we do know is that they will wake up to the nightmare of Swiss medical bills and insurance companies.

Tages Anzeiger has discussed the scars of medical debt with Swiss jurist Rolf Steinegger. Many other media outlets repeated his comments (also in Le Temps):

For victims, the process can be very grueling.

The victims of the fire in Crans-Montana could face a legal battle lasting years. A lawyer is therefore appealing to the federal government – ​​also to protect Switzerland's reputation.

...

Clarifying which party (managers, building inspectors, architect, builder) is really guilty after the club fire that killed 40 people could take years.

...

“It can often take years before they receive compensation, and even then, it can be very unsatisfactory.”

...

... prevent a repeat of the Kaprun fire disaster in November 2000. The tunnel fire, which claimed 155 lives, was followed by years of legal proceedings. "In the end, the victims were frustrated because they felt the process was unfair," Steinegger explains.

...

People who are disabled after the disaster and, for example, can no longer work, face enormous follow-up costs. "These costs are simply incalculable," says Steinegger.

The inspectors, architect and suppliers of insulation will all try to cover themselves and point blame at each other. This will frustrate victims.

This web site was originally founded when the insurance regulator themselves failed. Look at the example of the email from Joao Rodrigues Boleiro. In the email, written in French, Rodrigues Boleiro is telling me that he is not responsible and it is all the fault of another partner in the firm: Mathieu Parreaux, the founder. In the last post that Parreaux himself published on LinkedIn, Parreaux told us that FINMA is responsible.

Look at Grenfell Tower and the insulation company. Internal documents show that the manufacturer, Kingspan, knew about fire risk and promoted the product in places where it shouldn't be used.

Yet in Canton Valais, they weren't even sure how many victims they had. It took a week for all victims to be counted correctly before anyone could even begin thinking about legal action. First they told us there were 113 victims, then there were 119 victims and now only 116 victims (also in BBC report).

In the German language, the same root gives us Schuldige for somebody who is guilty of a crime and Schuldner for somebody who has a big debt for medical expenses.

The owners of the bar, who are suspects, have been given unconditional bail and are free to come and go as they please while the investigation is underway, even while some of the victims may be involuntarily confined to hospital for months.

When somebody fell on Carla in Zurich, the Swiss authorities spent two years protecting the yoga studio and blaming Carla and me.

I hope the victims of the fire will not spend the next two years in arguments about insurance. The jurist Rolf Steinegger suggests it may be much longer than that.

While victims wait for the money questions to be resolved, they may have a black mark on their credit record. This is called a Betreibung in German or a poursuite in French. The same word is also used to describe stalking. Think of these black marks on the credit record as analogous to the life-long scars on the victims' faces.

The Holocaust kicked off with the Kristallnacht, or night of broken glass, on 9 November 1938. Incidentally, one of the most violent attacks against a young woman in Australia took place on 9 November 2005. Lauren Huxley was beaten with a wrench and then set on fire.

At the end of his sentence, the man responsible was released from prison in 2025. The media were keen to interview Lauren and publish before-and-after photos.

Lauren Huxley, 2005, 2025

Technically, on paper, Jacques Moretti and Jessica Maric may be responsible for the tragedy because they own the bar. In practice, most bar owners are not experts on the use of fire-resistant construction practices and they depend on the suppliers of construction materials, the local municipality and the building inspectors to provide specialist insights into fire-resistant design.

Please see the rest of the JuristGate reports.

08 January, 2026 07:30AM

January 07, 2026

Gunnar Wolf

Artificial Intelligence • Play or break the deck

This post is an unpublished review for Artificial Intelligence • Play or break the deck

As a little disclaimer, I usually review books or articles written in English, and although I will offer this review to Computing Reviews as usual, it is likely it will not be published. The title of this book in Spanish is Inteligencia artificial: jugar o romper la baraja.

I was pointed at this book, published last October by Margarita Padilla García, a well-known Free Software activist from Spain who has long worked on analyzing (and shaping) aspects of socio-technological change. As other books published by Traficantes de sueños, this book is published as Open Access, under a CC BY-NC license, and can be downloaded in full. I started casually looking at this book, with too long a backlog of material to read, but soon realized I could just not put it down: it completely captured me.

This book presents several aspects of Artificial Intelligence (AI), written for a general, non-technical audience. Many books with a similar target have been published, but this one is quite unique; first of all, it is written in a personal, informal tone. Contrary to what’s usual in my reading, the author made the explicit decision not to fill the book with references to her sources (“because searching on the Internet, it’s very easy to find things”), making the book easier to read linearly — a decision I somewhat regret, but recognize helps develop the author’s style.

The book has seven sections, dealing with different aspects of AI. They are the “Visions” (historical framing of the development of AI); “Spectacular” (why do we feel AI to be so disrupting, digging particularly into game engines and search space); “Strategies”, explaining how multilayer neural networks work and linking the various branches of historic AI together, arriving at Natural Language Processing; “On the inside”, tackling technical details such as algorithms, the importance of training data, bias, discrimination; “On the outside”, presenting several example AI implementations with socio-ethical implications; “Philosophy”, presenting the works of Marx, Heidegger and Simondon in their relation with AI, work, justice, ownership; and “Doing”, presenting aspects of social activism in relation to AI. Each part ends with yet another personal note: Margarita Padilla includes a letter to one of her friends related to said part.

Totalling 272 pages (A5, or roughly half-letter, format), this is a rather small book; I read it over about a week. So, while this book did not provide me with lots of new information, the way it was written made it a very pleasing experience, and it will surely influence the way I understand or explain several concepts in this domain.

07 January, 2026 07:46PM

Thorsten Alteholz

My Debian Activities in December 2025

Debian LTS/ELTS

This was my one-hundred-and-thirty-eighth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. (As the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities.)

During my allocated time I uploaded or worked on:

  • [cups] upload to unstable to fix an issue with the latest security upload
  • [libcoap3] uploaded to unstable to fix ten CVEs
  • [gcal] check whether security bug reports are really security bug reports (no, they are not and no CVEs have been issued yet)
  • [#1124284] trixie-pu for libcoap3 to fix ten CVEs in Trixie.
  • [#1121342] trixie-pu bug; debdiff has been approved and libcupsfilters uploaded.
  • [#1121391] trixie-pu bug; debdiff has been approved and cups-filter uploaded.
  • [#1121392] bookworm-pu bug; debdiff has been approved and cups-filter uploaded.
  • [#1121433] trixie-pu bug; debdiff has been approved and rlottie uploaded.
  • [#1121437] bookworm-pu bug; debdiff has been approved and rlottie uploaded.
  • [#1124284] trixie-pu bug; debdiff has been approved and libcoap3 uploaded.

I also tried to backport the libcoap3-patches to Bookworm, but I am afraid the changes would be too intrusive.

When I stumbled upon a comment for 7zip about “finding the patches might be a hard”, I couldn’t believe it. Well, Daniel was right and I didn’t find any.

Furthermore I worked on suricata, marked some CVEs as not-affected or ignored, and added some new patches. Unfortunately my allocated time was spent before I could do a new upload.

I also attended the monthly LTS/ELTS meeting.

Last but not least I injected some packages for uploads to security-master.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

  • cups to unstable.

This work is generously funded by Freexian!

Debian Lomiri

I started to contribute to Lomiri packages, which are part of the Debian UBports Team. As a first step I took care of failing CI pipelines and tried to fix them. A next step would be to package some new applications.

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

This month I uploaded a new upstream version or a bugfix version of:

Last but not least, I wish (almost) everybody a Happy New Year and hope that you are able to stick to your New Year’s resolutions.

07 January, 2026 02:54PM by alteholz

January 06, 2026

Ingo Juergensmann

Outages on Nerdculture.de due to Ceph – Part 2

Last weekend I had “fun” with Ceph again on a Saturday evening. But let’s start at the beginning….

Before the weekend I announced a downtime/maintenance window to upgrade PostgreSQL from v15 to v17 – because of the Debian upgrade from Bookworm to Trixie. After some tests with a cloned VM I decided to use the quick path of pg_upgradecluster 15 main -v 17 -m upgrade --clone. As this would be my first time upgrading PostgreSQL that way, I made several backups. In the end everything went smoothly and the database is now on v17.

However, there was also a new Proxmox kernel and packages, so I also upgraded one Proxmox node and rebooted it. And then the issues began:

But before that I also encountered an issue with Redis for Mastodon. It complained about this:

Unable to obtain the AOF file appendonly.aof.4398.base.rdb

The solution to this was to disable AOF by setting appendonly no in the Redis configuration.
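For reference, disabling AOF persistence looks like this in the Redis configuration (the file path assumes the Debian default; with AOF off, Redis persists only via RDB snapshots):

# /etc/redis/redis.conf: disable append-only-file persistence
appendonly no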

And then CephFS was unavailable again, complaining about a laggy MDS or no MDS at all, which – of course – was totally wrong. I searched for solutions and read many posts in the Proxmox forum, but nothing helped. I also read the official Ceph documentation. After a whole day of all services being offline for my thousands of users, I somehow managed to get CephFS mounted again with systemctl reset-failed mnt-pve-cephfs && systemctl start mnt-pve-cephfs. Shortly before that I had followed the advice in the Ceph docs for RADOS Health, especially the section about Troubleshooting Monitors.

In the end, I can’t say exactly which step did the trick to get CephFS working again. But as it seems, I will have one or two more chances to find out, because only one server out of three has been updated so far.

Another issue during the downtime was that one server crashed/rebooted and didn’t come back. It hung in the middle of an upgrade, at the point of running update-grub. Usually that wouldn’t be a big deal: just go to the IPMI web interface and reboot the server.

Nah! That’s too simple!

For some unknown reason the IPMI interfaces had lost their DHCP leases: the DHCP server at the colocation was not serving IPs. So I opened a ticket, got some acknowledgement from the support, but also a statement of “maybe tomorrow or on Monday…”. Hmpf!

On Sunday evening I managed to bring back CephFS. As said: no idea which specific step did the trick. But the story continues: on Monday before lunchtime the IPMI DHCP was working again and I could access the web interfaces again, logged in… and was forcefully locked out again:

Your session has timed out. You will need to open a new session

I hit the problem described here. But cold-resetting the BMC didn’t work, so there was still no working web interface to deal with the issue. However, I have the “IPMIView” app on my phone, and that still worked and showed the KVM console. What I saw there didn’t make me happy either:

The reason for this is apparently the crash while running update-grub. Anyway, using the Grub bootloader and selecting an older kernel works fine. The server boots, Proxmox shows the node as up and… the working CephFS is stalled again! Fsck!

Rebooting the node or stopping Ceph on that node results immediately in a working CephFS again.

Currently I’m moving everything off of Ceph to the local disks of the two nodes. If everything is on local disks I can work on debugging CephFS without interrupting the service for the users (hopefully). But this also means that there will be no redundancy for Mastodon and mail.

When I have more detailed information about possible reasons and such, I may post it to the Proxmox forum.

06 January, 2026 03:57PM by ij

January 05, 2026

Matthew Garrett

Not here

Hello! I am not posting here any more. You can find me here instead. Most Planets should be updated already (I've an MR open for Planet Gnome), but if you're subscribed to my feed directly please update it.

05 January, 2026 10:26PM

Colin Watson

Free software activity in December 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python packaging

I upgraded these packages to new upstream versions:

Python 3.14 is now a supported version in unstable, and we’re working to get that into testing. As usual this is a pretty arduous effort because it requires going round and fixing lots of odds and ends across the whole ecosystem. We can deal with a fair number of problems by keeping up with upstream (see above), but there tends to be a long tail of packages whose upstreams are less active and where we need to chase them, or where problems only show up in Debian for one reason or another. I spent a lot of time working on this:

Fixes for pytest 9:

I filed lintian: Report Python egg-info files/directories to help us track the migration to pybuild-plugin-pyproject.

I did some work on dh-python: Normalize names in pydist lookups and pyproject plugin: Support headers (the latter of which allowed converting python-persistent and zope.proxy to pybuild-plugin-pyproject, although it needed a follow-up fix).

I fixed or helped to fix several other build/test failures:

Other bugs:

Other bits and pieces

Code reviews

05 January, 2026 01:08PM by Colin Watson

Bits from Debian

Debian welcomes Outreachy interns for December 2025-March 2026 round

Outreachy logo

Debian continues participating in Outreachy, and as you might have already noticed, Debian has selected two interns for the Outreachy December 2025 - March 2026 round.

After a busy contribution phase and a competitive selection process, Hellen Chemtai Taylor and Isoken Ibizugbe have officially been working for the past month as interns on Debian Images Testing with OpenQA, mentored by Tássia Camões Araújo, Roland Clobus and Philip Hands.

Congratulations and welcome Hellen Chemtai Taylor and Isoken Ibizugbe!

The team also congratulates all candidates for their valuable contributions, with special thanks to those who manage to continue participating as volunteers.

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help to improve Debian! You can follow the work of the Outreachy interns reading their blog posts (syndicated in Planet Debian), and chat with the team at the debian-openqa matrix channel. For Outreachy matters, the programme admins can be reached on #debian-outreach IRC/matrix channel and mailing list.

05 January, 2026 09:00AM by Anupa Ann Joseph, Tássia Camões Araújo

Vincent Bernat

Using eBPF to load-balance traffic across UDP sockets with Go

Akvorado collects sFlow and IPFIX flows over UDP. Because UDP does not retransmit lost packets, it needs to process them quickly. Akvorado runs several workers listening to the same port. The kernel should load-balance received packets fairly between these workers. However, this does not work as expected. A couple of workers exhibit high packet loss:

$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
> | sed -n s/akvorado_inlet_flow_input_udp_in_dropped//p
packets_total{listener="0.0.0.0:2055",worker="0"} 0
packets_total{listener="0.0.0.0:2055",worker="1"} 0
packets_total{listener="0.0.0.0:2055",worker="2"} 0
packets_total{listener="0.0.0.0:2055",worker="3"} 1.614933572278264e+15
packets_total{listener="0.0.0.0:2055",worker="4"} 0
packets_total{listener="0.0.0.0:2055",worker="5"} 0
packets_total{listener="0.0.0.0:2055",worker="6"} 9.59964121598348e+14
packets_total{listener="0.0.0.0:2055",worker="7"} 0

eBPF can help by implementing an alternate balancing algorithm.

Options for load-balancing

There are three methods to load-balance UDP packets across workers:

  1. One worker receives the packets and dispatches them to the other workers.
  2. All workers share the same socket.
  3. Each worker has its own socket, listening to the same port, with the SO_REUSEPORT socket option.

SO_REUSEPORT option

Tom Herbert added the SO_REUSEPORT socket option in Linux 3.9. The cover letter for his patch series explains why this new option is better than the two existing ones from a performance point of view:

SO_REUSEPORT allows multiple listener sockets to be bound to the same port. […] Received packets are distributed to multiple sockets bound to the same port using a 4-tuple hash.

The motivating case for SO_REUSEPORT in TCP would be something like a web server binding to port 80 running with multiple threads, where each thread might have its own listener socket. This could be done as an alternative to other models:

  1. have one listener thread which dispatches completed connections to workers, or
  2. accept on a single listener socket from multiple threads.

In case #1, the listener thread can easily become the bottleneck with high connection turn-over rate. In case #2, the proportion of connections accepted per thread tends to be uneven under high connection load. […] We have seen the disproportion to be as high as 3:1 ratio between thread accepting most connections and the one accepting the fewest. With SO_REUSEPORT the distribution is uniform.

The motivating case for SO_REUSEPORT in UDP would be something like a DNS server. An alternative would be to receive on the same socket from multiple threads. As in the case of TCP, the load across these threads tends to be disproportionate and we also see a lot of contention on the socket lock.

Akvorado uses the SO_REUSEPORT option to dispatch the packets across the workers. However, because the distribution uses a 4-tuple hash, a single socket handles all the flows from one exporter.

SO_ATTACH_REUSEPORT_EBPF option

In Linux 4.5, Craig Gallek added the SO_ATTACH_REUSEPORT_EBPF option to attach an eBPF program to select the target UDP socket. In Linux 4.6, he extended it to support TCP. The socket(7) manual page documents this mechanism:1

The BPF program must return an index between 0 and N-1 representing the socket which should receive the packet (where N is the number of sockets in the group). If the BPF program returns an invalid index, socket selection will fall back to the plain SO_REUSEPORT mechanism.
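For illustration only (my own sketch, not code from this article), a program for this older mechanism is a plain socket filter whose return value picks the socket:

SEC("socket")
int reuseport_index_prog(struct __sk_buff *skb)
{
    /* Return an index between 0 and N-1; 8 is a hypothetical group size.
     * An out-of-range value falls back to the plain SO_REUSEPORT hash. */
    return bpf_get_prandom_u32() % 8;
}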

In Linux 4.19, Martin KaFai Lau added the BPF_PROG_TYPE_SK_REUSEPORT program type. Such an eBPF program selects the socket from a BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map instead. This new approach is more reliable when switching target sockets from one instance to another—for example, when upgrading, a new instance can add its sockets and remove the old ones.

Load-balancing with eBPF and Go

Altering the load-balancing algorithm for a group of sockets requires two steps:

  1. write and compile an eBPF program in C,2 and
  2. load it and attach it in Go.

eBPF program in C

A simple load-balancing algorithm is to randomly choose the destination socket. The kernel provides the bpf_get_prandom_u32() helper function to get a pseudo-random number.

volatile const __u32 num_sockets; // ❶

struct {
    __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
    __type(key, __u32);
    __type(value, __u64);
    __uint(max_entries, 256);
} socket_map SEC(".maps"); // ❷

SEC("sk_reuseport")
int reuseport_balance_prog(struct sk_reuseport_md *reuse_md)
{
    __u32 index = bpf_get_prandom_u32() % num_sockets; // ❸
    bpf_sk_select_reuseport(reuse_md, &socket_map, &index, 0); // ❹
    return SK_PASS; // ❺
}

char _license[] SEC("license") = "GPL";

In ❶, we declare a volatile constant for the number of sockets in the group. We will initialize this constant before loading the eBPF program into the kernel. In ❷, we define the socket map. We will populate it with the socket file descriptors. In ❸, we randomly select the index of the target socket.3 In ❹, we invoke the bpf_sk_select_reuseport() helper to record our decision. Finally, in ❺, we accept the packet.

Header files

If you compile the C source with clang, you get errors due to missing headers. The recommended way to solve this is to generate a vmlinux.h file with bpftool:

$ bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h

Then, include the following headers:4

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

For my 6.17 kernel, the generated vmlinux.h is quite big: 2.7 MiB. Moreover, bpf/bpf_helpers.h is shipped with libbpf. This adds another dependency for users. As the eBPF program is quite small, I prefer to put the strict minimum in vmlinux.h by cherry-picking the definitions I need.
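As a rough idea of what such a hand-trimmed header can look like (my own sketch, not the article’s file; the exact set of typedefs needed depends on the libbpf version, since bpf/bpf_helpers.h pulls in prototypes for all helpers):

/* Minimal hand-written vmlinux.h for reuseport_kern.c */
typedef unsigned char __u8;
typedef unsigned short __u16;
typedef unsigned int __u32;
typedef unsigned long long __u64;
typedef int __s32;
typedef long long __s64;
typedef __u16 __be16;
typedef __u32 __be32;
typedef __u64 __be64;
typedef __u32 __wsum;

enum sk_action {
    SK_DROP = 0,
    SK_PASS = 1,
};

enum bpf_map_type {
    BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20, /* value from the kernel UAPI */
};

/* The program only passes the context pointer through to
 * bpf_sk_select_reuseport(), so an incomplete type is enough. */
struct sk_reuseport_md;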

Compilation

The eBPF Library for Go ships bpf2go, a tool to compile eBPF programs and to generate some scaffolding code. We create a gen.go file with the following content:

package main

//go:generate go tool bpf2go -tags linux reuseport reuseport_kern.c

After running go generate ./..., we can inspect the resulting objects with readelf and llvm-objdump:

$ readelf -S reuseport_bpfeb.o
There are 14 section headers, starting at offset 0x840:
  [Nr] Name              Type             Address           Offset
[…]
  [ 3] sk_reuseport      PROGBITS         0000000000000000  00000040
  [ 6] .maps             PROGBITS         0000000000000000  000000c8
  [ 7] license           PROGBITS         0000000000000000  000000e8
[…]
$ llvm-objdump -S reuseport_bpfeb.o
reuseport_bpfeb.o:  file format elf64-bpf
Disassembly of section sk_reuseport:
0000000000000000 <reuseport_balance_prog>:
; {
       0:   bf 61 00 00 00 00 00 00     r6 = r1
;     __u32 index = bpf_get_prandom_u32() % num_sockets;
       1:   85 00 00 00 00 00 00 07     call 0x7
[…]

Usage from Go

Let’s set up 10 workers listening to the same port.5 Each socket enables the SO_REUSEPORT option before binding:6

var (
    err error
    fds []uintptr
    conns []*net.UDPConn
)
workers := 10
listenAddr := "127.0.0.1:0"
listenConfig := net.ListenConfig{
    Control: func(_, _ string, c syscall.RawConn) error {
        c.Control(func(fd uintptr) {
            err = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
            fds = append(fds, fd)
        })
        return err
    },
}
for range workers {
    pconn, err := listenConfig.ListenPacket(t.Context(), "udp", listenAddr)
    if err != nil {
        t.Fatalf("ListenPacket() error:\n%+v", err)
    }
    udpConn := pconn.(*net.UDPConn)
    listenAddr = udpConn.LocalAddr().String()
    conns = append(conns, udpConn)
}

The second step is to load the eBPF program, initialize the num_sockets variable, populate the socket map, and attach the program to the first socket.7

// Load the eBPF collection.
spec, err := loadReuseport()
if err != nil {
    t.Fatalf("loadVariables() error:\n%+v", err)
}

// Set "num_sockets" global variable to the number of file descriptors we will register
if err := spec.Variables["num_sockets"].Set(uint32(len(fds))); err != nil {
    t.Fatalf("NumSockets.Set() error:\n%+v", err)
}

// Load the map and the program into the kernel.
var objs reuseportObjects
if err := spec.LoadAndAssign(&objs, nil); err != nil {
    t.Fatalf("loadReuseportObjects() error:\n%+v", err)
}
t.Cleanup(func() { objs.Close() })

// Assign the file descriptors to the socket map.
for worker, fd := range fds {
    if err := objs.reuseportMaps.SocketMap.Put(uint32(worker), uint64(fd)); err != nil {
        t.Fatalf("SocketMap.Put() error:\n%+v", err)
    }
}

// Attach the eBPF program to the first socket.
socketFD := int(fds[0])
progFD := objs.reuseportPrograms.ReuseportBalanceProg.FD()
if err := unix.SetsockoptInt(socketFD, unix.SOL_SOCKET, unix.SO_ATTACH_REUSEPORT_EBPF, progFD); err != nil {
    t.Fatalf("SetsockoptInt() error:\n%+v", err)
}

We are now ready to process incoming packets. Each worker is a Go routine incrementing a counter for each received packet:8

var wg sync.WaitGroup
receivedPackets := make([]int, workers)
for worker := range workers {
    conn := conns[worker]
    packets := &receivedPackets[worker]
    wg.Go(func() {
        payload := make([]byte, 9000)
        for {
            if _, err := conn.Read(payload); err != nil {
                if errors.Is(err, net.ErrClosed) {
                    return
                }
                t.Logf("Read() error:\n%+v", err)
            }
            *packets++
        }
    })
}

Let’s send 1000 packets:

sentPackets := 1000
conn, err := net.Dial("udp", conns[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
for range sentPackets {
    if _, err := conn.Write([]byte("hello world!")); err != nil {
        t.Fatalf("Write() error:\n%+v", err)
    }
}

If we print the content of the receivedPackets array, we can check that the balancing works as expected, with each worker getting about 100 packets:

=== RUN   TestUDPWorkerBalancing
    balancing_test.go:84: receivedPackets[0] = 107
    balancing_test.go:84: receivedPackets[1] = 92
    balancing_test.go:84: receivedPackets[2] = 99
    balancing_test.go:84: receivedPackets[3] = 105
    balancing_test.go:84: receivedPackets[4] = 107
    balancing_test.go:84: receivedPackets[5] = 96
    balancing_test.go:84: receivedPackets[6] = 102
    balancing_test.go:84: receivedPackets[7] = 105
    balancing_test.go:84: receivedPackets[8] = 99
    balancing_test.go:84: receivedPackets[9] = 88

    balancing_test.go:91: receivedPackets = 1000
    balancing_test.go:92: sentPackets     = 1000

Graceful restart

You can also use SO_ATTACH_REUSEPORT_EBPF to gracefully restart an application. A new instance of the application binds to the same address and prepares its own version of the socket map. Once it attaches the eBPF program to the first socket, the kernel steers incoming packets to this new instance. The old instance needs to drain the already-received packets before shutting down.

To check that we are not losing any packets, we spawn a Go routine to send as many packets as possible:

sentPackets := 0
notSentPackets := 0
done := make(chan bool)
conn, err := net.Dial("udp", conns1[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
go func() {
    for {
        if _, err := conn.Write([]byte("hello world!")); err != nil {
            notSentPackets++
        } else {
            sentPackets++
        }
        select {
        case <-done:
            return
        default:
        }
    }
}()

Then, while the Go routine runs, we start the second set of workers. Once they are running, they start receiving packets. If we gracefully stop the initial set of workers, not a single packet is lost!9

=== RUN   TestGracefulRestart
    graceful_test.go:135: receivedPackets1[0] = 165
    graceful_test.go:135: receivedPackets1[1] = 195
    graceful_test.go:135: receivedPackets1[2] = 194
    graceful_test.go:135: receivedPackets1[3] = 190
    graceful_test.go:135: receivedPackets1[4] = 213
    graceful_test.go:135: receivedPackets1[5] = 187
    graceful_test.go:135: receivedPackets1[6] = 170
    graceful_test.go:135: receivedPackets1[7] = 190
    graceful_test.go:135: receivedPackets1[8] = 194
    graceful_test.go:135: receivedPackets1[9] = 155

    graceful_test.go:139: receivedPackets2[0] = 1631
    graceful_test.go:139: receivedPackets2[1] = 1582
    graceful_test.go:139: receivedPackets2[2] = 1594
    graceful_test.go:139: receivedPackets2[3] = 1611
    graceful_test.go:139: receivedPackets2[4] = 1571
    graceful_test.go:139: receivedPackets2[5] = 1660
    graceful_test.go:139: receivedPackets2[6] = 1587
    graceful_test.go:139: receivedPackets2[7] = 1605
    graceful_test.go:139: receivedPackets2[8] = 1631
    graceful_test.go:139: receivedPackets2[9] = 1689

    graceful_test.go:147: receivedPackets = 18014
    graceful_test.go:148: sentPackets     = 18014

Unfortunately, gracefully shutting down a UDP socket is not trivial in Go.10 Previously, we were terminating workers by closing their sockets. However, if we close them too soon, the application loses packets that were assigned to them but not yet processed. Before stopping, a worker needs to call conn.Read() until there are no more packets. A solution is to set a deadline for conn.Read() and check if we should stop the Go routine when the deadline is exceeded:

payload := make([]byte, 9000)
for {
    conn.SetReadDeadline(time.Now().Add(50 * time.Millisecond))
    if _, err := conn.Read(payload); err != nil {
        if errors.Is(err, os.ErrDeadlineExceeded) {
            select {
            case <-done:
                return
            default:
                continue
            }
        }
        t.Logf("Read() error:\n%+v", err)
    }
    *packets++
}

With TCP, this aspect is simpler: after enabling the net.ipv4.tcp_migrate_req sysctl, the kernel automatically migrates waiting connections to a random socket in the same group. Alternatively, eBPF can also control this migration. Both features are available since Linux 5.14.
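For reference, the sysctl can be enabled at runtime like this:

$ sysctl -w net.ipv4.tcp_migrate_req=1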

Addendum

After implementing this strategy in Akvorado, all workers now drop packets! 😱

$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
> | sed -n s/akvorado_inlet_flow_input_udp_in_dropped//p
packets_total{listener="0.0.0.0:2055",worker="0"} 838673
packets_total{listener="0.0.0.0:2055",worker="1"} 843675
packets_total{listener="0.0.0.0:2055",worker="2"} 837922
packets_total{listener="0.0.0.0:2055",worker="3"} 841443
packets_total{listener="0.0.0.0:2055",worker="4"} 840668
packets_total{listener="0.0.0.0:2055",worker="5"} 850274
packets_total{listener="0.0.0.0:2055",worker="6"} 835488
packets_total{listener="0.0.0.0:2055",worker="7"} 834479

The root cause is the default limit of 32 records per Kafka batch. This limit is too low because the brokers incur a large overhead for each batch: they need to ensure it is correctly persisted before acknowledging it. Increasing the limit to 4096 records fixes the issue.

While load-balancing incoming flows with eBPF remains useful, it did not solve the main issue. At least the even distribution of dropped packets helped identify the real bottleneck. 😅


  1. The current version of the manual page is incomplete and does not cover the evolution introduced in Linux 4.19. There is a pending patch about this. ↩

  2. Rust is another option. However, the program we use is so trivial that it does not make sense to use Rust. ↩

  3. As bpf_get_prandom_u32() returns a pseudo-random 32-bit unsigned value, this method exhibits a very slight bias towards the first indexes. This is unlikely to be worth fixing. ↩

  4. Some examples include <linux/bpf.h> instead of "vmlinux.h". This makes your eBPF program dependent on the installed kernel headers. ↩

  5. listenAddr is initially set to 127.0.0.1:0 to allocate a random port. After the first iteration, it is updated with the allocated port. ↩

  6. This is the setupSockets() function in fixtures_test.go. ↩

  7. This is the setupEBPF() function in fixtures_test.go. ↩

  8. The complete code is in balancing_test.go. ↩

  9. The complete code is in graceful_test.go. ↩

  10. In C, we would poll() both the socket and a pipe used to signal shutdown. When the second condition triggers, we drain the socket by executing a series of non-blocking read() calls until we get EWOULDBLOCK. ↩

05 January, 2026 08:51AM by Vincent Bernat

Jonathan McDowell

Free Software Activities for 2025

Given we’ve entered a new year it’s time for my annual recap of my Free Software activities for the previous calendar year. For previous years see 2019, 2020, 2021, 2022, 2023 + 2024.

Conferences

My first conference of the year was FOSDEM. I’d submitted a talk proposal about system attestation in production environments for the attestation devroom, but they had a lot of good submissions and mine was a bit more “this is how we do it” rather than “here’s some neat Free Software that does it”. I’m still trying to work out how to make some of the bits we do more open, but the problem is a lot of the neat stuff is about taking internal knowledge about what should be running and making sure that’s the case, and what you end up with if you abstract that is a toolkit that still needs a lot of work to get something useful.

I had more luck at DebConf25, where I gave a talk (Don’t fear the TPM) trying to explain how TPMs could be useful in a Debian context. Naturally the comments section descended into a discussion about UEFI Secure Boot, which is a separate, if related, thing. DebConf also featured the usual catch up with fellow team members, hanging out with folk I hadn’t seen in ages, and generally feeling a bit more invigorated about Debian.

Other conferences I considered, but couldn’t justify, were All Systems Go! and the Linux Plumbers Conference. I’ve no doubt both would have had a bunch of interesting and relevant talks + discussions, but not enough this year.

I’m going to have to miss FOSDEM this year, due to travel later in the month, and I’m uncertain if I’m going to make DebConf (for a variety of reasons). That means I don’t have a Free Software conference planned for 2026. Ironically FOSSY moving away from Portland makes it a less appealing option (I have Portland friends it would be good to visit). Other than potential Debian MiniConfs, anything else European I should consider?

Debian

I continue to try and keep RetroArch in shape, with 1.22.2+dfsg-1 (and, shortly after, 1.22.2+dfsg-2 - git-buildpackage in trixie seems more strict about Build-Depends existing in the outside environment, and I keep forgetting I need Build-Depends-Arch and Build-Depends-Indep to be pretty much the same with a minimal Build-Depends that just has enough for the clean target) getting uploaded in December, and 1.20.0+dfsg-1, 1.20+dfsg-2 + 1.20+dfsg-3 all being uploaded earlier in the year. retroarch-assets had 1.20.0+dfsg-1 uploaded back in April. I need to find some time to get 1.22.0 packaged. libretro-snes9x got updated to 1.63+dfsg-1.

sdcc saw 4.5.0+dfsg-1, 4.5.0+dfsg-2, 4.5.0+dfsg-3 (I love major GCC upgrades) and 4.5.0-dfsg-4 uploads. There’s an outstanding bug around a LaTeX error building the manual, but this turns out to be a bug in the 2.5 RC for LyX. Huge credit to Tobias Quathamer for engaging with this, and Pavel Sanda + Jürgen Spitzmüller from the LyX upstream for figuring out the issue + a fix.

Pulseview saw 0.4.2-4 uploaded to fix issues with the GCC 15 + CMake upgrades. I should probably chase the sigrok upstream about new releases; I think there are a bunch of devices that have gained support in git without seeing a tagged release yet.

I did an Electronics Team upload for gputils 1.5.2-2 to fix compilation with GCC 15.

While I don’t do a lot with storage devices these days if I can help it I still pay a little bit of attention to sg3-utils. That resulted in 1.48-2 and 1.48-3 uploads in 2025.

libcli got a 1.10.7-3 upload to deal with the libcrypt-dev split out.

Finally I got more up-to-date versions of libtorrent (0.15.7-1) and rtorrent (also 0.15.7-1) uploaded to experimental. There’s a ppc64el build failure in libtorrent, but having asked on debian-powerpc this looks like a flaky test/code and I should probably go ahead and upload to unstable.

I sponsored some uploads for Michel Lind - the initial uploads of plymouth-theme-hot-dog, and the separated out pykdumpfile package.

Recognising the fact I wasn’t contributing in a useful fashion to the Data Protection Team I set about trying to resign in an orderly fashion - see Andreas’ call for volunteers that went out in the last week. Shout out to Enrico for pointing out in the past that we should gracefully step down from things we’re not actually managing to do, to avoid the perception it’s all fine and no one else needs to step up. Took me too long to act on it.

The Debian keyring team continues to operate smoothly, maintaining our monthly release cadence with a 3 month rotation ensuring all team members stay familiar with the process, and ensure their setups are still operational (especially important after Debian releases). I handled the 2025.03.23, 2025.06.24, 2025.06.27, 2025.09.18, 2025.12.08 + 2025.12.26 pushes.

Linux

TPM related fixes were the theme of my kernel contributions in 2025, all within a work context. Some were just cleanups, but several fixed real issues that were causing us issues. I’ve also tried to be more proactive about reviewing diffs in the TPM subsystem; it feels like a useful way to contribute, as well as making me more actively pay attention to what’s going on there.

Personal projects

I did some work on onak, my OpenPGP keyserver. That resulted in a 0.6.4 release, mainly driven by fixes for building with more recent CMake + GCC versions in Debian. I’ve got a set of changes that should add RFC9580 (v6) support, but there’s not a lot of test keys out there at present for making sure I’m handling things properly. Equally there’s a plan to remove Berkeley DB from Debian, which I’m completely down with, but that means I need a new primary backend. I’ve got a draft of LMDB support to replace that, but I need to go back and confirm I’ve got all the important bits implemented before publishing it and committing to a DB layout. I’d also like to add sqlite support as an option, but that needs some thought about trying to take proper advantage of its features, rather than just treating it as a key-value store.

(I know everyone likes to hate on OpenPGP these days, but I continue to be interested by the whole web-of-trust piece of it, which nothing else I’m aware of offers.)

That about wraps up 2025. Nothing particularly earth-shaking in there, more a case of continuing to tread water on the various things I’m involved in. I highly doubt 2026 will be much different, but I think that’s ok. I scratch my own itches, and if that helps out other folk too then that’s lovely, but it’s not the primary goal.

05 January, 2026 07:57AM

Russell Coker

Phone Charging Speeds With Debian/Trixie

One of the problems I encountered with the PinePhone Pro (PPP) when I tried using it as a daily driver [1] was the charge speed, both slow charging and a bad ratio of charge speed to discharge speed. I also tried using a One Plus 6 (OP6) which had a better charge speed and battery life but I never got VoLTE to work [2] and VoLTE is a requirement for use in Australia and an increasing number of other countries. In my tests with the Librem 5 from Purism I had similar issues with charge speed [3].

What I want to do is get an acceptable ratio of charge time to use time for a free software phone. I don’t necessarily object to a phone that can’t last an 8 hour day on a charge, but I can’t use a phone that needs to be on charge for 4 hours during the day. For this part I’m testing the charge speed and will test the discharge speed when I have solved some issues with excessive CPU use.

I tested with a cheap USB power monitoring device inline between the power cable and the phone. The device has no way of exporting data, so I just watched it and, when the numbers fluctuated, estimated the average. I only give the results to two significant digits, which is about all the accuracy that is available; as I copied the numbers separately, the V*A might not exactly equal the W. I idly considered rounding off voltages to the nearest volt and currents to the half amp, but the way the PC USB ports drop voltage at higher currents is interesting.

This post should be useful for people who want to try out FOSS phones but don’t want to buy the range of phones and chargers that I have bought.

Phones Tested

I have seen claims about improvements in charging speed on the Librem 5 with recent updates, so I decided to compare a number of phones running Debian/Trixie as well as some Android phones. I’m comparing an old Samsung phone (which I tried running Droidian on but which is now back on Android) and a couple of Pixel phones against the three phones I currently have running Debian, for charging speed.

Chargers Tested

HP Z640

The Librem 5 had problems with charging on a port on the HP ML110 Gen9 I was using as a workstation. I have sold the ML110 and can’t repeat that exact test, but I tested on the HP z640 that I use now. The z640 is a much better workstation (quieter and with better support for audio and other desktop features) and, unlike the ML110, is actually sold as a workstation.

The z640 documentation says that of the front USB ports the top one can do “fast charge (up to 1.5A)” with “USB Battery Charging Specification 1.2”. The only phone that would draw 1.5A on that port was the Librem 5 but the computer would only supply 4.4V at that current which is poor. For every phone I tested the bottom port on the front (which apparently doesn’t have USB-BC or USB-PD) charged at least as fast as the top port and every phone other than the OP6 charged faster on the bottom port. The Librem 5 also had the fastest charge rate on the bottom port. So the rumours about the Librem 5 being updated to address the charge speed on PC ports seem to be correct.

The Wikipedia page about USB Hardware says that the only way to get more than 1.5A from a USB port while operating within specifications is via USB-PD so as USB 3.0 ports the bottom 3 ports should be limited to 5V at 0.9A for 4.5W. The Librem 5 takes 2.0A and the voltage drops to 4.6V so that gives 9.2W. This shows that the z640 doesn’t correctly limit power output and the Librem 5 will also take considerably more power than the specs allow. It would be really interesting to get a powerful PSU and see how much power a Librem 5 will take without negotiating USB-PD and it would also be interesting to see what happens when you short circuit a USB port in a HP z640. But I recommend not doing such tests on hardware you plan to keep using!

Of the phones I tested the only one that was within specifications on the bottom port of the z640 was the OP6. I think that is more about it just charging slowly in every test than conforming to specs.

Monitor

The next test target is my 5120*2160 Kogan monitor with a USB-C port [4]. This worked quite well and apart from being a few percent slower on the PPP it outperformed the PC ports for every device due to using USB-PD (the only way to get more than 5V) and due to just having a more powerful PSU that doesn’t have a voltage drop when more than 1A is drawn.

Ali Charger

The Ali Charger, a device I bought from AliExpress, is a 240W GaN charger supporting multiple USB-PD devices. I tested with the top USB-C port, which can supply 100W to laptops.

On the Ali charger the Librem 5’s charging cuts out repeatedly and it doesn’t charge properly. It’s also the only charger for which the Librem 5 requests a higher voltage than 5V, so it seems that the Librem 5 has some issues with USB-PD. It would be interesting to know why this problem happens, but I expect that a USB signal debugger is needed to find out. On AliExpress, USB 2.0 sniffers go for about $50 each, and with a quick search I couldn’t see a USB 3.x or USB-C sniffer. So I’m not going to spend my own money on a sniffer, but if anyone in Melbourne, Australia owns a sniffer and wants to visit me and try it out then let me know. I’ll also bring it to Everything Open 2026.

Generally the Ali charger was about the best charger from my collection apart from the case of the Librem 5.

Dell Dock

I got a number of free Dell WD15 (aka K17A) USB-C powered docks as they are obsolete. They have VGA ports among other connections, and the HDMI and DisplayPort ports don’t support resolutions higher than FullHD if both are in use, or 4K if a single port is in use. The resolutions aren’t directly relevant to charging, but they do indicate the age of the design.

The Dell dock seems to not support any voltages other than 5V for phones and 19V (20V requested) for laptops. Certainly not the 9V requested by the Pixel 7 Pro and Pixel 8 phones. I wonder if not supporting most fast-charging speeds for phones was part of the reason why other people didn’t want these docks and I got some for free. I hope that the newer Dell docks support 9V; a phone running Samsung Dex will display 4K output on a Dell dock and can productively use a keyboard and mouse. Getting functionality equivalent to Dex working properly on Debian phones is something I’m interested in.

Battery

The “Battery” I tested with is a Chinese battery for charging phones and laptops. It’s allegedly capable of 67W USB-PD supply, but so far all I’ve seen it supply is 20V 2.5A for my laptop. I bought the 67W battery just in case I need it for other laptops in future; the Thinkpad X1 Carbon I’m using now will charge from a 30W battery.

There seems to be an overall trend of the most shonky devices giving the best charging speeds. Dell and HP make quality gear, although my tests show that some HP ports exceed specs. Kogan doesn’t make monitors; they just put their brand on something cheap. Buying one of the cheapest chargers from AliExpress and one of the cheaper batteries from China, I didn’t expect the highest quality, and I am slightly relieved to have done enough tests with both of those that a fire now seems extremely unlikely. But it seems that the battery is one of the fastest charging devices I own, and with the exception of the Librem 5 (which charges slowly on all ports and unreliably on several) the Ali charger is also one of the fastest. The Kogan monitor isn’t far behind.

Conclusion

Voltage and Age

The Samsung Galaxy Note 9 was released in 2018 as was the OP6. The PPP was first released in 2022 and the Librem 5 was first released in 2020, but I think they are both at a similar technology level to the Note 9 and OP6 as the companies that specialise in phones have a pipeline for bringing new features to market.

The Pixel phones are newer and support USB-PD voltage selection, while the other phones either don’t support USB-PD or support it but only want 5V. The exception is the Librem 5, which requests a higher voltage but draws a low current and repeatedly disconnects.

Idle Power

One of the major problems I had in the past, which prevented me from using a Debian phone as my daily driver, is the ratio of idle power use to charging power. Now that the phones seem to charge faster, if I can get the idle power use under control then it will be usable.

Currently the Librem 5 running Trixie is using 6% CPU time (24% of a core) while idle with the screen off (but with “Caffeine” mode enabled, so no deep sleep). On the PPP the CPU use varies between about 2% and 20% (12% to 120% of one core), mainly from plasmashell and kwin_wayland. The OP6 has idle CPU use a bit under 1% CPU time, which means a bit under 8% of one core.

The Librem 5 and PPP seem to have configuration issues with KDE Mobile and Pipewire that result in needless CPU use. With those issues addressed I might be able to make a Librem 5 or PPP a usable phone if I have a battery to charge it.

The OP6 is an interesting point of comparison as a Debian phone but is not a viable option as a daily driver due to problems with VoLTE and also some instability – it sometimes crashes or drops off Wifi.

The Librem 5 charges at 9.2W from a PC that doesn’t obey specs and 10W from a battery. That’s a reasonable charge rate, and the fact that it can request 12V (unsuccessfully) opens the possibility of higher charge rates in future. That could allow a reasonable ratio of charge time to use time.

The PPP has lower charging speeds than the Librem 5 but works more consistently, as there was no charger I found that wouldn’t work well with it. This is useful for the common case of charging from a random device in the office. But the fact that the Librem 5 takes 10W from the battery while the PPP only takes 6.3W would be an issue if using the phone while charging.

Now I know the charge rates for different scenarios I can work on getting the phones to use significantly less power than that on average.

Specifics for a Usable Phone

The 67W battery or something equivalent is something I think I will always need to have around when using a PPP or Librem 5 as a daily driver.

The ability to charge fast while at a desk is also an important criterion. The charge speed from my home PC is good in that regard, and the charge speed from my monitor is even better. Getting something equivalent at a desk in an office I work in is a possibility.

Improving the Debian distribution for phones is necessary. That’s something I plan to work on although the code is complex and in many cases I’ll have to just file upstream bug reports.

I have also ordered a FuriLabs FLX1s [5] which I believe will be better in some ways. I will blog about it when it arrives.

| Phone | Top z640 | Bottom z640 | Monitor | Ali Charger | Dell Dock | Battery | Best | Worst |
|---|---|---|---|---|---|---|---|---|
| Note9 | 4.8V 1.0A 5.2W | 4.8V 1.6A 7.5W | 4.9V 2.0A 9.5W | 5.1V 1.9A 9.7W | 4.8V 2.1A 10W | 5.1V 2.1A 10W | 5.1V 2.1A 10W | 4.8V 1.0A 5.2W |
| Pixel 7 pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W | 9.1V 1.3A 12W | 9.1V 1.2A 11W | 4.9V 1.8A 8.7W | 9.0V 1.3A 12W | 9.1V 1.3A 12W | 4.9V 0.80A 4.2W |
| Pixel 8 | 4.7V 1.2A 5.4W | 4.7V 1.5A 7.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W | 9.1V 2.7A 24W | 4.7V 1.2A 5.4W |
| PPP | 4.7V 1.2A 6.0W | 4.8V 1.3A 6.8W | 4.9V 1.4A 6.6W | 5.0V 1.2A 5.8W | 4.9V 1.4A 5.9W | 5.1V 1.2A 6.3W | 4.8V 1.3A 6.8W | 5.0V 1.2A 5.8W |
| Librem 5 | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W |
| OnePlus6 | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W |
| Best | 4.4V 1.5A 6.7W | 4.6V 2.0A 9.2W | 8.9V 2.1A 19W | 9.1V 2.7A 24W | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W | | |

05 January, 2026 07:21AM by etbe

January 03, 2026

Joerg Jaspert

AI Shit, go away; iocaine to the rescue

As a lot of people do, I have some content that is reachable using web browsers. There is the password manager Vaultwarden, an instance of Immich, ForgeJo for some personal git repos, my blog and some other random pages here and there.

All of this had never been a problem; running a webserver is a relatively simple task, no matter if you use apache2, nginx or any of the other possibilities. And the things mentioned above bring their own daemons to serve the users.

AI crap

And then some idiot somewhere had the idea to ignore every law, every copyright and every normal behaviour and run some shit AI bot. And more idiots followed. And now we have more AI bots than humans generating traffic.

And those AI shit crawlers do not respect any limits. robots.txt, slow servers, anything to keep your meager little site up and alive? Them idiots throw more resources onto them to steal content. No sense at all.

iocaine to the rescue

So them AI bros want to ignore everything and just fetch the whole internet? Without any consideration of whether that’s even wanted? Or legal? There are people who dislike this. I am one of them, but some got annoyed enough to develop tools to fight the AI craziness. One of those tools is iocaine - it says about itself that it is The deadliest poison known to AI.

Feed AI bots sh*t

So you want content? You do not accept any Go away? Then here is content. It is crap, but apparently you don’t care. So have fun.

What iocaine does is (to cite their webpage) “not made for making the Crawlers go away. It is an aggressive defense mechanism that tries its best to take the brunt of the assault, serve them garbage, and keep them off of upstream resources”.

That is, instead of the expensive webapp burning a lot of resources that are basically wasted for nothing, iocaine generates a small static page (with some links back to itself, so the crawler shit stays happy), which takes a hell of a lot less resources than any full-blown app.

iocaine setup

The website has documentation at https://iocaine.madhouse-project.org/documentation/; it is not hard to set up. Still, I had to adjust some things for my setup, as I use Caddy Docker Proxy (https://github.com/lucaslorentz/caddy-docker-proxy) nowadays and wanted to keep the config within the docker setup, that is, within the labels.

Caddy container

So my container setup for the caddy itself contains the following extra lines:

    labels:
      caddy_0.email: email@example.com
      caddy_1: (iocaine)
      caddy_1.0_@read: method GET HEAD
      caddy_1.1_reverse_proxy: "@read iocaine:42069"
      "caddy_1.1_reverse_proxy.@fallback": "status 421"
      caddy_1.1_reverse_proxy.handle_response: "@fallback"

This will be translated to the following Caddy config snippet:

(iocaine) {
        @read method GET HEAD
        reverse_proxy @read iocaine:42069 {
                @fallback status 421
                handle_response @fallback
        }
}
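For completeness, the iocaine daemon itself is just another container on the same Docker network, reachable by Caddy as iocaine:42069. A minimal sketch of its compose entry could look like this (the image name, config path and network name are my assumptions, check the iocaine documentation for the real ones):

  iocaine:
    image: git.madhouse-project.org/iocaine/iocaine:latest
    restart: unless-stopped
    volumes:
      - ./iocaine/config.toml:/etc/iocaine/config.toml
    networks:
      - caddy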

Any container that should be protected by iocaine

All the containers that are “behind” the Caddy reverse proxy can now get protected by iocaine with just one more line in their docker-compose.yaml. So now we have

   labels:
      caddy: service.example.com
      caddy.reverse_proxy: "{{upstreams 3000}}"
      caddy.import: iocaine

which translates to

service.example.com {
        import iocaine
        reverse_proxy 172.18.0.6:3000
}

So with one simple extra label for the docker container I have iocaine activated.

Result? ByeBye (most) AI Bots

Looking at the services that got hammered most from those crap bots - deploying this iocaine container and telling Caddy about it solved the problem for me. 98% of the requests from the bots now go to iocaine and no longer hog resources in the actual services.

I wish it wasn’t necessary to run such tools. But as long as we have shitheads doing the AI hype there is no hope. I wish they would all end up in jail for all the various stealing they do. And if someone with a little more brain left would set things up sensibly, then the AI thing could maybe turn into something good and useful.

But currently it is all crap.

03 January, 2026 01:23PM

Benjamin Mako Hill

Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects. This particular post is closely based on a previously published post by Nate TeBlunthuis from the Community Data Science Blog.

Many online platforms are adopting AI and machine learning as a tool to maintain order and high-quality information in the face of massive influxes of user-generated content. Of course, AI algorithms can be inaccurate, biased, or unfair. How do signals from AI predictions shape the fairness of online content moderation? How can we measure an algorithmic flagging system’s effects?

In our paper published at CSCW, Nate TeBlunthuis, together with myself and Aaron Halfaker, analyzed the RCFilters system: an add-on to Wikipedia that highlights and filters edits that a machine learning algorithm called ORES identifies as likely to be damaging to Wikipedia. This system has been deployed on large Wikipedia language editions and is similar to other algorithmic flagging systems that are becoming increasingly widespread. Our work measures the causal effect of being flagged in the RCFilters user interface.

Screenshot of Wikipedia edit metadata on Special:RecentChanges with RCFilters enabled. Highlighted edits with a colored circle to the left side of other metadata are flagged by ORES. Different circle and highlight colors (white, yellow, orange, and red in the figure) correspond to different levels of confidence that the edit is damaging. RCFilters does not specifically flag edits by new accounts or unregistered editors, but does support filtering changes by editor types.

Our work takes advantage of the fact that RCFilters, like many algorithmic flagging systems, creates discontinuities in the relationship between the probability that a moderator should take action and whether a moderator actually does. This happens because the output of machine learning systems like ORES is typically a continuous score (in RCFilters, an estimated probability that a Wikipedia edit is damaging), while the flags (in RCFilters, the yellow, orange, or red highlights) are either on or off and are triggered when the score crosses some arbitrary threshold. As a result, edits slightly above the threshold are both more visible to moderators and appear more likely to be damaging than edits slightly below. Even though edits on either side of the threshold have virtually the same likelihood of truly being damaging, the flagged edits are substantially more likely to be reverted. This fact lets us use a method called regression discontinuity to make causal estimates of the effect of being flagged in RCFilters.
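Schematically (my notation, not the paper's), the regression discontinuity estimand is the jump in the reversion probability at the flagging threshold c:

\tau = \lim_{s \to c^{+}} \Pr(\mathrm{reverted} \mid \mathrm{score} = s) - \lim_{s \to c^{-}} \Pr(\mathrm{reverted} \mid \mathrm{score} = s)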

Charts showing the probability that an edit will be reverted as a function of ORES scores in the neighborhood of the discontinuous threshold that triggers the RCfilters flag. The jump in the increase in reversion chances is larger for registered editors compared to unregistered editors at both thresholds.

To understand how this system may affect the fairness of Wikipedia moderation, we estimate the effects of flagging on edits by different groups of editors. Comparing the magnitude of these estimates lets us measure how flagging relates to several different definitions of fairness. Surprisingly, we found evidence that these flags improved fairness for categories of editors that have been widely perceived as troublesome—particularly unregistered (anonymous) editors. This occurred because flagging has a much stronger effect on edits by registered editors than on edits by unregistered editors.

We believe that our results are driven by the fact that algorithmic flags are especially helpful for finding damage that can’t be easily detected otherwise. Wikipedia moderators can see the editor’s registration status in the recent changes, watchlists, and edit history. Because unregistered editors are often troublesome, Wikipedia moderators’ attention is often focused on their contributions, with or without algorithmic flags. Algorithmic flags make damage by registered editors (in addition to unregistered editors) much more detectable to moderators and so help moderators focus on damage overall, not just damage by suspicious editors. As a result, the algorithmic flagging system decreases the bias that moderators have against unregistered editors.

This finding is particularly surprising because the ORES algorithm we analyzed was itself demonstrably biased against unregistered editors (i.e., the algorithm tended to greatly overestimate the probability that edits by these editors were damaging). Despite the fact that the algorithms were biased, their introduction could still lead to less biased outcomes overall.

Our work shows that although it is important to design predictive algorithms to avoid such biases, it is equally important to study fairness at the level of the broader sociotechnical system. Since we first published a preprint of our paper, a follow-up piece by Leijie Wang and Haiyi Zhu replicated much of our work and showed that differences between different Wikipedia communities may be another important factor driving the effect of the system. Overall, this work suggests that social signals and social context can interact with algorithmic signals, and together these can influence behavior in important and unexpected ways.


The full citation for the paper is: TeBlunthuis, Nathan, Benjamin Mako Hill, and Aaron Halfaker. 2021. “Effects of Algorithmic Flagging on Fairness: Quasi-Experimental Evidence from Wikipedia.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW): 56:1-56:27. https://doi.org/10.1145/3449130.

We have also released replication materials for the paper, including all the data and code used to conduct the analysis and compile the paper itself.

03 January, 2026 12:34PM by Benjamin Mako Hill

Russ Allbery

Review: Challenges of the Deeps

Review: Challenges of the Deeps, by Ryk E. Spoor

Series: Arenaverse #3
Publisher: Baen
Copyright: March 2017
ISBN: 1-62579-564-5
Format: Kindle
Pages: 438

Challenges of the Deeps is the third book in the throwback space opera Arenaverse series. It is a direct sequel to Spheres of Influence, but Spoor provides a substantial recap of the previous volumes for those who did not read the series in close succession (thank you!).

Ariane has stabilized humanity's position in the Arena with yet another improbable victory. (If this is a spoiler for previous volumes, so was telling you the genre of the book.) Now is a good opportunity to fulfill the promise humanity made to their ally Orphan: accompaniment on a journey into the uncharted deeps of the Arena for reasons that Orphan refuses to explain in advance. Her experienced crew provide multiple options to serve as acting Leader of Humanity until she gets back. What can go wrong?

The conceit of this series is that as soon as a species achieves warp drive technology, their ships are instead transported into the vast extradimensional structure of the Arena where a godlike entity controls the laws of nature and enforces a formal conflict resolution process that looks alternatingly like a sporting event, a dueling code, and technology-capped total war. Each inhabitable system in the real universe seems to correspond to an Arena sphere, but the space between them is breathable atmosphere filled with often-massive storms.

In other words, this is an airship adventure as written by E.E. "Doc" Smith. Sort of. There is an adventure, and there are a lot of airships (although they fight mostly like spaceships), but much of the action involves tense mental and physical sparring with a previously unknown Arena power with unclear motives.

My general experience with this series is that I find the Arena concept fascinating and want to read more about it, Spoor finds his much-less-original Hyperion Project in the backstory of the characters more fascinating and wants to write about that, and we reach a sort of indirect, grumbling (on my part) truce where I eagerly wait for more revelations about the Arena and roll my eyes at the Hyperion stuff. Talking about Hyperion in detail is probably a spoiler for at least the first book, but I will say that it's an excuse to embed versions of literary characters into the story and works about as well as most such excuses (not very). The characters in question are an E.E. "Doc" Smith mash-up, a Monkey King mash-up, and a number of other characters that are obviously references to something but for whom I lack enough hints to place (which is frustrating).

Thankfully we get far less human politics and a decent amount of Arena world-building in this installment. Hyperion plays a role, but mostly as foreshadowing for the next volume and the cause of a surprising interaction with Arena rules. One of the interesting wrinkles of this series is that humanity has an odd edge against the other civilizations in part because we're borderline insane sociopaths from the perspective of the established powers. That's an old science fiction trope, but I prefer it to the Campbell-style belief in inherent human superiority.

Old science fiction tropes are what you need to be in the mood for to enjoy this series. This is an unapologetic and intentional throwback to early pulp: individuals who can be trusted with the entire future of humanity because they're just that moral, super-science, psychic warfare, and even coruscating beams that would make E.E. "Doc" Smith proud. It's an occasionally glorious but mostly silly pile of technobabble, but Spoor takes advantage of the weird, constructed nature of the Arena to provide more complex rules than competitive superlatives.

The trick is that while this is certainly science fiction pulp, it's also a sort of isekai novel. There's a lot of anime and manga influence just beneath the surface. I'm not sure why it never occurred to me before reading this series that melodramatic anime and old SF pulps have substantial aesthetic overlap, but of course they do. I loved the Star Blazers translated anime that I watched as a kid precisely because it had the sort of dramatic set pieces that make the Lensman novels so much fun.

There is a bit too much Wu Kong in this book for me (although the character is growing on me a little), and some of the maneuvering around the mysterious new Arena actor drags on longer than was ideal, but the climax is great stuff if you're in the mood for dramatic pulp adventure. The politics do not bear close examination and the writing is serviceable at best, but something about this series is just fun. I liked this book much better than Spheres of Influence, although I wish Spoor would stop being so coy about the nature of the Arena and give us more substantial revelations. I'm also now tempted to re-read Lensman, which is probably a horrible idea. (Spoor leaves the sexism out of his modern pulp.)

If you got through Spheres of Influence with your curiosity about the Arena intact, consider this one when you're in the mood for modern pulp, although don't expect any huge revelations. It's not the best-written book, but it sits squarely in the center of a genre and mood that's otherwise a bit hard to find.

Followed by the Kickstarter-funded Shadows of Hyperion, which sadly looks like it's going to concentrate on the Hyperion Project again. I will probably pick that up... eventually.

Rating: 6 out of 10

03 January, 2026 05:23AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

2025 — A Musical Retrospective

2026 already! The winter weather here has really been beautiful and I always enjoy this time of year. Writing this yearly musical retrospective has now become a beloved tradition of mine[1] and I enjoy retracing the year's various events through albums I listened to and concerts I went to.

Albums

In 2025, I added 141 new albums to my collection, around 60% more than last year's haul. I think this might have been too much? I feel like I didn't have time to properly enjoy all of them and as such, I decided to slow down my acquisition spree sometime in early December, around the time I normally do the complete opposite.

This year again, I bought the vast majority of my music on Bandcamp. Most of the other albums I bought as CDs and ripped them.

Concerts

In 2025, I went to the following 25 (!!) concerts:

  • January 17th: Uzu, Young Blades, She came to quit, Fever Visions
  • February 1st: Over the Hill, Jail, Mortier, Ain't Right
  • February 7th: Béton Armé, Mulchulation II, Ooz
  • February 15th: The Prowlers, Ultra Razzia, Sistema de Muerte, Trauma Bond
  • February 28th: Francbâtards
  • March 28th: Conflit Majeur, to Even Exist, Crachat
  • April 12th: Jetsam, Mortier, NIIVI, Canette
  • April 26th-27th (Montreal Oi! Fest 2025): The Buzzers, Bad Terms, Sons of Pride, Liberty and Justice, Flafoot 56, The Beltones, Mortier, Street Code, The Stress, Alternate Action
  • May 1st: Bauxite, Atomic threat, the 351's
  • May 30th: Uzu, Tenaz, Extraña Humana, Sistema de muerte
  • June 7th: Ordures Ioniques, Tulaviok, Fucking Raymonds, Voyou
  • June 18th: Tiken Jah Fakoly
  • June 21st: Saïan Supa Celebration
  • June 26th: Taxi Girls, Death Proof, Laura Krieg
  • July 4th: Frente Cumbiero
  • July 12th: Montreal's Big Fiesta DJ Set
  • August 16th: Guerilla Poubelle
  • September 11th: No Suicide Act, Mortier
  • September 20th: Hors Contrôle, Union Thugs, Barricade Mentale
  • October 20th: Ezra Furman, The Golden Dregs
  • October 24th: Overbass, Hommage à Bérurier Noir, Self Control, Vermin Kaos
  • November 6th: Béton Armé, Faze, Slash Need, Chain Block
  • November 28th (Blood Moon Ritual 2025): Bhatt, Channeler, Pyrocene Death Cult, Masse d'Armes
  • December 13th (Stomp Records' 30th Anniversary Bash): The Planet Smashers, The Flatliners, Wine Lips, The Anti-Queens, Crash ton rock

Although I haven't touched metalfinder's code in a good while, my instance still works very well and I get the occasional match when a big-name artist in my collection comes to town. Most of the venues that advertise on Bandsintown are tied to Ticketmaster though, which means most underground artists (i.e. most of the music I listen to) end up playing elsewhere.

As such, shout out again to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there.

See you all in 2026!


  1. see the 2022, 2023 and 2024 entries 

03 January, 2026 05:00AM by Louis-Philippe Véronneau

January 02, 2026

hackergotchi for Joachim Breitner

Joachim Breitner

Seemingly impossible programs in Lean

In 2007, Martin Escardo wrote an often-read blog post about “Seemingly impossible functional programs”. One such seemingly impossible function is find, which takes a predicate on infinite sequences of bits, and returns an infinite sequence for which that predicate holds (unless the predicate is just always false, in which case it returns some arbitrary sequence).

Inspired by conversations with and experiments by Massin Guerdi at the dinner of LeaningIn 2025 in Berlin (yes, this blog post has been in my pipeline for far too long), I wanted to play around with these concepts in Lean.

Let’s represent infinite sequences of bits as functions from Nat to Bit, and give them a nice name, and some basic functionality, including a binary operator for consing an element to the front:

import Mathlib.Data.Nat.Find

abbrev Bit := Bool

def Cantor : Type := Nat → Bit

def Cantor.head (a : Cantor) : Bit := a 0

def Cantor.tail (a : Cantor) : Cantor := fun i => a (i + 1)

@[simp, grind] def Cantor.cons (x : Bit) (a : Cantor) : Cantor
  | 0 => x
  | i+1 => a i

infix:60 " # " => Cantor.cons

With this in place, we can write Escardo’s function in Lean. His blog post discusses a few variants; I’ll focus on just one of them:

mutual
  partial def forsome (p : Cantor → Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor → Bool) : Cantor :=
    have b := forsome (fun a => p (true # a))
    (b # find (fun a => p (b # a)))
end

We define find together with forsome, which checks if the predicate p holds for any sequence. Using that, find sets the first element of the result to true if there exists a sequence starting with true, else to false, and then tries to find the rest of the sequence.

It is a bit of a brain twister that this code works, but it does:

def fifth_false : Cantor → Bool := fun a => not (a 5)

/-- info: [true, true, true, true, true, false, true, true, true, true] -/
#guard_msgs in
#eval List.ofFn (fun (i : Fin 10) => find fifth_false i)

Of course, in Lean we don’t just want to define these functions, but we want to prove that they do what we expect them to do.

Above we defined them as partial functions, even though we hope that they are not actually partial: The partial keyword means that we don’t have to do a termination proof, but also that we cannot prove anything about these functions.

So can we convince Lean that these functions are total after all? We can, but it’s a bit of a puzzle, and we have to adjust the definitions.

First of all, these “seemingly impossible functions” are only possible because we assume that the predicate we pass to it, p, is computable and total. This is where the whole magic comes from, and I recommend reading Escardo’s blog posts and papers for more on this fascinating topic. In particular, you will learn that a predicate on Cantor that is computable and total necessarily only looks at some initial fragment of the sequence. The length of that prefix is called the “modulus”. So if we hope to prove termination of find and forsome, we have to restrict their argument p to only such computable predicates.

To that end I introduce HasModulus and the subtype of predicates on Cantor that have such a modulus:

-- Extensional (!) modulus of uniform continuity
def HasModulus (p : Cantor → α) := ∃ n, ∀ a b : Cantor, (∀ i < n, a i = b i) → p a = p b

@[ext] structure CantorPred where
  pred : Cantor → Bool
  hasModulus : HasModulus pred
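
As a quick sanity check (my own example, not from Escardo’s development): the fifth_false predicate above only inspects index 5, so 6 works as a witness for HasModulus:

example : HasModulus fifth_false := by
  -- any two sequences that agree on the first 6 indices agree at index 5
  refine ⟨6, fun a b h => ?_⟩
  simp [fifth_false, h 5 (by omega)]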

The modulus of such a predicate is now the least prefix length that determines the predicate. In particular, if the modulus is zero, the predicate is constant:

namespace CantorPred

variable (p : CantorPred)

noncomputable def modulus : Nat :=
  open Classical in Nat.find p.hasModulus

theorem eq_of_modulus : ∀a b : Cantor, (∀ i < p.modulus, a i = b i) → p a = p b := by
  open Classical in
  unfold modulus
  exact Nat.find_spec p.hasModulus

theorem eq_of_modulus_eq_0 (hm : p.modulus = 0) : ∀ a b, p a = p b := by
  intro a b
  apply p.eq_of_modulus
  simp [hm]

Because we want to work with CantorPred and not Cantor → Bool I have to define some operations on that new type; in particular the “cons element before predicate” operation that we saw above in find:

def comp_cons (b : Bit) : CantorPred where
  pred := fun a => p (b # a)
  hasModulus := by
    obtain ⟨n, h_n⟩ := p.hasModulus
    cases n with
    | zero => exists 0; grind
    | succ m =>
      exists m
      intro a b heq
      simp
      apply h_n
      intro i hi
      cases i
      · rfl
      · grind

@[simp, grind =] theorem comp_cons_pred (x : Bit) (a : Cantor) :
  (p.comp_cons x) a = p (x # a) := rfl

For this operation we know that the modulus decreases (if it wasn’t already zero):

theorem comp_cons_modulus (x : Bit) :
    (p.comp_cons x).modulus ≤ p.modulus - 1 := by
  open Classical in
  apply Nat.find_le
  intro a b hab
  apply p.eq_of_modulus
  cases hh : p.modulus
  · simp
  · intro i hi
    cases i
    · grind
    · grind
grind_pattern comp_cons_modulus => (p.comp_cons x).modulus

We can rewrite the find function above to use these operations:

mutual
  partial def forsome (p : CantorPred) : Bool := p (find p)

  partial def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
end

I have also eta-expanded the Cantor function returned by find; there is now a fun i => … i around the body. We’ll shortly see why that is needed.

Now we have everything in place to attempt a termination proof. Before we do that proof, we could step back and try to come up with an informal termination argument.

  • The recursive call from forsome to find doesn’t decrease any argument at all. This is ok if all calls from find to forsome are decreasing.

  • The recursive call from find to find decreases the index i as the recursive call is behind the Cantor.cons operation that shifts the index. Good.

  • The recursive call from find to forsome decreases the modulus of the argument p, if it wasn’t already zero.

    But if it was zero, it does not decrease it! In that case, however, the call from forsome to find doesn’t actually need to call find, because then p doesn’t look at its argument.

We can express all this reasoning as a termination measure in the form of a lexicographic triple. The 0 and 1 in the middle component mean that for zero modulus, we can call forsome from find “for free”.

mutual
  def forsome (p : CantorPred) : Bool := p (find p)
  termination_by (p.modulus, if p.modulus = 0 then 0 else 1, 0)
  decreasing_by grind

  def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
  termination_by i => (p.modulus, if p.modulus = 0 then 1 else 0, i)
  decreasing_by all_goals grind
end

The termination proof doesn’t go through just yet: Lean is not able to see that (_ # p) i will call p with i - 1, and it does not see that p (find p) only uses find p if the modulus of p is non-zero. We can use the wf_preprocess feature to tell it about that:

The following theorem replaces a call to p f, where p is a function parameter, with the slightly more complex but provably equivalent expression on the right, where the call to f is now in the else branch of an if-then-else and thus has ¬p.modulus = 0 in scope:

@[wf_preprocess]
theorem coe_wf (p : CantorPred) :
    (wfParam p) f = p (if _ : p.modulus = 0 then fun _ => false else f) := by
  split
  next h => apply p.eq_of_modulus_eq_0 h
  next => rfl

And similarly we replace (_ # p) i with a variant that extends the context with information on how p is called:

def cantor_cons' (x : Bit) (i : Nat) (a : ∀ j, j + 1 = i → Bit) : Bit :=
  match i with
  | 0 => x
  | j + 1 => a j (by grind)

@[wf_preprocess] theorem cantor_cons_congr (b : Bit) (a : Cantor) (i : Nat) :
  (b # a) i = cantor_cons' b i (fun j _ => a j) := by cases i <;> rfl

After these declarations, the above definition of forsome and find goes through!

It remains to now prove that they do what they should, by a simple induction on the modulus of p:

@[simp, grind =] theorem tail_cons_eq (a : Cantor) : (x # a).tail = a := by
  funext i; simp [Cantor.tail, Cantor.cons]

@[simp, grind =] theorem head_cons_tail_eq (a : Cantor) : a.head # a.tail = a := by
  funext i; cases i <;> rfl

theorem find_correct (p : CantorPred) (h_exists : ∃ a, p a) : p (find p) := by
  by_cases h0 : p.modulus = 0
  · obtain ⟨a, h_a⟩ := h_exists
    rw [← h_a]
    apply p.eq_of_modulus_eq_0 h0
  · rw [find.eq_unfold, forsome.eq_unfold]
    dsimp -zeta
    extract_lets b
    change p (_ # _)
    by_cases htrue : ∃ a, p (true # a)
    next =>
      have := find_correct (p.comp_cons true) htrue
      grind
    next =>
      have : b = false := by grind
      clear_value b; subst b
      have hfalse : ∃ a, p (false # a) := by
        obtain ⟨a, h_a⟩ := h_exists
        cases h : a.head
        · exists Cantor.tail a
          grind
        · exfalso
          apply htrue
          exists Cantor.tail a
          grind
      clear h_exists
      exact find_correct (p.comp_cons false) hfalse
termination_by p.modulus
decreasing_by all_goals grind

theorem forsome_correct (p : CantorPred) :
    forsome p ↔ (∃ a, p a) where
  mp hfind := by unfold forsome at hfind; exists find p
  mpr hex := by unfold forsome; exact find_correct p hex

This is pretty nice! However, there is more to do. For example, Escardo has a “massively faster” variant of find that we can implement as a partial function in Lean:

def findBit (p : Bit → Bool) : Bit :=
  if p false then false else true

def branch (x : Bit) (l r : Cantor) : Cantor :=
  fun n =>
    if n = 0      then x
    else if 2 ∣ n then r ((n - 2) / 2)
                  else l ((n - 1) / 2)

mutual
  partial def forsome (p : Cantor -> Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor -> Bool) : Cantor :=
    let x := findBit (fun x => forsome (fun l => forsome (fun r => p (branch x l r))))
    let l := find (fun l => forsome (fun r => p (branch x l r)))
    let r := find (fun r => p (branch x l r))
    branch x l r
end

But can we get this past Lean’s termination checker? In order to prove that the modulus of p is decreasing, we’d have to know that, for example, find (fun r => p (branch x l r)) is behaving nicely. Unfortunately, it is rather hard to do a termination proof for a function that relies on the behaviour of the function itself.

So I’ll leave this open as a future exercise.

I have dumped the code for this post at https://github.com/nomeata/lean-cantor.

02 January, 2026 02:30PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Rewriting Git merge history, part 1

I remember that when Git was new and hip (around 2005), one of the supposed advantages was that “merging is so great!”. Well, to be honest, the competition at the time (mostly CVS and Subversion) wasn't fantastic, so I guess it was a huge improvement, but it's still… problematic. And this is even more visible when trying to rewrite history.

The case in question was that I needed to bring Stockfish's cluster (MPI) branch up-to-date with master, which nobody had done for a year and a half because there had been a number of sort-of tricky internal refactorings that caused merge conflicts. I fairly quickly realized that just doing “git merge master” would create a huge mess of unrelated conflicts that would be impossible to review and bisect, so I settled on a different strategy: Take one conflict at a time.

So I basically merged up as far as I could without any conflicts (essentially by bisecting), noted that as a merge commit, then merged one conflicting commit, noted that as another merge (with commit notes if the merge was nontrivial, e.g., if it required new code or a new approach), and then repeat. Notably, Git doesn't seem to have any kind of native support for this flow; I did it manually at first, and then only later realized that there were so many segments (20+) that I should write a script to get everything consistent. Notably, this approach means that a merge commit can have significant new code that was not in either parent. (Git does support this kind of flow, because a commit is just a list of zero or more parent commits and then the contents of the entire tree; git show does a diff on-the-fly, and object deduplication and compression makes this work without ballooning the size. But it is still surprising to those that don't do a lot of merges.)
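
In shell terms, each segment of that loop looked roughly like this (a reconstruction with placeholder names, not the actual script):

git merge <last-conflict-free-commit>    # merge as far as possible cleanly
git merge <first-conflicting-commit>     # stops with conflicts
$EDITOR src/search.cpp                   # resolve, possibly writing new code
git add -u
git commit                               # this merge commit may contain new code
git notes add -m 'needed a new approach here'   # if the merge was nontrivial
# ...and repeat for the next segment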

That's where the nice parts ended, and the problems began. (Even ignoring that a conflict-free merge could break the compile, of course.) Because I realized that while I had merged everything, it wasn't actually done; the MPI support didn't even compile, for one, and once I had fixed that, I realized that I wanted to fix typos in commit messages, fix bugs pointed out to me by reviewers, and so on. In short, I wanted to rewrite history. And that's not where Git shines.

Everyone who works with a patch-based review flow (as opposed to having a throwaway branch per feature with lots of commits like “answer review comments #13” and then squash-merging it or similar) will know that git's basic answer to this is git rebase. rebase essentially sets up a script of what commits you've done, then executes a script (potentially at a different starting point, so you could get conflicts). Interactive rebase simply lets you edit that script in various ways, so that you can e.g. modify a commit message on the way, or take out a commit, or (more interestingly) make changes to a commit before continuing.

However, when merges are involved, regular interactive rebase just breaks down completely. It assumes that you don't really want merges; you just want a nice linear series of commits. And that's nice, except that in this case, I wanted the merges because the entire point was to upmerge. So then I needed to invoke git rebase --rebase-merges, which makes the script language into a somewhat different one that's subtly different and vastly more complicated (it basically sets up a list of ephemeral branches as “labels” to specify the trees that are merged into the various merge commits). And this is fine—until you want to edit that script.
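
For reference, a --rebase-merges todo list looks roughly like this (a made-up excerpt; hashes, labels, and messages are placeholders):

label onto

# Branch mpi-fixes
reset onto
pick 1a2b3c4 Adapt cluster code to refactored search
label mpi-fixes

reset onto
merge -C 5d6e7f8 mpi-fixes # Merge branch 'mpi-fixes'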

In particular, let's take a fairly trivial change: Modifying a commit message. The merge command in the rebase script takes in a commit hash that's only used for the commit message and nothing else (the contents of the tree are ignored), and you can choose to either use a different hash or modify the message in an editor after-the-fact. And you can try to do this, but… then you get a merge conflict later in the rebase. What?

It turns out that git has a native machinery for remembering conflict resolutions. It basically remembers that you tried to merge commit A and B and ended up committing C (possibly after manual conflict resolution); so any merge of A and B will cause git to look that up and just use C. But that's not what really happened; since you modified the commit message of A (or even just its commit date), it changed its hash and became A', and now you're trying to merge A' and B, for which git has no conflict resolution remembered, and you're back to square one and have to do the resolution yourself. I had assumed that the merge remembered how to merge trees, but evidently it's on entire commits.

But wait, I hear you say; the solution for this is git-rerere! rerere exists precisely for this purpose; it remembers conflict resolutions you've done before and tries to reapply them. It only remembers merge conflicts you did when rerere was actually active, but there's a contrib script to “learn” from before that time, which works OK. So I tried to run the learn script and run the rebase… and it stopped with a merge conflict. You see, git rerere doesn't stop the conflicts, it just resolves them and then you have to continue the rebase yourself from the shell as usual. So I did that 20+ times (I can tell you, this gets tedious real quick)… and ended up with a different result. The tree simply wasn't the same as before the merge, even though I had only changed a commit message.
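
For the record, the steps involved were roughly these (my reconstruction; the contrib script's location varies by installation):

git config rerere.enabled true
# teach rerere from the resolutions embedded in existing merge commits;
# rerere-train.sh ships in git's contrib/ directory
contrib/rerere-train.sh origin/master..cluster
# then re-run the rebase; at each conflict stop, rerere has already
# resolved the files, but you still have to continue by hand
git add -u && git rebase --continue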

See, the problem is that rerere remembers conflicts, not merges. It has to, in order to reach its goal of being able to reapply conflict resolutions even if other parts of the file have changed. (Otherwise, it would be only marginally more useful than git's existing native support, which we discussed earlier.) But in this case, two or more conflicts in the rebase looked too similar to each other, yet needed different resolutions. So it picked the wrong resolution and ended up with a silent mismerge. And there's no way to guide it towards which one should apply when, so rerere was also out of the question.

This post is already long enough as it is; next time, we'll discuss the (horrible) workaround I used to actually (mostly) solve the problem.

02 January, 2026 09:50AM

Birger Schacht

Status update, December 2025

December 2025 started off with a nice event, namely a small gathering of Vienna based DDs. Some of us were at DebConf25 in Brest and we thought it might be nice to have a get-together of DDs in Vienna. A couple of months after DebConf25 I picked up the idea, let someone else ping the DDs, booked a table at a local cafe and in the end we were a group of 6 DDs. It was nice to put faces to names, names to nicknames and to hear what people are up to. We are definitely planning to repeat that!

December also ended with a meeting of nerds: the 39th Chaos Communication Congress in Hamburg. As usual, I did not really have that much time to watch many talks. I tend to bookmark a lot of them in the scheduling app in advance, but once I’m at the congress the social aspect is much more important and I try to only attend workshops or talks that are not recorded. Watching the recordings afterward is possible anyway (and I actually try to do that!).

There was also a Debian Developers meetup on day 3, combined with the usual time confusion regarding UTC and CET. We talked about having a Debian table at 40c3, so maybe the timezone won’t be that much of a problem next time.

Two talks I recommend are CSS Clicker Training: Making games in a “styling” language and To sign or not to sign: Practical vulnerabilities in GPG & friends.

Regarding package uploads, not much happened this month; I only uploaded the new version (0.9.3) of labwc.

I created two new releases for carl. First a 0.5 release that adds Today and SpecifiedDate as properties. I forwarded an issue about dates not being parsed correctly to the icalendar issue tracker and this was fixed a couple of days later (thanks!). I then created a 0.5.1 release containing that fix. I also started planning to move the carl repository back to codeberg, because Github feels more and more like an AI Slop platform.

The work on debiverse also continued. I removed the tailwind CSS framework, and it was actually not that hard to reproduce all the needed CSS classes with custom CSS. I think that CSS frameworks make sense to a point, but once you start implementing stuff that the framework does not provide, it is easier if everything comes out of one set of rules. There was also the article Vanilla CSS is all you need which goes in the same direction and which gave me some ideas how to organize the CSS directives.

I also refactored the filter generation for the listing filters; the HTML filter form is now generated from the FastAPI Query Parameter Model.
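
As an illustration of that pattern (a minimal sketch with invented field names, not debiverse's actual code):

from typing import Annotated, Optional

from fastapi import FastAPI, Query
from pydantic import BaseModel

class BugFilter(BaseModel):
    package: Optional[str] = None
    severity: Optional[str] = None

app = FastAPI()

@app.get("/bugs")
def list_bugs(filters: Annotated[BugFilter, Query()]):
    # the same model's fields can drive the rendering of the HTML filter form
    return filters.model_dump(exclude_none=True)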

Screenshot of the filter form

For navigation I implemented a sidebar, that is hidden on small screens but can be toggled using a burger menu.

Screenshot of the navigation bar

I also stumbled upon An uncomfortable but necessary discussion about the Debian bug tracker, which raises some valid points. I think debiverse could be a solution to the first point of “What could be a way forward?”, namely: “Create a new web service that parses the existing bug data and displays it in a “rich” format”.

But if there is ever another way than email to interact with bugs.debian.org, then this approach should not rely on passing on the commands via mail. If I click a button in a web interface to raise the severity, the severity should be raised right away - not 10 minutes later when the email is received. I think the individual parts (web, database, mail interface) should be decoupled and talk to each other via APIs.

02 January, 2026 05:28AM

January 01, 2026

Dima Kogan

Using libpython3 without linking it in; and old Python, g++ compatibility patches

I just released mrcal 2.5; much more about that in a future post. Here, I'd like to talk about some implementation details.

libpython3 and linking

mrcal is a C library and a Python library. Much of mrcal itself interfaces between the C and Python libraries. And it is common for external libraries to want to pass Python mrcal.cameramodel objects to their C code. The obvious way to do this is in a converter function in an O& argument to PyArg_ParseTupleAndKeywords(). I wrote this mrcal_cameramodel_converter() function, which opened a whole can of worms when thinking about the compiling and linking and distribution of this thing.

mrcal_cameramodel_converter() is meant to be called by code that implements Python-wrapping of C code. This function will be called by the PyArg_ParseTupleAndKeywords() Python library function, and it uses the Python C API itself. Since it uses the Python C API, it would normally link against libpython. However:

  • The natural place to distribute this is in libmrcal.so, but this library doesn't touch Python, and I'd rather not pull in all of libpython for this utility function, even in the 99% case when that function won't even be called
  • In some cases linking to libpython actually breaks things, so I never do that anymore anyway. This is fine: since this code will only ever be called by libpython itself, we're guaranteed that libpython will already be loaded, and we don't need to ask for it.

OK, let's not link to libpython then. But if we do that, we're going to have unresolved references to our libpython calls, and the loader will complain when loading libmrcal.so, even if we're not actually calling those functions. This has an obvious solution: the references to the libpython calls should be marked weak. That won't generate unresolved-reference errors, and everything will be great.

OK, how do we mark things weak? There are two usual methods:

  1. We mark the declaration (or definition?) of the relevant functions with __attribute__((weak))
  2. We weaken the symbols after the compile with objcopy --weaken.

Method 1 is more work: I don't want to keep track of what Python API calls I'm actually making. This is non-trivial, because some of the Py_...() invocations in my code are actually macros that call functions internally that I must weaken. Furthermore, all the functions are declared in Python.h, which I don't control. I can re-declare stuff with __attribute__((weak)), but then I have to match the prototypes. And I have to hope that re-declaring these will make __attribute__((weak)) actually work.

So clearly I want method 2. I implemented it:

python-cameramodel-converter.o: %.o:%.c
        $(c_build_rule); mv $@ _$@
        $(OBJCOPY) --wildcard --weaken-symbol='Py*' --weaken-symbol='_Py*' _$@ $@

Works great on my machine! But doesn't work on other people's machines. Because only the most recent objcopy tool actually works to weaken references. Apparently the older tools only weaken definitions, which isn't useful to me, and the tool only started handling references very recently.

Well that sucks. I guess I will need to mark the symbols with __attribute__((weak)) after all. I use the nm tool to find the symbols that should be weakened, and I apply the attribute with this macro:

#define WEAKEN(f) extern __typeof__(f) f __attribute__((weak));
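
Usage then looks like this (the particular symbols are just examples of mine; the real list comes from nm):

#include <Python.h>

// re-declare each libpython symbol we reference as a weak symbol;
// __typeof__ copies the prototype so we don't have to repeat it
WEAKEN(PyArg_ParseTupleAndKeywords)
WEAKEN(PyTuple_GetItem)
WEAKEN(PyErr_SetString)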

The prototypes are handled by __typeof__. So are we done? With gcc, we are done. With clang we are not done. Apparently this macro does not weaken symbols generated by inline function calls if using clang. I have no idea if this is a bug. The Python internal machinery has some of these, so this doesn't weaken all the symbols. I give up on the people that both have a too-old objcopy and are using clang, and declare victory. So the logic ends up being:

  1. Compile
  2. objcopy --weaken
  3. nm to find the non-weak Python references
  4. If there aren't any, our objcopy call worked and we're done!
  5. Otherwise, compile again, but explicitly asking to weaken those symbols
  6. nm again to see if the compiler didn't do it
  7. If any non-weak references still remain, complain and give up.
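
In shell terms, the check looks something like this (file names are placeholders; the real logic lives in the build system):

objcopy --wildcard --weaken-symbol='Py*' --weaken-symbol='_Py*' converter.o
# nm flags strong undefined references with 'U'; weakened ones show as 'w'
if nm converter.o | grep -E ' U _?Py' >/dev/null; then
    echo "objcopy too old to weaken references; recompiling with WEAKEN()" >&2
fi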

Whew. This logic appears here and here. There were even more things to deal with here: calling nm and objcopy needed special attention and build-system support in case we were cross-building. I took care of it in mrbuild.

This worked for a while. Until the converter code started to fail. Because ….

Supporting old Python

…. I was using PyTuple_GET_ITEM(). This is a macro to access PyTupleObject data. So the layout of PyTupleObject ended up encoded in libmrcal.so. But apparently this wasn't stable, and changed between Python3.13 and Python3.14. As described above, I'm not linking to libpython, so there's no NEEDED tag to make sure we pull in the right version. The solution was to call the PyTuple_GetItem() function instead. This is unsatisfying, and means that in theory other stuff here might stop working in some Python 3.future, but I'm ready to move on for now.
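
The change itself is tiny; the difference is where the tuple layout gets resolved (an illustration, not mrcal's actual code):

/* before: a macro that reads PyTupleObject fields directly, freezing
   the struct layout into libmrcal.so at compile time */
PyObject* item = PyTuple_GET_ITEM(tuple, 0);

/* after: a real function call, resolved at run time against whatever
   libpython happens to be loaded */
PyObject* item = PyTuple_GetItem(tuple, 0);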

There were other annoying gymnastics that had to be performed to make this work with old-but-not-super old tooling.

The Python people deprecated PyModule_AddObject(), and added PyModule_Add() as a replacement. I want to support Pythons before and after this happened, so I needed some if statements. Today the old function still works, but eventually it will stop, and I would have needed to do this typing sooner or later.

Supporting old C++ compilers

mrcal is a C project, but it is common for people to want to #include the headers from C++. I widely use C99 designated initializers (27 years old in C!), which cause issues with not-very-old C++ compilers. I worked around this initialization in one spot, and disabled a feature for too-old compilers in another spot. Fortunately, semi-recent tooling supports my usages, so this is becoming a non-issue as time goes on.

01 January, 2026 09:52PM by Dima Kogan

Russ Allbery

2025 Book Reading in Review

In 2025, I finished and reviewed 32 books, not counting another five books I've finished but not yet reviewed and which will therefore roll over to 2026.

This was not a great reading year, although not my worst reading year since I started keeping track. I'm not entirely sure why, although part of the explanation was that I hit a bad stretch of books in spring of 2025 and got into a bit of a reading slump. Mostly, though, I shifted a lot of reading this year to short non-fiction (newsletters and doom-scrolling) and spent rather more time than I intended watching YouTube videos, and sadly each hour in the day can only be allocated one way.

This year felt a bit like a holding pattern. I have some hopes of being more proactive and intentional in 2026. I'm still working on finding a good balance between all of my hobbies and the enjoyment of mindless entertainment.

The best book I read this year was also the last book I reviewed (and yes, I snuck the review under the wire for that reason): Bethany Jacobs's This Brutal Moon, the conclusion of the Kindom Trilogy that started with These Burning Stars. I thought the first two books of the series were interesting but flawed, but the conclusion blew me away and improved the entire trilogy in retrospect. Like all books I rate 10 out of 10, I'm sure a large part of my reaction is idiosyncratic, but two friends of mine also loved the conclusion so it's not just me.

The stand-out non-fiction book of the year was Rory Stewart's Politics on the Edge. I have a lot of disagreements with Stewart's political positions (the more I listen to him, the more disagreements I find), but he is an excellent memoirist who skewers the banality, superficiality, and contempt for competence that has become so prevailing in centrist and right-wing politics. It's hard not to read this book and despair of electoralism and the current structures of governments, but it's bracing to know that even some people I disagree with believe in the value of expertise.

I also finished Suzanne Palmer's excellent Finder Chronicles series, reading The Scavenger Door and Ghostdrift. This series is some of the best science fiction I've read in a long time and I'm sad it is over (at least for now). Palmer has a new, unrelated book coming in 2026 (Ode to the Half-Broken), and I'm looking forward to reading that.

This year, I experimented with re-reading books I had already reviewed for the first time since I started writing reviews. After my reading slump, I felt like revisiting something I knew I liked, and therefore re-read C.J. Cherryh's Cyteen and Regenesis. Cyteen mostly held up, but Regenesis was worse than I had remembered. I experimented with a way to add on to my previous reviews, but I didn't like the results and the whole process of re-reading and re-reviewing annoyed me. I'm counting this as a failed experiment, which means I've still not solved the problem of how to revisit series that I read long enough ago that I want to re-read them before picking up the new book. (You may have noticed that I've not read the new Jacqueline Carey Kushiel novel, for example.)

You may have also noticed that I didn't start a new series re-read, or continue my semi-in-progress re-reads of Mercedes Lackey or David Eddings. I have tentative plans to kick off a new series re-read in 2026, but I'm not ready to commit to that yet.

As always, I have no firm numeric goals for the next year, but I hope to avoid another reading slump and drag my reading attention back from lower-quality and mostly-depressing material in 2026.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

01 January, 2026 09:12PM

hackergotchi for Daniel Pocock

Daniel Pocock

Crans-Montana: Le Constellation ownership, Jacques Moretti and Jessica Maric, Lens (CH)

News reports have appeared about an explosion at the bar Le Constellation in Crans-Montana, Switzerland.

A 2016 news report from Le Nouvelliste quotes the owner of the bar about his acquisition of the establishment:

Coup de foudre! Ce commerçant corse décide d’ouvrir une affaire. Ça tombe bien, le Constellation, au centre de Crans, est à remettre. Il faudra attendre juin 2015 pour signer un accord. Durant six mois, Jacques Moretti retrousse ses manches et relooke l’établissement. «J’ai quasiment tout fait moi-même. Regardez ces murs, il y a 14 tonnes de pierres sèches, elles viennent de Saint-Léonard!» Depuis décembre 2015, le Constellation sert d’écrin aux produits corses. Charcuteries, vins, bières, liqueur de myrte et même whisky au parfum de châtaigne. «Mais attention, j’ai à cœur de présenter aussi le terroir valaisan. Vous avez de très bons vins, c’est un plaisir de les servir à mes clients.» Le Corse avoue se sentir très bien chez nous. «Vous savez, on est pareil. On est d’abord des montagnards. Avec la tête dure, mais surtout avec beaucoup de gentillesse.»

Translated to English:

It was love at first sight! This Corsican businessman decided to open a business. Conveniently, Le Constellation, in the center of Crans, was for sale. He had to wait until June 2015 to sign the agreement. For six months, Jacques Moretti rolled up his sleeves and completely renovated the establishment. "I did almost everything myself. Look at these walls, there are 14 tons of dry-stone masonry, and the stones come from Saint-Léonard!" Since December 2015, Le Constellation has been showcasing Corsican products: cured meats, wines, beers, myrtle liqueur, and even chestnut-flavored whisky. "But mind you, I'm also keen to feature local Valais products. You have some excellent wines here; it's a pleasure to serve them to my customers." The Corsican admits he feels very much at home here. "You know, we're alike. We're both mountain people at heart. Stubborn, perhaps, but above all, very kind."

Jacques Moretti on LinkedIn.

The news report notes he did everything himself but doesn't ask whether controlled works, such as the electrical, gas and fire safety systems, were also DIY.

The Facebook page for the bar has been taken down. The bar published a profile, with pictures and contact details, on the site of the local tourist office.

These are the details the owners chose to make public:

Rue Centrale 35
3963 Crans-Montana
constellationcransmontana@gmail.com
+41 78 717 14 86
www.facebook.com/leconstellation

Switzerland has 26 cantons and each canton maintains its own business register.

I previously had to research the scandal involving an illegal legal insurance scheme being operated across the border between Switzerland and France. The presence of records about multiple nominee owners and business entities in different cantons made it hard to find the truth. Nonetheless, the truth came out on the JuristGate web site.

Le Constellation is in the Canton of Valais and the business records can be searched in this public database.

The search reveals the owners are Jacques Moretti, a French citizen domiciled in Lens, and Jessica Maric, his spouse, who is also a French citizen.

The records mention they are domiciled in the Swiss village Lens, not to be confused with the French city of the same name.

Jessica Maric on LinkedIn

Searching for their names finds other businesses, including Le Vieux Chalet 1978, and Le Senso.

More links

Article about the couple in Altitude Immobilier magazine

Blog about their other restaurant from food critic Gilles Pudlowski.

To read more about researching businesses in Switzerland, please see the JuristGate web site.

01 January, 2026 07:30AM

Russ Allbery

Review: This Brutal Moon

Review: This Brutal Moon, by Bethany Jacobs

Series: Kindom Trilogy #3
Publisher: Orbit
Copyright: December 2025
ISBN: 0-316-46373-6
Format: Kindle
Pages: 497

This Brutal Moon is a science fiction thriller with bits of cyberpunk and space opera. It concludes the trilogy begun with These Burning Stars. The three books tell one story in three volumes, and ideally you would read all three in close succession.

There is a massive twist in the first book that I am still not trying to spoil, so please forgive some vague description.

At the conclusion of These Burning Stars, Jacobs had moved a lot of pieces into position, but it was not yet clear to me where the plot was going, or even if it would come to a solid ending in three volumes as promised by the series title. It does. This Brutal Moon opens with some of the political maneuvering that characterized These Burning Stars, but once things start happening, the reader gets all of the action they could wish for and then some.

I am pleased to report that, at least as far as I'm concerned, Jacobs nails the ending. Not only is it deeply satisfying, the characterization in this book is so good, and adds so smoothly to the characterization of the previous books, that I saw the whole series in a new light. I thought this was one of the best science fiction series finales I've ever read. Take that with a grain of salt, since some of those reasons are specific to me and the mood I was in when I read it, but this is fantastic stuff.

There is a lot of action at the climax of this book, split across at least four vantage points and linked in a grand strategy with chaotic surprises. I kept all of the pieces straight and understood how they were linked thanks to Jacobs's clear narration, which is impressive given the number of pieces in motion. That's not the heart of this book, though. The action climax is payoff for the readers who want to see some ass-kicking, and it does contain some moving and memorable moments, but it relies on some questionable villain behavior and a convenient plot device introduced only in this volume. The action-thriller payoff is competent but not, I think, outstanding.

What put this book into a category of its own were the characters, and specifically how Jacobs assembles sweeping political consequences from characters who, each alone, would never have brought about such a thing, and in some cases had little desire for it.

Looking back on the trilogy, I think Jacobs has captured, among all of the violence and action-movie combat and space-opera politics, the understanding that political upheaval is a relay race. The people who have the personalities to start it don't have the personality required to nurture it or supply it, and those who can end it are yet again different. This series is a fascinating catalog of political actors — the instigator, the idealist, the pragmatist, the soldier, the one who supports her friends, and several varieties and intensities of leaders — and it respects all of them without anointing any of them as the One True Revolutionary. The characters are larger than life, yes, and this series isn't going to win awards for gritty realism, but it's saying something satisfyingly complex about where we find courage and how a cause is pushed forward by different people with different skills and emotions at different points in time. Sometimes accidentally, and often in entirely unexpected ways.

As before, the main story is interwoven with flashbacks. This time, we finally see the full story of the destruction of the moon of Jeve. The reader has known about this since the first volume, but Jacobs has a few more secrets to show (including, I will admit, setting up a plot device) and some pointed commentary on resource extraction economies. I think this part of the book was a bit obviously constructed, although the characterization was great and the visible junction points of the plot didn't stop me from enjoying the thrill when the pieces came together.

But the best part of this book was the fact there was 10% of it left after the climax. Jacobs wrote an actual denouement, and it was everything I wanted and then some. We get proper story conclusions for each of the characters, several powerful emotional gut punches, some remarkably subtle and thoughtful discussion of political construction for a series that tended more towards space-opera action, and a conclusion for the primary series relationship that may not be to every reader's taste but was utterly, perfectly, beautifully correct for mine. I spent a whole lot of the last fifty pages of this book trying not to cry, in the best way.

The character evolution over the course of this series is simply superb. Each character ages like fine wine, developing more depth, more nuance, but without merging. They become more themselves, which is an impressive feat across at least four very different major characters. You can see the vulnerabilities and know what put them there, you can see the strengths they developed to compensate, and you can see why they need the support the other characters provide. And each of them is so delightfully different.

This was so good. This was so precisely the type of story that I was in the mood for, with just the type of tenderness for its characters that I wanted, that I am certain I am not objective about it. It will be one of those books where other people will complain about flaws that I didn't see or didn't care about because it was doing the things I wanted from it so perfectly. It's so good that it elevated the entire trilogy; the journey was so worth the ending.

I'm afraid this review will be less than helpful because it's mostly nonspecific raving. This series is such a spoiler minefield that I'd need a full spoiler review to be specific, but my reaction is so driven by emotion that I'm not sure that would help if the characters didn't strike you the way that they struck me. I think the best advice I can offer is to say that if you liked the emotional tone of the end of These Burning Stars (not the big plot twist, the character reaction to the political goal that you learn drove the plot), stick with the series, because that's a sign of the questions Jacobs is asking. If you didn't like the characters at the end (not the middle) of the first novel, bail out, because you're going to get a lot more of that.

Highly, highly recommended, and the best thing I've read all year, with the caveats that you should read the content notes, and that some people are going to bounce off this series because it's too intense and melodramatic. That intensity will not let up, so if that's not what you're in the mood for, wait on this trilogy until you are.

Content notes: Graphic violence, torture, mentions of off-screen child sexual assault, a graphic corpse, and a whole lot of trauma.

One somewhat grumbly postscript: This is the sort of book where I need to not read other people's reviews because I'll get too defensive of it (it's just a book I liked!). But there is one bit of review commentary I've seen about the trilogy that annoys me enough I have to mention it. Other reviewers seem to be latching on to the Jeveni (an ethnic group in the trilogy) as Space Jews and then having various feelings about that.

I can see some parallels, I'm not going to say that it's completely wrong, but I also beg people to read about a fictional oppressed ethnic and religious minority and not immediately think "oh, they must be stand-ins for Jews." That's kind of weird? And people from the US, in particular, perhaps should not read a story about an ethnic group enslaved due to their productive skill and economic value and think "they must be analogous to Jews, there are no other possible parallels here." There are a lot of other comparisons that can be made, including to the commonalities between the methods many different oppressed minorities have used to survive and preserve their culture.

Rating: 10 out of 10

01 January, 2026 05:27AM

December 31, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

Happy new year.

Happy new year.

31 December, 2025 10:42PM by Junichi Uekawa

hackergotchi for Bits from Debian

Bits from Debian

DebConf26 dates announced

DebConf26 artwork by Romina Molina

As announced in Brest, France, in July, the Debian Conference is heading to Santa Fe, Argentina.

The DebConf26 team and the local organizers team in Argentina are excited to announce the dates of DebConf26, the 27th edition of the Debian Developers and Contributors Conference:

DebCamp, the annual hacking session, will run from Monday July 13th to Sunday July 19th 2026, followed by DebConf from Monday July 20th to Saturday July 25th 2026.

For all those who wish to meet us in Santa Fe, the next step will be the opening of registration on January 26, 2026. The call for proposals period for anyone wishing to submit a conference or event proposal will be launched on the same day.

DebConf26 is looking for sponsors; if you are interested or think you know of others who would be willing to help, please have a look at our sponsorship page and get in touch with sponsors@debconf.org.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, and Korea. More information about DebConf is available from https://debconf.org/.

For further information, please visit the DebConf26 web page at https://debconf26.debconf.org/ or send mail to press@debian.org.

DebConf26 is made possible by Proxmox and others.

31 December, 2025 05:00PM by Publicity team

hackergotchi for Chris Lamb

Chris Lamb

Favourites of 2025

Here are my favourite books and movies that I read and watched throughout 2025.

§

Books

Eliza Clark: Boy Parts (2020)
Rachel Cusk: The Outline Trilogy (2014—2018)
Edith Wharton: The House of Mirth (1905)
Michael Finkel: The Art Thief (2023)
Tony Judt: When the Facts Change: Essays 1995-2010 (2010)
Jennette McCurdy: I'm Glad My Mom Died (2022)
Joan Didion: The Year of Magical Thinking (2005)
Jill Lepore: These Truths: A History of the United States (2018)

§

Films

Recent releases

Disappointments this year included 28 Years Later (Danny Boyle, 2025), Cover-Up (Laura Poitras & Mark Obenhaus, 2025), Bugonia (Yorgos Lanthimos, 2025) and Caught Stealing (Darren Aronofsky, 2025).


Older releases

i.e. films released before 2024, and not including rewatches from previous years.

Distinctly unenjoyable watches included War of the Worlds (Rich Lee, 2025), Highest 2 Lowest (Spike Lee, 2025), Elizabethtown (Cameron Crowe, 2005), Crazy Rich Asians (Jon M. Chu, 2018) and Spinal Tap II: The End Continues (Rob Reiner, 2025).

On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Chinatown (Roman Polanski, 1974), Koyaanisqatsi (Godfrey Reggio, 1982), Heat (Michael Mann, 1995) and Night of the Hunter (Charles Laughton, 1955).


31 December, 2025 08:58AM

December 30, 2025

Russ Allbery

Review: Dark Ambitions

Review: Dark Ambitions, by Michelle Diener

Series: Class 5 #4.5
Publisher: Eclipse
Copyright: 2020
ISBN: 1-7637844-2-8
Format: Kindle
Pages: 81

Dark Ambitions is a science fiction romance novella set in Michelle Diener's Class 5 series, following the events of Dark Matters. It returns to Rose as the protagonist and in that sense is a sequel to Dark Horse, but you don't have to remember that book in detail to read this novella.

Rose and Dav (and the Class 5 ship Sazo) are escorting an exploration team to a planet that is being evaluated for settlement. Rose has her heart set on going down to the planet, feeling the breeze, and enjoying the plant life. Dav and his ship are called away to deal with a hostage situation. He tries to talk her out of going down without him, but Rose is having none of it. Predictably, hijinks ensue.

This is a very slight novella dropped into the middle of the series but not (at least so far as I can tell) important in any way to the overall plot. It provides a bit of a coda to Rose's story from Dark Horse, but given that Rose has made cameos in all of the other books, readers aren't going to learn much new here. According to the Amazon blurb, it was originally published in the Pets in Space 5 anthology. The pet in question is a tiny creature a bit like a flying squirrel that Rose rescues and that then helps Rose in exactly the way that you would predict in this sort of story.

This is so slight and predictable that it's hard to find enough to say about it to write a review. Dav is protective in a way that I found annoying and kind of sexist. Rose doesn't let that restrict her decisions, but seems to find this behavior more charming than I did. There is a tiny bit of Rose being awesome but a bit more damsel in distress than the series usually goes for. The cute animal is cute. There's the obligatory armory scene with another round of technomagical weapons that I think has appeared in every book in this series. It all runs on rather obvious rails.

There is a subplot involving Rose feeling some mysterious illness while on the planet that annoyed me entirely out of proportion to how annoying it is objectively, mostly because mysterious illnesses tend to ramp up my anxiety, which is not a pleasant reading emotion. This objection is probably specific to me.

This is completely skippable. I was told that in advance and thus only have myself to blame, but despite my completionist streak, I wish I'd skipped it. We learn one piece of series information that will probably come up in the future, but it's not the sort of information that would lead me to seek out a story about it. Otherwise, there's nothing wrong with it, really, but it would be a minor and entirely forgettable chapter in a longer novel, padded out with a cute animal and Dav trying to be smothering.

Not recommended just because you probably have something better to do with that reading time (reading the next full book of the series, for example), but there's nothing wrong with this if you want to read it anyway.

Followed by Dark Class.

Rating: 5 out of 10

30 December, 2025 06:19AM

December 28, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Our study, 2025

We’re currently thinking of renovating our study/home office. I’ll likely write more about that project. Embarking on it reminded me that I’d taken a photo of the state of it nearly a year ago and forgot to post it, so here it is.

Home workspace, January 2025

When I took that pic last January, it had been three years since the last one, and the major difference was a reduction in clutter. I've added a lava lamp (charity shop find) and a Rob Sheridan print. We got rid of the POÄNG chair (originally bought for breastfeeding) so we currently have no alternate seating besides the desk chair.

As much as I love my vintage mahogany writing desk, our current thinking is it’s likely to go. I’m exploring whether we could fit in two smaller desks: one main one for the computer, and another “workbench” for play: the synthesiser, Amiga, crafting and 3d printing projects, etc.

28 December, 2025 08:25AM

Balasankar 'Balu' C

Granting Namespace-Specific Access in GKE Clusters

Heyo,

In production Kubernetes environments, access control becomes critical when multiple services share the same cluster. I recently faced this exact scenario: a GKE cluster hosting multiple services across different namespaces, where a new team needed access to maintain and debug their service, but only their service.

The requirement was straightforward yet specific: grant external users the ability to exec into pods, view logs, and forward ports, but restrict this access to a single namespace within a single GKE cluster. No access to other clusters in the Google Cloud project, and no access to other namespaces.

The Solution

Achieving this granular access control requires combining Google Cloud IAM with Kubernetes RBAC (Role-Based Access Control). Here’s how to implement it:

Step 1: Tag Your GKE Cluster

First, apply a unique tag to your GKE cluster. This tag will serve as the identifier for IAM policies.

Step 2: Grant IAM Access via Tags

Add an IAM policy binding that grants users access to resources with your specific tag. The Kubernetes Engine Viewer role (roles/container.viewer) provides sufficient base permissions without granting excessive access.
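
For illustration, steps 1 and 2 might look like this with gcloud (every ID and name below is a placeholder, and the flag syntax should be double-checked against the current tags and IAM documentation):

# Step 1: bind an existing tag value to the cluster
gcloud resource-manager tags bindings create \
    --tag-value=tagValues/1234567890 \
    --parent=//container.googleapis.com/projects/my-project/locations/us-central1/clusters/my-cluster \
    --location=us-central1

# Step 2: grant Kubernetes Engine Viewer, conditioned on that tag;
# condition.yaml holds a title and a CEL expression such as
# resource.matchTag('123456789012/team', 'my-team')
gcloud projects add-iam-policy-binding my-project \
    --member='user:myuser@gmail.com' \
    --role='roles/container.viewer' \
    --condition-from-file=condition.yaml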

Step 3: Create a Kubernetes ClusterRole

Define a ClusterRole that specifies the exact permissions needed:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-access-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "pods/portforward", "pods/log"]
    verbs: ["get", "list", "watch", "create"]

Note: While you could use a namespace-scoped Role, a ClusterRole offers better reusability if you need similar permissions for other namespaces later.

Step 4: Bind the Role to Users

Create a RoleBinding to connect the role to specific users and namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-rolebinding
  namespace: my-namespace
subjects:
  - kind: User
    name: myuser@gmail.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: custom-access-role
  apiGroup: rbac.authorization.k8s.io

Apply both configurations using kubectl apply -f <filename>.
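
Once both are applied, the granted user can fetch credentials for the cluster and sanity-check the result with kubectl auth can-i (cluster name and location are placeholders):

gcloud container clusters get-credentials my-cluster --location=us-central1

kubectl auth can-i create pods/exec -n my-namespace      # expect "yes"
kubectl auth can-i get pods/log -n my-namespace          # expect "yes"
kubectl auth can-i list pods -n some-other-namespace     # expect "no"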

How It Works

This approach creates a two-layer security model:

  • GCP IAM controls which clusters users can access using resource tags
  • Kubernetes RBAC controls what users can do within the cluster and limits their scope to specific namespaces

The result is a secure, maintainable solution that grants teams the access they need without compromising the security of other services in your cluster.

28 December, 2025 06:00AM

December 25, 2025

Russ Allbery

Review: Machine

Review: Machine, by Elizabeth Bear

Series: White Space #2
Publisher: Saga Press
Copyright: October 2020
ISBN: 1-5344-0303-5
Format: Kindle
Pages: 485

Machine is a far-future space opera. It is a loose sequel to Ancestral Night, but you do not have to remember the first book to enjoy this book and they have only a couple of secondary characters in common. There are passing spoilers for Ancestral Night in the story, though, if you care.

Dr. Brookllyn Jens is a rescue paramedic on Synarche Medical Vessel I Race To Seek the Living. That means she goes into dangerous situations to get you out of them, patches you up enough to not die, and brings you to doctors who can do the slower and more time-consuming work. She was previously a cop (well, Judiciary, which in this universe is mostly the same thing) and then found that medicine, and specifically the flagship Synarche hospital Core General, was the institution in all the universe that she believed in the most.

As Machine opens, Jens is boarding the Big Rock Candy Mountain, a generation ship launched from Earth during the bad era before right-minding and joining the Synarche, back when it looked like humanity on Earth wouldn't survive. Big Rock Candy Mountain was discovered by accident in the wrong place, going faster than it was supposed to be going and not responding to hails. The Synarche ship that first discovered and docked with it is also mysteriously silent. It's the job of Jens and her colleagues to get on board, see if anyone is still alive, and rescue them if possible.

What they find is a corpse and a disturbingly servile early AI guarding a whole lot of people frozen in primitive cryobeds, along with odd artificial machinery that seems to be controlled by the AI. Or possibly controlling the AI.

Jens assumes her job will be complete once she gets the cryobeds and the AI back to Core General where both the humans and the AI can be treated by appropriate doctors. Jens is very wrong.

Machine is Elizabeth Bear's version of a James White Sector General novel. If you read this book without any prior knowledge, the way that I did, you may not realize this until the characters make it to Core General, but then it becomes obvious to anyone who has read White's series. Most of the standard Sector General elements are here: A vast space station with rings at different gravity levels and atmospheres, a baffling array of species, and the ability to load other people's personalities into your head to treat other species at the cost of discomfort and body dysmorphia. There's a gruff supervisor, a fragile alien doctor, and a whole lot of idealistic and well-meaning people working around complex interspecies differences. Sadly, Bear does drop White's entertainingly oversimplified species classification codes; this is the correct call for suspension of disbelief, but I kind of missed them.

I thoroughly enjoy the idea of the Sector General series, so I was delighted by an updated version that drops the sexism and the doctor/nurse hierarchy and adds AIs, doctors for AIs, and a more complicated political structure. The hospital is even run by a sentient tree, which is an inspired choice.

Bear, of course, doesn't settle for a relatively simple James White problem-solving plot. There are interlocking, layered problems here, medical and political, immediate and structural, that unwind in ways that I found satisfyingly twisty. As with Ancestral Night, Bear has some complex points to make about morality. I think that aspect of the story was a bit less convincing than Ancestral Night, in part because some of the characters use rather bizarre tactics (although I will grant they are the sort of bizarre tactics that I could imagine would be used by well-meaning people who didn't think through all of the possible consequences). I enjoyed the ethical dilemmas here, but they didn't grab me the way that Ancestral Night did. The setting, though, is even better: An interspecies hospital was a brilliant setting when James White used it, and it continues to be a brilliant setting in Bear's hands.

It's also worth mentioning that Jens has a chronic inflammatory disease and uses an exoskeleton for mobility, and (as much as I can judge while not being disabled myself) everything about this aspect of the character was excellent. It's rare to see characters with meaningful disabilities in far-future science fiction. When present at all, they're usually treated like Geordi's sight: something little different than the differential abilities of the various aliens, or even a backdoor advantage. Jens has a true, meaningful disability that she has to manage and that causes a constant cognitive drain, and the treatment of her assistive device is complex and nuanced in a way that I found thoughtful and satisfying.

The one structural complaint that I will make is that Jens is an astonishingly talkative first-person protagonist, particularly for an Elizabeth Bear novel. This is still better than being inscrutable, but she is prone to such extended philosophical digressions or infodumps in the middle of a scene that I found myself wishing she'd get on with it already in a few places. This provides good characterization, in the sense that the reader certainly gets inside Jens's head, but I think Bear didn't get the balance quite right.

That complaint aside, this was very fun, and I am certainly going to keep reading this series. Recommended, particularly if you like James White, or want to see why other people do.

The most important thing in the universe is not, it turns out, a single, objective truth. It's not a hospital whose ideals you love, that treats all comers. It's not a lover; it's not a job. It's not friends and teammates.

It's not even a child that rarely writes me back, and to be honest I probably earned that. I could have been there for her. I didn't know how to be there for anybody, though. Not even for me.

The most important thing in the universe, it turns out, is a complex of subjective and individual approximations. Of tries and fails. Of ideals, and things we do to try to get close to those ideals.

It's who we are when nobody is looking.

Followed by The Folded Sky.

Rating: 8 out of 10

25 December, 2025 03:05AM

December 23, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Remarkable

Remarkable tablet displaying my 2025 planner PDF.

My Remarkable tablet, displaying my 2025 planner.

During my PhD, on a sunny summer’s day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that due to the bright sunlight, I couldn’t see a damn thing.

In 2021 I decided to take the plunge and buy the Remarkable 2, which had been heavily advertised at the time. Over the next four or so years, I made good use of it to read papers; read drafts of my own papers and chapters; read a small number of technical books; use it as a daily planner; take meeting notes for work, PhD and later, personal matters.

I didn’t buy the Remarkable stylus or folio cover, instead opting for a (at the time, slightly cheaper) LAMY AL-star EMR, and a fantastic fabric sleeve cover from Emmerson Gray.

I installed a hack which let me use the Lamy’s button to activate an eraser and also added a bunch of other tweaks. I wouldn’t recommend that specific hack anymore as there are safer alternatives (personally untested, but e.g. https://github.com/isaacwisdom/RemarkableLamyEraser)

Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper but that experience comes with inky fingers, dried up nibs, and a growing pile of paper notebooks. The remarkable is very nearly as good without those drawbacks.

Cons: lower contrast than black on white paper and no built in illumination. It needs good light to read. Almost the opposite problem to the iPad! I’ve tried a limited number of external clip on lights but nothing is frictionless to use.

The traditional two-column, wide-margin formatting for academic papers is a bad fit for the Remarkable’s size (just as it is for computer display sizes; really, is it good for anything people use anymore?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that’s laborious.

The newer model, the Remarkable Paper Pro, might address both those issues: it’s bigger, has illumination, and adds colour, which would be nice to have. It’s also a lot more expensive.

I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.

23 December, 2025 10:58AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

AI and Secure Messaging Don't Mix

Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix.

The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer.

In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.

If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.

But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use.

Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?

What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?

My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!)

But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?

Don't we owe it to each other to engage with actual human attention?

23 December, 2025 05:00AM by Daniel Kahn Gillmor

December 22, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

NanoKVM: I like it

I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write-up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I’m coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32 bit binary JNI blob). It was a 64 bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network, if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode; more forwarding, or enabling HTTPS, is needed for an H.264 WebRTC session).
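
For reference, the forward in question looks something like this (the host names are from my own setup, not anything the NanoKVM mandates):

# forward local port 8000, via the Pi, to port 80 on the NanoKVM
ssh -L 8000:nanokvm.lan:80 pi@raspberrypi.lan
# then browse to http://localhost:8000/ and select the MJPEG video mode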

IPv6 is enabled in the kernel. My test setup doesn’t have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That’s fairly recent, though the GitHub releases page lists 2.3.0 as the latest.

Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.
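
If you want to see exactly what certificate it generated, openssl from any machine that can reach the device will show the details (substitute the device’s address for the placeholder here):

openssl s_client -connect nanokvm.lan:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates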

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org services in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).

My assumption is the lookup to cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I’ve not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t.
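
As a rough sketch of what I mean (untested on the device, and assuming a 192.168.0.0/24 management LAN):

# keep replies to established sessions, and anything aimed at the local LAN
iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -d 192.168.0.0/24 -j ACCEPT
# then default deny everything else leaving the device
iptables -P OUTPUT DROP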

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
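
Getting an image onto the device is a simple copy once SSH is enabled (the image name, user and address here are examples):

scp debian-trixie-amd64-netinst.iso <user>@nanokvm.lan:/data/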

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there’s no masquerading rules setup, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.

22 December, 2025 05:38PM

Russell Coker

Samsung 65″ QN900C 8K TV

As a follow up from my last post about my 8K TV [1] I tested out a Samsung 65″ QN900C Neo QLED 8K that’s on sale in JB Hifi. According to the JB employee I spoke to they are running out the last 8K TVs and have no plans to get more.

In my testing of that 8K TV YouTube had a 3840*2160 viewport which is better than the 1920*1080 of my Hisense TV. When running a web browser the codeshack page reported it as 1920*1080 with a 1.25* pixel density (presumably a configuration option) that gave a usable resolution of 1536*749.

The JB Hifi employee wouldn’t let me connect my own device via HDMI but said that it would work at 8K. I said “so if I buy it I can return it if it doesn’t do 8K HDMI?” and then he looked up the specs and found that it would only do 4K input on HDMI. It seems that actual 8K resolution might work on a Samsung streaming device but that’s not very useful particularly as there probably isn’t much 8K content on any streaming service.

Basically that Samsung allegedly 8K TV only works at 4K at best.

It seems to be impossible to buy an 8K TV or monitor in Australia that will actually display 8K content. ASUS has a 6K 32″ monitor with 6016*3384 resolution for $2016 [2]. When accounting for inflation, $2016 wouldn’t be the most expensive monitor I’ve ever bought, and hopefully prices will continue to drop.

Rumour has it that there are 8K TVs available in China that actually take 8K input. Getting one to Australia might not be easy but it’s something that I will investigate.

Also I’m trying to sell my allegedly 8K TV.

22 December, 2025 07:52AM by etbe

François Marier

LXC setup on Debian forky

Similar to what I wrote for Ubuntu 18.04, here is how to set up an LXC container on Debian forky.

Installing the required packages

Start by installing the necessary packages on the host:

apt install lxc libvirt-clients debootstrap

Network setup

Ensure the veth kernel module is loaded by adding the following to /etc/modules-load.d/lxc-local.conf:

veth

and then loading it manually for now:

modprobe veth

Enable IPv4 forwarding by putting this in /etc/sysctl.d/lxc-local.conf:

net.ipv4.ip_forward=1

and applying it:

sysctl -p /etc/sysctl.d/lxc-local.conf

Restart the LXC network bridge:

systemctl restart lxc-net.service

Ensure that container traffic is not blocked by the host firewall, for example by adding the following to /etc/network/iptables.up.rules:

-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and applying the rules:

iptables-apply

Creating a container

To see all available images, run:

lxc-create -n foo --template=download -- --list

and then create a Debian forky container using:

lxc-create -n forky -t download -- -d debian -r forky -a amd64

Start and stop the container like this:

lxc-start -n forky
lxc-stop -n forky

Connecting to the container

Attach to the running container's console:

lxc-attach -n forky

Inside the container, you can change the root password by typing:

passwd

and install some essential packages:

apt install openssh-server vim

To find the container's IP address (for example, so that you can ssh to it from the host):

lxc-ls --fancy

22 December, 2025 02:47AM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

I’m learning about perlguts today.



## 0.23	2025-12-20

commit be15aa25dea40aea66a8534143fb81b29d2e6c08
Author: C.J. Collier 
Date:   Sat Dec 20 22:40:44 2025 +0000

    Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions.
    
    - **Makefile.PL:**
        - Allow `extra_src` in `c_test_config.json` to be an array.
        - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging.
        - Corrected echo newlines in `test_c` target.
    - **c_test_config.json:**
        - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners.
    - **t/c/convert/upb_to_sv.c:**
        - Fixed a double free of `test_pool`.
        - Added missing includes for type test headers.
        - Updated test plan counts.
    - **t/c/convert/sv_to_upb.c:**
        - Added missing includes for type test headers.
        - Updated test plan counts.
        - Corrected Perl interpreter initialization.
    - **t/c/convert/types/**:
        - Added missing `test_util.h` include in new type test headers.
        - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files.
    - **Documentation:**
        - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos.
        - Updated `xs_learnings.md` with details from the recent segfault.
        - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps.


## 0.22	2025-12-19

commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b
Author: C.J. Collier 
Date:   Fri Dec 19 23:41:02 2025 +0000

    feat(perl,testing): Initialize C test framework and build system
    
    This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module:
    
    1.  **Makefile.PL Enhancements:**
        *   Integrates `Devel::PPPort` to generate `ppport.h` for better portability.
        *   Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity.
        *   The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file.
        *   C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags.
        *   Added `JSON::MaybeXS` to `PREREQ_PM`.
        *   The `test` target now also depends on the `test_c` target.
    
    2.  **C Test Infrastructure (`t/c/`):**
        *   Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files.
        *   Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors.
        *   Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners.
        *   Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`.
        *   Basic `t/c/integration/035_croak_test.c` for testing exception handling.
        *   Basic `t/c/integration/050_convert.c` for integration testing conversions.
    
    3.  **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`.
    
    4.  **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching.
    
    5.  **Documentation:** Updated architecture and plan documents to reflect the C test structure.
    6.  **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API.
    
    This provides a solid base for developing and testing the XS and C components of the module.


## 0.21	2025-12-18

commit a8b6b6100b2cf29c6df1358adddb291537d979bc
Author: C.J. Collier 
Date:   Thu Dec 18 04:20:47 2025 +0000

    test(C): Add integration tests for Milestone 2 components
    
    - Created t/c/integration/030_protobuf.c to test interactions
      between obj_cache, arena, and utils.
    - Added this test to t/c/c_test_config.json.
    - Verified that all C tests for Milestones 2 and 3 pass,
      including the libcoro-based stress test.


## 0.20	2025-12-18

commit 0fcad68680b1f700a83972a7c1c48bf3a6958695
Author: C.J. Collier 
Date:   Thu Dec 18 04:14:04 2025 +0000

    docs(plan): Add guideline review reminders to milestones
    
    - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**"
      checklist item to the start of each component implementation
      milestone (C and Perl layers).
    - This excludes Integration Test milestones.


## 0.19	2025-12-18

commit 987126c4b09fcdf06967a98fa3adb63d7de59a34
Author: C.J. Collier 
Date:   Thu Dec 18 04:05:53 2025 +0000

    docs(plan): Add C-level and Perl-level Coro tests to milestones
    
    - Added checklist items for `libcoro`-based C tests
      (e.g., `t/c/integration/050_convert_coro.c`) to all C layer
      integration milestones (050 through 220).
    - Updated `030_Integration_Protobuf.md` to standardise checklist
      items for the existing `030_protobuf_coro.c` test.
    - Removed the single `xt/author/coro-safe.t` item from
      `010_Build.md`.
    - Added checklist items for Perl-level `Coro` tests
      (e.g., `xt/coro/240_arena.t`) to each Perl layer
      integration milestone (240 through 400).
    - Created `perl/t/c/c_test_config.json` to manage C test
      configurations externally.
    - Updated `perl/doc/architecture/testing/01-xs-testing.md` to describe
      both C-level `libcoro` and Perl-level `Coro` testing strategies.


## 0.18	2025-12-18

commit 6095a5a610401a6035a81429d0ccb9884d53687b
Author: C.J. Collier 
Date:   Thu Dec 18 02:34:31 2025 +0000

    added coro testing to c layer milestones


## 0.17	2025-12-18

commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce
Author: C.J. Collier 
Date:   Thu Dec 18 02:26:59 2025 +0000

    docs(plan): Refine test coverage checklist items for SMARTness
    
    - Updated the "Tests provide full coverage" checklist items in
      C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200)
      to explicitly mention testing all public functions in the
      corresponding header files.
    - Expanded placeholder checklists in 140, 160, 180, 200.
    - Updated the "Tests provide full coverage" and "Add coverage checks"
      checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330,
      350, 370, 390) to be more specific about the scope of testing
      and the use of `Test::TestCoverage`.
    - Expanded Well-Known Types milestone (350) to detail each type.


## 0.16	2025-12-18

commit e4b601f14e3817a17b0f4a38698d981dd4cb2818
Author: C.J. Collier 
Date:   Thu Dec 18 02:07:35 2025 +0000

    docs(plan): Full refactoring of C and Perl plan files
    
    - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into
      per-milestone files under the `perl/doc/plan/` directory.
    - Introduced Integration Test milestones after each component
      milestone in both C and Perl plans.
    - Numbered milestone files sequentially (e.g., 010_Build.md,
      230_Perl_Arena.md).
    - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to
      act as Tables of Contents.
    - Ensured consistent naming for integration test files
      (e.g., `t/c/integration/030_protobuf.c`, `t/integration/260_descriptor_pool.t`).
    - Added architecture review steps to the end of all milestones.
    - Moved Coro safety test to C layer Milestone 1.
    - Updated Makefile.PL to support new test structure and added Coro.
    - Moved and split t/c/convert.c into t/c/convert/*.c.
    - Moved other t/c/*.c tests into t/c/protobuf/*.c.
    - Deleted old t/c/convert.c.


## 0.15	2025-12-17

commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce
Author: C.J. Collier 
Date:   Wed Dec 17 23:51:22 2025 +0000

    docs(plan): Refactor and reset ProtobufPlan.md
    
    - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md.
    - Reorganized milestones to clearly separate C layer and Perl layer development.
    - Added more granular checkboxes for each component:
      - C Layer: Create test, Test coverage, Implement, Tests pass.
      - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments.
    - Reset all checkboxes to `[ ]` to prepare for a full audit.
    - Updated status in architecture/api and architecture/core documents to "Not Started".
    
    feat(obj_cache): Add unregister function and enhance tests
    
    - Added `protobuf_unregister_object` to `xs/protobuf/obj_cache.c`.
    - Updated `xs/protobuf/obj_cache.h` with the new function declaration.
    - Expanded tests in `t/c/protobuf_obj_cache.c` to cover unregistering,
      overwriting keys, and unregistering non-existent keys.
    - Corrected the test plan count in `t/c/protobuf_obj_cache.c` to 17.


## 0.14	2025-12-17

commit 40b6ad14ca32cf16958d490bb575962f88d868a1
Author: C.J. Collier 
Date:   Wed Dec 17 23:18:27 2025 +0000

    feat(arena): Complete C layer for Arena wrapper
    
    This commit finalizes the C-level implementation for the Protobuf::Arena wrapper.
    
    - Adds `PerlUpb_Arena_Destroy` for proper cleanup from Perl's DEMOLISH.
    - Enhances error checking in `PerlUpb_Arena_Get`.
    - Expands C-level tests in `t/c/protobuf_arena.c` to cover memory allocation
      on the arena and lifecycle through `PerlUpb_Arena_Destroy`.
    - Corrects embedded Perl initialization in the C test.
    
    docs(plan): Refactor ProtobufPlan.md
    
    - Restructures the development plan to clearly separate "C Layer" and
      "Perl Layer" tasks within each milestone.
    - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking.


## 0.13	2025-12-17

commit c1e566c25f62d0ae9f195a6df43b895682652c71
Author: C.J. Collier 
Date:   Wed Dec 17 22:00:40 2025 +0000

    refactor(perl): Rename C tests and enhance Makefile.PL
    
    - Renamed test files in `t/c/` to better match the `xs` module structure:
        - `01-cache.c` -> `protobuf_obj_cache.c`
        - `02-arena.c` -> `protobuf_arena.c`
        - `03-utils.c` -> `protobuf_utils.c`
        - `04-convert.c` -> `convert.c`
        - `load_test.c` -> `upb_descriptor_load.c`
    - Updated `perl/Makefile.PL` to reflect the new test names in `MY::postamble`'s `$c_test_config`.
    - Refactored the `$c_test_config` generation in `Makefile.PL` to reduce repetition by using a default flags hash and common dependencies array.
    - Added a `fail()` macro to `perl/t/c/upb-perl-test.h` for consistency.
    - Modified `t/c/upb_descriptor_load.c` to use the `t/c/upb-perl-test.h` macros, making its output consistent with other C tests.
    - Added a skeleton for `t/c/convert.c` to test the conversion functions.
    - Updated documentation in `ProtobufPlan.md` and `architecture/testing/01-xs-testing.md` to reflect new test names.


## 0.12	2025-12-17

commit d8cb5dd415c6c129e71cd452f78e29de398a82c9
Author: C.J. Collier 
Date:   Wed Dec 17 20:47:38 2025 +0000

    feat(perl): Refactor XS code into subdirectories
    
    This commit reorganizes the C code in the `perl/xs/` directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability.
    
    - Created subdirectories for each major component: `convert`, `descriptor`, `descriptor_containers`, `descriptor_pool`, `extension_dict`, `map`, `message`, `protobuf`, `repeated`, and `unknown_fields`.
    - Created skeleton `.h` and `.c` files within each subdirectory to house the component-specific logic.
    - Updated top-level component headers (e.g., `perl/xs/descriptor.h`) to include the new sub-headers.
    - Updated top-level component source files (e.g., `perl/xs/descriptor.c`) to include their main header and added stub initialization functions (e.g., `PerlUpb_InitDescriptor`).
    - Moved code from the original `perl/xs/protobuf.c` to new files in `perl/xs/protobuf/` (arena, obj_cache, utils).
    - Moved code from the original `perl/xs/convert.c` to new files in `perl/xs/convert/` (upb_to_sv, sv_to_upb).
    - Updated `perl/Makefile.PL` to use a glob (`xs/*/*.c`) to find the new C source files in the subdirectories.
    - Added `perl/doc/architecture/core/07-xs-file-organization.md` to document the new structure.
    - Updated `perl/doc/ProtobufPlan.md` and other architecture documents to reference the new organization.
    - Corrected self-referential includes in the newly created .c files.
    
    This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation.


## 0.11	2025-12-17

commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc
Author: C.J. Collier 
Date:   Wed Dec 17 19:57:52 2025 +0000

    feat(perl): Implement C-first testing and core XS infrastructure
    
    This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation.
    
    Key changes include:
    
    -   **C-Level Testing Framework:** Established a C-level testing system in `t/c/` with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache (`01-cache.c`), arena wrapper (`02-arena.c`), and utility functions (`03-utils.c`).
    -   **Core XS Infrastructure:**
        -   Implemented a global object cache (`xs/protobuf.c`) to manage Perl wrappers for UPB objects, using weak references.
        -   Created an `upb_Arena` wrapper (`xs/protobuf.c`).
        -   Consolidated common XS helper functions into `xs/protobuf.h` and `xs/protobuf.c`.
    -   **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from `ExtUtils::Embed`, and handling both `.c` and `.cc` source files.
    -   **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g., `message.c`, `descriptor.c`). Removed older, monolithic `.xs` files.
    -   **Typemap Expansion:** Added extensive typemap entries in `perl/typemap` to handle conversions between Perl objects and various `const upb_*Def*` pointers.
    -   **Descriptor Tests:** Added a new test suite `t/02-descriptor.t` to validate descriptor loading and accessor methods.
    -   **Documentation:** Updated development plans and guidelines (`ProtobufPlan.md`, `xs_learnings.md`, etc.) to reflect the C-first strategy, new testing methods, and lessons learned.
    -   **Build Cleanup:** Removed `ppport.h` from `.gitignore` as it's no longer used, due to `-DPERL_NO_PPPORT` being set in `Makefile.PL`.
    
    This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it.


## 0.10	2025-12-17

commit 1ef20ade24603573905cb0376670945f1ab5d829
Author: C.J. Collier 
Date:   Wed Dec 17 07:08:29 2025 +0000

    feat(perl): Implement C-level tests and core XS utils
    
    This commit introduces a C-level testing framework for the XS layer and implements key components:
    
    1.  **C-Level Tests (`t/c/`)**:
        *   Added `t/c/Makefile` to build standalone C tests.
        *   Created `t/c/upb-perl-test.h` with macros for TAP-compliant C tests (`plan`, `ok`, `is`, `is_string`, `diag`).
        *   Implemented `t/c/01-cache.c` to test the object cache.
        *   Implemented `t/c/02-arena.c` to test `Protobuf::Arena` wrappers.
        *   Implemented `t/c/03-utils.c` to test string utility functions.
        *   Corrected include paths and diagnostic messages in C tests.
    
    2.  **XS Object Cache (`xs/protobuf.c`)**:
        *   Switched to using stringified pointers (`%p`) as hash keys for stability.
        *   Fixed a critical double-free bug in `PerlUpb_ObjCache_Delete` by removing an extra `SvREFCNT_dec` on the lookup key.
    
    3.  **XS Arena Wrapper (`xs/protobuf.c`)**:
        *   Corrected `PerlUpb_Arena_New` to use `newSVrv` and `PTR2IV` for opaque object wrapping.
        *   Corrected `PerlUpb_Arena_Get` to safely unwrap the arena pointer.
    
    4.  **Makefile.PL (`perl/Makefile.PL`)**:
        *   Added `-Ixs` to `INC` to allow C tests to find `t/c/upb-perl-test.h` and `xs/protobuf.h`.
        *   Added `LIBS` to link `libprotobuf_common.a` into the main `Protobuf.so`.
        *   Added C test targets `01-cache`, `02-arena`, `03-utils` to the test config in `MY::postamble`.
    
    5.  **Protobuf.pm (`perl/lib/Protobuf.pm`)**:
        *   Added `use XSLoader;` to load the compiled XS code.
    
    6.  **New files `xs/util.h`**:
        *   Added initial type conversion function.
    
    These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation.


## 0.09	2025-12-17

commit 07d61652b032b32790ca2d3848243f9d75ea98f4
Author: C.J. Collier 
Date:   Wed Dec 17 04:53:34 2025 +0000

    feat(perl): Build system and C cache test for Perl XS
    
    This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache.
    
    -   **Makefile.PL:**
        -   Refactored C test compilation rules in `MY::postamble` to use a hash (`$c_test_config`) for better organization and test-specific flags.
        -   Integrated `ExtUtils::Embed` to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the `t/c/01-cache.c` test.
        -   Correctly constructs the path to the versioned Perl library (`libperl.so.X.Y.Z`) using `$Config{archlib}` and `$Config{libperl}` to ensure portability.
        -   Removed `VERSION_FROM` and `ABSTRACT_FROM` to avoid dependency on `.pm` files for now.
    
    -   **C Cache Test (t/c/01-cache.c):**
        -   Added a C test to exercise the object cache functions implemented in `xs/protobuf.c`.
        -   Includes tests for adding, getting, deleting, and weak reference behavior.
    
    -   **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):**
        -   Implemented `PerlUpb_ObjCache_Init`, `PerlUpb_ObjCache_Add`, `PerlUpb_ObjCache_Get`, `PerlUpb_ObjCache_Delete`, and `PerlUpb_ObjCache_Destroy`.
        -   Uses a Perl hash (`HV*`) for the cache.
        -   Keys are string representations of the C pointers, created using `snprintf` with `"%llx"`.
        -   Values are weak references (`sv_rvweaken`) to the Perl objects (`SV*`).
        -   `PerlUpb_ObjCache_Get` now correctly returns an incremented reference to the original SV, not a copy.
        -   `PerlUpb_ObjCache_Destroy` now clears the hash before decrementing its refcount.
    
    -   **t/c/upb-perl-test.h:**
        -   Updated `is_sv` to perform direct pointer comparison (`got == expected`).
    
    -   **Minor:** Added `util.h` (currently empty), updated `typemap`.
    
    These changes establish a working C-level test environment for the XS components.


## 0.08	2025-12-17

commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e
Author: C.J. Collier 
Date:   Wed Dec 17 02:57:48 2025 +0000

    feat(perl): Update docs and core XS files
    
    - Explicitly add TDD cycle to ProtobufPlan.md.
    - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers.
    - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files.
    - Create initial C test for the object cache in t/c/01-cache.c.


## 0.07	2025-12-17

commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0
Author: C.J. Collier 
Date:   Wed Dec 17 01:09:18 2025 +0000

    feat(perl): Align Perl UPB architecture docs with Python
    
    Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension.
    
    Key changes:
    
    -   **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's `PyUpb_ObjCache`.
    -   **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's `descriptor_containers.c`.
    -   **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity.


## 0.06	2025-12-17

commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae
Author: C.J. Collier 
Date:   Wed Dec 17 00:28:20 2025 +0000

    feat(perl): Implement object caching and fix build
    
    This commit introduces several key improvements to the Perl XS build system and core functionality:
    
    1.  **Object Caching:**
        *   Introduces `xs/protobuf.c` and `xs/protobuf.h` to implement a caching mechanism (`protobuf_c_to_perl_obj`) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks.
        *   Updates the `typemap` to use `protobuf_c_to_perl_obj` for `upb_MessageDef *` output, ensuring descriptor objects are cached.
        *   Corrected `sv_weaken` to the correct `sv_rvweaken` function.
    
    2.  **Makefile.PL Enhancements:**
        *   Switched to using the Bazel-generated UPB descriptor sources from `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
        *   Updated `INC` paths to correctly locate the generated headers.
        *   Refactored `MY::dynamic_lib` to ensure the static library `libprotobuf_common.a` is correctly linked into each generated `.so` module, resolving undefined symbol errors.
        *   Overrode `MY::test` to use `prove -b -j$(nproc) t/*.t xt/*.t` for running tests.
        *   Cleaned up `LIBS` and `LDDLFLAGS` usage.
    
    3.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect the current status and design decisions.
        *   Reorganized architecture documents into subdirectories.
        *   Added `object-caching.md` and `c-perl-interface.md`.
        *   Updated `llm-guidance.md` with notes on `upb/upb.h` and `sv_rvweaken`.
    
    4.  **Testing:**
        *   Fixed `xt/03-moo_immutable.t` to skip tests if no Moo modules are found.
    
    This resolves the build issues and makes the core test suite pass.


## 0.05	2025-12-16

commit 177d2f3b2608b9d9c415994e076a77d8560423b8
Author: C.J. Collier 
Date:   Tue Dec 16 19:51:36 2025 +0000

    Refactor: Rename namespace to Protobuf, build system and doc updates
    
    This commit refactors the primary namespace from `ProtoBuf` to `Protobuf`
    to align with the style guide. This involves renaming files, directories,
    and updating package names within all Perl and XS files.
    
    **Namespace Changes:**
    
    *   Renamed `perl/lib/ProtoBuf` to `perl/lib/Protobuf`.
    *   Moved and updated `ProtoBuf.pm` to `Protobuf.pm`.
    *   Moved and updated `ProtoBuf::Descriptor` to `Protobuf::Descriptor` (.pm & .xs).
    *   Removed other `ProtoBuf::*` stubs (Arena, DescriptorPool, Message).
    *   Updated `MODULE` and `PACKAGE` in `Descriptor.xs`.
    *   Updated `NAME`, `*_FROM` in `perl/Makefile.PL`.
    *   Replaced `ProtoBuf` with `Protobuf` throughout `perl/typemap`.
    *   Updated namespaces in test files `t/01-load-protobuf-descriptor.t` and `t/02-descriptor.t`.
    *   Updated namespaces in all documentation files under `perl/doc/`.
    *   Updated paths in `perl/.gitignore`.
    
    **Build System Enhancements (Makefile.PL):**
    
    *   Included `xs/*.c` files in the common object files list.
    *   Added `-I.` to the `INC` paths.
    *   Switched from `MYEXTLIB` to `LIBS => ['-L$(CURDIR) -lprotobuf_common']` for linking.
    *   Removed custom keys passed to `WriteMakefile` for postamble.
    *   `MY::postamble` now sources variables directly from the main script scope.
    *   Added `all :: ${common_lib}` dependency in `MY::postamble`.
    *   Added `t/c/load_test.c` compilation rule in `MY::postamble`.
    *   Updated `clean` target to include `blib`.
    *   Added more modules to `TEST_REQUIRES`.
    *   Removed the explicit `PM` and `XS` keys from `WriteMakefile`, relying on `XSMULTI => 1`.
    
    **New Files:**
    
    *   `perl/lib/Protobuf.pm`
    *   `perl/lib/Protobuf/Descriptor.pm`
    *   `perl/lib/Protobuf/Descriptor.xs`
    *   `perl/t/01-load-protobuf-descriptor.t`
    *   `perl/t/02-descriptor.t`
    *   `perl/t/c/load_test.c`: Standalone C test for UPB.
    *   `perl/xs/types.c` & `perl/xs/types.h`: For Perl/C type conversions.
    *   `perl/doc/architecture/upb-interfacing.md`
    *   `perl/xt/03-moo_immutable.t`: Test for Moo immutability.
    
    **Deletions:**
    
    *   Old test files: `t/00_load.t`, `t/01_basic.t`, `t/02_serialize.t`, `t/03_message.t`, `t/04_descriptor_pool.t`, `t/05_arena.t`, `t/05_message.t`.
    *   Removed `lib/ProtoBuf.xs` as it's not needed with `XSMULTI`.
    
    **Other:**
    
    *   Updated `test_descriptor.bin` (binary change).
    *   Significant content updates to markdown documentation files in `perl/doc/architecture` and `perl/doc/internal` reflecting the new architecture and learnings.


## 0.04	2025-12-14

commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee
Author: C.J. Collier 
Date:   Sun Dec 14 21:28:19 2025 +0000

    feat(perl): Implement Message object creation and fix lifecycles
    
    This commit introduces the basic structure for `ProtoBuf::Message` object
    creation, linking it with `ProtoBuf::Descriptor` and `ProtoBuf::DescriptorPool`,
    and crucially resolves a SEGV by fixing object lifecycle management.
    
    Key Changes:
    
    1.  **`ProtoBuf::Descriptor`:** Added `_pool` attribute to hold a strong
        reference to the parent `ProtoBuf::DescriptorPool`. This is essential to
        prevent the pool and its C `upb_DefPool` from being garbage collected
        while a descriptor is still in use.
    
    2.  **`ProtoBuf::DescriptorPool`:**
        *   `find_message_by_name`: Now passes the `$self` (the pool object) to the
            `ProtoBuf::Descriptor` constructor to establish the lifecycle link.
        *   XSUB `pb_dp_find_message_by_name`: Updated to accept the pool `SV*` and
            store it in the descriptor's `_pool` attribute.
        *   XSUB `_load_serialized_descriptor_set`: Renamed to avoid clashing with the
            Perl method name. The Perl wrapper now correctly calls this internal XSUB.
        *   `DEMOLISH`: Made safer by checking for attribute existence.
    
    3.  **`ProtoBuf::Message`:**
        *   Implemented using Moo with lazy builders for `_upb_arena` and
            `_upb_message`.
        *   `_descriptor` is a required argument to `new()`.
        *   XS functions added for creating the arena (`pb_msg_create_arena`) and
            the `upb_Message` (`pb_msg_create_upb_message`).
        *   `pb_msg_create_upb_message` now extracts the `upb_MessageDef*` from the
            descriptor and uses `upb_MessageDef_MiniTable()` to get the minitable
            for `upb_Message_New()`.
        *   `DEMOLISH`: Added to free the message's arena.
    
    4.  **`Makefile.PL`:**
        *   Added `-g` to `CCFLAGS` for debugging symbols.
        *   Added Perl CORE include path to `MY::postamble`'s `base_flags`.
    
    5.  **Tests:**
        *   `t/04_descriptor_pool.t`: Updated to check the structure of the
            returned `ProtoBuf::Descriptor`.
        *   `t/05_message.t`: Now uses a descriptor obtained from a real pool to
            test `ProtoBuf::Message->new()`.
    
    6.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect progress.
        *   Updated several files in `doc/architecture/` to match the current
            implementation details, especially regarding arena management and object
            lifecycles.
        *   Added `doc/internal/development_cycle.md` and `doc/internal/xs_learnings.md`.
    
    With these changes, the SEGV is resolved, and message objects can be successfully
    created from descriptors.


## 0.03	2025-12-14

commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8
Author: C.J. Collier 
Date:   Sun Dec 14 20:23:41 2025 +0000

    Refactor(perl): Object-Oriented DescriptorPool with Moo
    
    This commit refactors the `ProtoBuf::DescriptorPool` to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data.
    
    Key Changes:
    
    1.  **Moo Object:** `ProtoBuf::DescriptorPool.pm` now uses `Moo` to define the class. The `upb_DefPool` pointer is stored as a lazy attribute `_upb_defpool`.
    2.  **XS Lifecycle:** `DescriptorPool.xs` now has `pb_dp_create_pool` called by the Moo builder and `pb_dp_free_pool` called from `DEMOLISH` to manage the `upb_DefPool` lifecycle per object.
    3.  **Typemap:** The `perl/typemap` file has been significantly updated to handle the conversion between the `ProtoBuf::DescriptorPool` Perl object and the `upb_DefPool *` C pointer. This includes:
        *   Mapping `upb_DefPool *` to `T_PTR`.
        *   An `INPUT` section for `ProtoBuf::DescriptorPool` to extract the pointer from the object's hash, triggering the lazy builder if needed via `call_method`.
        *   An `OUTPUT` section for `upb_DefPool *` to convert the pointer back to a Perl integer, used by the builder.
    4.  **Method Renaming:** `add_file_descriptor_set_binary` is now `load_serialized_descriptor_set`.
    5.  **Test Data:**
        *   Added `perl/t/data/test.proto` with a sample message and enum.
        *   Generated `perl/t/data/test_descriptor.bin` using `protoc`.
        *   Removed `t/data/` from `.gitignore` to ensure test data is versioned.
    6.  **Test Update:** `t/04_descriptor_pool.t` is updated to use the new OO interface, load the generated descriptor set, and check for message definitions.
    7.  **Build Fixes:**
        *   Corrected `#include` paths in `DescriptorPool.xs` to be relative to the `upb/` directory (e.g., `upb/wire/decode.h`).
        *   Added `-I../upb` to `CCFLAGS` in `Makefile.PL`.
        *   Reordered `INC` paths in `Makefile.PL` to prioritize local headers.
    
    **Note:** While tests now pass in some environments, a SEGV issue persists in `make test` runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation.


## 0.02	2025-12-14

commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07
Author: C.J. Collier 
Date:   Sun Dec 14 20:13:09 2025 +0000

    Fix(perl): Correct UPB build integration and generated file handling
    
    This commit resolves several issues to achieve a successful build of the Perl extension:
    
    1.  **Use Bazel-Generated Files:** Switched from compiling UPB's stage0 `descriptor.upb.c` to using the Bazel-generated `descriptor.upb.c` and `descriptor.upb_minitable.c` located in `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
    2.  **Updated Include Paths:** Added the `bazel-bin` path to `INC` in `WriteMakefile` and to `base_flags` in `MY::postamble` to ensure the generated headers are found during both XS and static library compilation.
    3.  **Removed Stage0:** Removed references to `UPB_STAGE0_DIR` and no longer include headers or source files from `upb/reflection/stage0/`.
    4.  **-fPIC:** Explicitly added `-fPIC` to `CCFLAGS` in `WriteMakefile` and ensured `$(CCFLAGS)` is used in the custom compilation rules in `MY::postamble`. This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules.
    5.  **Refined UPB Sources:** Used `File::Find` to recursively find UPB C sources, excluding `/conformance/` and `/reflection/stage0/` to avoid conflicts and unnecessary compilation (see the sketch after the build steps).
    6.  **Arena Constructor:** Modified `ProtoBuf::Arena::pb_arena_new` XSUB to accept the class name argument passed from Perl, making it a proper constructor.
    7.  **.gitignore:** Added patterns to `perl/.gitignore` to ignore generated C files from XS (`lib/*.c`, `lib/ProtoBuf/*.c`), the copied `src_google_protobuf_descriptor.pb.cc`, and the `t/data` directory.
    8.  **Build Documentation:** Updated `perl/doc/architecture/upb-build-integration.md` to reflect the new build process, including the Bazel prerequisite, include paths, `-fPIC` usage, and `File::Find`.
    
    Build Steps:
    1.  `bazel build //src/google/protobuf:descriptor_upb_proto` (from repo root)
    2.  `cd perl`
    3.  `perl Makefile.PL`
    4.  `make`
    5.  `make test` (currently has expected failures because the test data is not yet implemented).
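
Item 5's source discovery might look something like this sketch; the `../upb` location is an assumption inferred from the `-I../upb` flag added in 0.03:

```perl
use File::Find;

# Recursively collect UPB C sources for libprotobuf_common.a, skipping
# the conformance tests and the stage0 bootstrap tree.
my @upb_sources;
find(
    sub {
        return unless -f && /\.c\z/;
        return if $File::Find::name =~ m{/conformance/|/reflection/stage0/};
        push @upb_sources, $File::Find::name;
    },
    '../upb',
);
```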


## 0.01	2025-12-14

commit 3e237e8a26442558c94075766e0d4456daaeb71d
Author: C.J. Collier 
Date:   Sun Dec 14 19:34:28 2025 +0000

    feat(perl): Initialize Perl extension scaffold and build system
    
    This commit introduces the `perl/` directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linter and formatter configurations, and a vendored `Devel::PPPort` for XS portability.
    
    Key components added:
    
    *   **`Makefile.PL`**: The core `ExtUtils::MakeMaker` build script (see the sketch at the end of this entry). It's configured to:
        *   Build a static library (`libprotobuf_common.a`) from UPB, UTF8_Range, and generated protobuf C/C++ sources.
        *   Utilize `XSMULTI => 1` to create separate shared objects for `ProtoBuf`, `ProtoBuf::Arena`, and `ProtoBuf::DescriptorPool`.
        *   Link each XS module against the common static library.
        *   Define custom compilation rules in `MY::postamble` to handle C vs. C++ flags and build the static library.
        *   Set up include paths for the project root, UPB, and other dependencies.
    
    *   **XS Stubs (`.xs` files)**:
        *   `lib/ProtoBuf.xs`: Placeholder for the main module's XS functions.
        *   `lib/ProtoBuf/Arena.xs`: XS interface for `upb_Arena` management.
        *   `lib/ProtoBuf/DescriptorPool.xs`: XS interface for `upb_DefPool` management.
    
    *   **Perl Module Stubs (`.pm` files)**:
        *   `lib/ProtoBuf.pm`: Main module, loads XS.
        *   `lib/ProtoBuf/Arena.pm`: Perl class for Arenas.
        *   `lib/ProtoBuf/DescriptorPool.pm`: Perl class for Descriptor Pools.
        *   `lib/ProtoBuf/Message.pm`: Base class for messages (TBD).
    
    *   **Configuration Files**:
        *   `.gitignore`: Ignores build artifacts, editor files, etc.
        *   `.perlcriticrc`: Configures Perl::Critic for static analysis.
        *   `.perltidyrc`: Configures perltidy for code formatting.
    
    *   **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions.
    
    *   **`typemap`**: Custom typemap for XS argument/result conversion.
    
    *   **Documentation (`doc/`)**: Initial architecture and plan documents.
    
    This provides a solid foundation for developing the UPB-based Perl extension.
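
As a rough sketch of the shape of that `Makefile.PL` (the values here are illustrative, not a copy of the real file):

```perl
use ExtUtils::MakeMaker;
use Config;

WriteMakefile(
    NAME     => 'ProtoBuf',
    VERSION  => '0.01',
    # XSMULTI builds one shared object per .xs file, so ProtoBuf,
    # ProtoBuf::Arena, and ProtoBuf::DescriptorPool each get their own.
    XSMULTI  => 1,
    # Link every XS module against the common static library.
    MYEXTLIB => 'libprotobuf_common.a',
    CCFLAGS  => "$Config{ccflags} -fPIC",
    INC      => '-I. -I../upb',
);

# Custom rules (elided) that compile the UPB, UTF8_Range, and generated
# protobuf sources into libprotobuf_common.a with the right C/C++ flags.
sub MY::postamble { return ''; }
```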


22 December, 2025 01:32AM by C.J. Collier