Andy Wingo

@wingo

soot, solar, sedimentation, sin, & 'centers

Good evening, friends. Tonight I have a few loosely-knit stories.

soot

A couple years ago, my house was heated by a condensing gas boiler. It was awful from both an environmental and a geopolitical perspective: environmental, as I would emit somewhere around 2.5 tons of CO2 equivalent per year to heat my home, which compares poorly to the target total CO2e emissions of 2 tons per year per person; and geopolitical, because although France gets 40% of its gas from Norway, with whom we have no beef, all the rest is a problem in some way. (Algeria, 10%, is the least of my worries; the 20% for Russia and the US respectively are the most, followed by 10% for the Gulf states.)

Still, natural gas is better than fuel oil, which we had at my former rental house. It is a lamentably visceral experience to call up the fuel provider and say, yes, s’il vous plaît, can you drive a diesel-powered tanker truck out to my house, unroll the hose, and pour out 1500 liters of toxic fuel oil into a tank under my garden. Yes, I will just burn it all. Sure, see you again next year.

Some friends of mine recently had their fuel boiler die, which is itself an experience: one of them came over to visit, completely covered in soot, saying that the chimneysweep (whom he also has to call every year) said that his boiler is on its way out, that the chimney is completely clogged, and now because of the cleaning his basement is also covered in soot; awful. What to replace it with? Apparently despite the prohibition on new fuel-oil boiler installs, it might be possible to just install a new one; or they could hook up to natural gas from the street; or they could install a heat pump. Which to do?

To all these questions there is a moral answer, which we can phrase in terms of CO2 emissions and localized PM2.5 pollution, and it is always and everywhere to stop burning things. But fortunately we don’t need to rely only on moralism: electrification is just better, in essentially all ways. Owning and operating an electric car is a better experience than a petrol car. Induction stoves are better than gas; I know, I did not believe this for the longest time, but I was wrong. The experience of using a heat pump is pretty much equivalent to gas, so it’s a harder sell, but it is a relief to no longer have a pressurized methane tube connected to my house.

In the end, I think my neighbors are going to go for the heat pump, despite the 20k€ price tag, labor included. (Oddly, I think the deciding factor was that my neighbor confessed to having had a long chat with an AI chatbot, after which she felt she had a good understanding of the proposed solution and its tradeoffs; make of that what you will!)

solar

In late November I got some brave lads to install nineteen solar panels on my roof. Each of these magic rectangles can make up to 500W of power in optimal conditions, but my house faces south, with the roof inclined east and west, so it’s unlikely that I will ever hit the full 9.5 kW of potential power.

graph of consumed and produced electricity in december, showing a more or less constant 2 kW use, with a tiny hump of production in midday, peaking at, like, 150W

December was... very dark. The panels produced a total of 145 kWh over the month, but I used 1250 kWh of electricity, essentially all to run the heat pump. I live in a basin that is mostly covered by low clouds from November to February, and slanty photons couldn’t make much headway through the fog. The house is well-insulated (20-25 cm of wood-fiber exterior insulation on sides, 40 under the roof, though it is an old house with a few less-insulated bits), so it’s not that I am leaking lots of heat, and I have a combination of low-temperature floor heating and low-temperature radiators, so it’s not that I’m running the heat pump inefficiently to generate a too-high output temperature; it’s just, you know, cold in winter. A typical day would be between 1 and 5 degrees C. Cold; cold and dark.

Things got a little better in January: 285 kWh produced, though the heating needs are higher than in December, with 1450 kWh total consumed. In February we grew to 419 kWh produced, for 850 kWh consumed. In March we equalized, with about 850 kWh produced and consumed, but although the bulk of my consumption in this month is for heating, the “need” to heat overnight meant that I consume from the grid overnight, but feed in to the grid during the day. I have a small battery (7 kWh), but it’s not enough to store the “excess” electricity generated in a day; I should probably arrange to have the system heat only during the day in these months, to avoid taking from the grid.

graph of consumed and produced electricity in may, showing the battery never emptying, little power use, and a huge hump of production peaking at 7 kW

With practically no heating needs now, as you can imagine, I am just feeding a lot of excess to the grid. We’re halfway through May, just coming through a cold snap (the peasant lore is that we just passed the saints de glace, the date you need to wait for to plant crops that aren’t frost-hardy), but still we’ve produced more than twice as much as we’ve consumed (550 kWh vs 220 kWh), and essentially all the excess goes to the grid. The 7 kWh battery is quite enough to cover night-time electricity needs.

I didn’t know before, but often a solar panel installation doesn’t work when the grid is down. This is because the inverters that convert the DC from the panels to AC for the house need to match phase with the grid, and if the grid’s phase signal is down, they stop. It’s also for safety, so that line workers can repair downed lines without worrying that every house is a live wire. I spent a little extra to install a cutout that allows the house to run in “island mode” if the grid is down. We almost never have that situation here, but since we were going all-in on electricity, it seemed prudent to take precautions.

When you buy a solar installation, you can either have little DC/AC inverters attached to the back of each panel (microinverters), or feed DC from all panels wired in series (they call them strings; there may be 2 or 3 of them in a home setup) to a central inverter. I have the latter. The panels happen to be assembled locally by MaviWatt, though surely the cells themselves are from China. My panels are installed on top of the ceramic roof tiles with little clips and an aluminum structure. (It used to be that sometimes panels would replace tiles and become the roof. That’s not done so much any more here.) Installation is, like, 60% of the price of solar. Often you need scaffolding, though my installers just used ladders; perhaps living in the mountains where I am, there are more people used to doing ropes and rock-climbing and such. I don’t think they took as much care of themselves as they should, though.

My inverter is made by Huawei (SUN2000), as are my battery and the cutout (“backup”) box. Some batteries have their own microinverter, allowing them to consume and produce AC, but this one is DC, hence the need to have the same brand as the inverter. It sends all my electricity usage data to China or something, so that it can send it to the app on my phone. It’s not ideal from a geopolitical perspective but it is good kit.

sedimentation

Although we haven’t hit the height of summer yet, I would like to offer a few observations that have precipitated out of solution.

Firstly, at least in my house, the baseline load without heating is pretty low: 200 or 300 watts or so. (I didn’t know this before looking at Huawei’s app.) We have a recently renovated, not tiny, but otherwise normal sort of house with, you know, the usual lot of modern conveniences, idle chargers plugged in here and there, and also my work computers and such, and it all runs on less than a handful of the old 60W bulbs. That’s interesting.

As far as actual load, there are only a few things that count: heating, when it’s cold; it can easily average 2 kW on a cold day. Plug in the electric car (I don’t have a wall box yet, just the mains plug), and that’s another kilowatt. I hardly drive, though, so it’s not a huge load. Using hot water is perhaps the most surprising thing: it can cause a spike up to 6 kW over a short time, despite the heat coming from the heat pump; probably there is some tuning to do there. The oven and stove are little tiny blips. There’s the kettle, but it’s also a little blip. Nothing else matters: not the dishwasher, not the washing machine, nothing. You can leave the lights on all day and it just doesn’t matter.

Call me naïve, but I had hoped that solar would help my electricity usage in winter. This is simply not the case. Though the heat pump is efficient, there does not appear to be a magical energy solution for December, which is the bulk of my energy usage. My electricity bill is fixed-rate: 20 cents per kWh used. Using 4000 kWh or so from the grid over winter costs me 800€; annoying. I don’t have a natural before-and-after experiment as we added on to the house as we were renovating, but for context, in my previous poorly-insulated rental house that was half the size of this one, we’d pay 2000€ or so per year for heating oil. Perhaps I can lower the 800€ via variable-rate metering, to let the battery do some arbitrage, but there are some fundamental constraints that can’t be finagled away.

When I got my solar panels, I was resigned to never getting peak power, as they are on two different sections of the roof. It turns out that doesn’t matter: firstly, because 9.5 kW is a lot of power, as you can appreciate from the numbers above. I could never do anything with 9 kW. But secondly, because power isn’t equally valuable at different times of the day: by having east and west roof pitches, I can start producing earlier and continue producing later than if I had, say, a flat roof with panels tilted to the south. And the morning and the evening are the peak hours both for my house and for the grid, so that lets me consume more of my local production both when I need it and when the grid is under higher stress.

I was interested to hear that Alec Watson of Technology Connections had reservations about residential rooftop solar. I found a video in which he explains his perspective, which has a delightfully socialist character. His beef is partly due to the net metering scheme in some parts of the US, in which each kWh fed to the grid makes your meter run backwards; Watson finds it unfair, because it lets those wealthy households who have the capital to install solar opt out of paying for the grid, which is a social good. In some cases, these households actually capture a part of what consumers pay for the grid, unlike industrial producers who are paid wholesale rates that don’t include transmission. Also, he finds it less efficient overall to install solar panels on houses rather than in bigger solar parks; each euro that society allocates to solar would go farther if pooled together.

Both points are interesting, but I would offer a couple responses. Firstly, at least in Europe, net metering is not really a thing; we have smart meters and I hear from friends in Portugal that there can even be a charge for grid injection at some times, if the grid is overloaded. France’s case is a bit weirder; I wouldn’t have gotten as large a system as I did, but there was a government program to offer a fixed buyback rate of 7 cents per kWh, stable for 20 years, if you installed more than 9 kW of panels. But given the lack of solar in December, I still pay the grid when I need energy the most.

Putting solar panels on roofs is indeed less efficient than putting them on a field. But, we are not in a situation of scarce solar panels: China could make another 350 GW of panels this year if there were demand. An incentive like the 7-cent buyback rate encourages capital allocation to solar, effectively calling these panels into existence. The bank loans me 20k€ at 4%, and the elimination of 3000 kWh that I would have bought from the grid in a year plus the 9000 kWh that I sell to the grid covers the cost entirely, and I get a life insurance policy on the remaining principal. It’s not a great investment financially but it doesn’t cost me anything either.

sin

As a person with a conscience, I have always experienced questions of energy as questions of sin; to leave a light on is not simply inefficient but a moral failing. Each kilometer a car travels on fossil fuel carries with it a quantum of guilt and must be justified in some way, otherwise a moral stain attaches.

Solar panels and electrification change all this. 8 or 9 months out of the year, I live in a world of abundance: the electrical generation capacity that I have called into existence is free, clean, and much, much more than I need. Owning and operating a car still has externalities, but the emissions and cost aspects are entirely gone. It’s a funny feeling, and disorienting.

I grew up in the south of the US, where everyone has air conditioning. I came to see it as sinful, too; burning things and making emissions just so you could be a bit more comfortable. I haven’t lived in air conditioning since then, but it does get hot in summer, and I would be more comfortable if I could pump heat out of my house. Now I can. I have excess power available right when air conditioning (or, in my case, floor cooling) is needed. On a societal level, solar plus air conditioning is going to be a key part of keeping our cities liveable while we ride out higher temperatures.

‘centers

It is with a sense of dissonance, then, that I have been experiencing Datacenter Discourse™: there is a lingering language of sin proceeding from an environmentalism born in penury, in a world in which every kilowatt-hour is precious and scarce. If China has unallocated capacity for another 350 GW of panels this year, why stress about a few GW of datacenters?

Of course, there are many aspects to these AI datacenters, but today I am just thinking about energy. Given that each GW of datacenter places extra demand on a grid, equivalent to 3 million times my home’s baseline load, or maybe 300 thousand of its winter load, if society wants this kind of datacenter to be a thing, it needs to add that amount of clean energy to the grid, with adequate battery storage to even out supply. We should, as a society, require this via legislation, because the market seems only too happy to use natural gas or even coal if it is marginally cheaper. At least if the datacenter boom busts, we’d be left with more clean energy production.

Conversely... and I don’t think I’m going too far here, but causing new fossil generation to come online in 2026, or even prolonging the life of existing generation, should result in the state confiscating all property of those responsible. (I have moderated my previous position, which was hanging.) Such people are not fit to live in society, so society should not allow them to own things.

Anyway. I think that those of us that wish “AI” were not a thing are losing the battle, and that we should prepare to fall back to more defensible positions; otherwise we risk a rout. A requirement to bring additional clean capacity online in sufficient amounts should be a baseline ask when it comes to datacenters. We have the productive capacity in the form of solar panels, at an affordable price; we have more than enough space, in the form of the existing cropland that is inefficiently turned into ethanol to burn; batteries are a thing; we just lack the political will to turn what could be into what is.

And as for AI datacenters themselves: there are enough aspects to argue about as it is. We do ourselves a disservice by weighing down the Discourse with outdated ideas of what is and isn’t possible.

This Week in GNOME

@thisweek

#249 Quality Over Quantity

Update on what happened across the GNOME project in the week from May 8 to May 15.

GNOME Circle Apps and Libraries

Graphs

Plot and manipulate data

Sjoerd Stendahl says

This week we released Graphs 2.0.

It’s been about two years since the last major feature-update, and this is by far our biggest update yet. On a technical level, the code base has undergone a major overhaul. Importing logic is written in a more modular way, making it possible to add parsers for new data types, and we rewrote a large part of the code base from Python to Vala, which now makes up the majority of the code.

For the people who follow TWIG, some of this might sound familiar from the announcement of the beta, but we finally added support for some major long-requested changes. Most significantly, we finally have proper symbolic equation support, meaning equations now span an infinite range and can be manipulated analytically (e.g. taking the derivative of 6x² will change the equation to 12x, and the line will be re-rendered accordingly). Item and figure settings, such as when changing the scaling or limits, no longer block the view of the main canvas. The style editor has been redesigned with a live preview of the changes, we revamped the import dialog, and imported data now supports error bars. Equations with infinite values in them, such as y=tan(x), now also render properly, with values being drawn all the way to infinity and without a line going from plus to minus infinity. We’ve also added support for spreadsheet and SQLite database files, drag-and-drop importing, improved curve fitting with residuals and better confidence bands, and now have proper mobile support. Since the beta release we managed to squeeze in some improvements in the code base, and labels are now concatenated in a smart way based on the screen size, making Graphs more usable on mobile interfaces.

This release took a long time to get right, but we’re happy to get the new features to the public. Graphs is handcrafted by human hands, which takes more time than LLM-based slop. But the longer manual process does allow us to think through changes, and make intentional decisions with human care. I am very proud to say we are able to deliver something intentional, with the polish that both Graphs and its users deserve. As always, thanks to everyone involved: everyone who has provided feedback, reported issues, contributed code, or helped in any other way. And of course especially to Christoph Matthias Kohnen, who has been maintaining Graphs with me and is responsible for a large part of the architectural changes that made this release possible.

See a more complete list of changes here: https://blogs.gnome.org/sstendahl/2026/05/15/graphs-2-0-is-out/ And get the latest release on Flathub!

Third Party Projects

Anil reports

Codd is now available on Flathub!

Codd is a lightweight PostgreSQL client for GNOME, built with Rust, GTK4, libadwaita, Relm4, GtkSourceView, and sqlx. It focuses on a clean, native, and lightweight interface for working with PostgreSQL databases.

The initial release includes saved connections, SQL execution with syntax highlighting, result tables, query history, table browsing with pagination, filters and editable table cells.

More features are planned, and feedback from real-world PostgreSQL workflows would be very welcome.

Flathub: https://flathub.org/apps/io.github.anil_e.Codd Source: https://github.com/anil-e/codd

Haydn Trowell says

The latest version of Typesetter, the minimalist Typst editor, brings a bunch of improvements for package and template usage, most notably:

  • a GUI package manager for installing and removing custom Typst packages and templates;
  • a template selection popover in the header bar for creating new documents from built-in and user-installed templates;
  • and an initial set of built-in templates.

Flathub: https://flathub.org/apps/net.trowell.typesetter Source: https://codeberg.org/haydn/typesetter/

Alain says

Planify 4.19.2 is out! 🎉

This release brings several new features and fixes across the board.

On the backup side, Planify now supports automatic daily backups — backups trigger at midnight and can be copied to additional folders of your choice, including cloud-mounted directories.

CalDAV keeps getting better: sections are now fully supported via VTODO List prefix, syncing bidirectionally with Nextcloud Tasks, Thunderbird, and other clients. Horde server compatibility is also improved. Past dates can now be selected in the date picker, with dimmed styling and a handy Today pill button to jump back to the current month.

On the UI side, Planify now follows your GNOME system accent color, the Board view inbox section auto-hides when empty, and the multi-select label picker now correctly tracks only the changes you explicitly make.

Several bug fixes land too: calendar events now update correctly when kept open past midnight, task counts in Board view are accurate after drag and drop, and moving tasks between different sources (e.g. CalDAV → Local) now works correctly.

Get it on Flathub! 🚀

New to Planify? It’s a beautiful, open source task manager for Linux with Todoist and Nextcloud sync, built with GTK4 and libadwaita. Never worry about forgetting things again — give it a try!

Christian reports

🎉 Gitte 0.3.0 released!

Gitte 0.3.0 has been released, bringing full merge support, accessibility improvements, a new compact UI mode, and official macOS support.

The new merge workflow allows initiating, resolving, and completing merges directly from within the application. This release also adds a new compact UI mode, multi-selection support in the changed files list, and a release notes dialog with update notifications.

The diff viewer received major improvements as well: diffs in the log viewer can now be selected and copied and large diffs are handled more gracefully.

On the platform side, Gitte now supports macOS thanks to work by René de Hesselle. The app also received new GNOME-style icons by Jakub Steiner, expanded test coverage, CI integration, translation updates, and many internal refactorings and bug fixes.

Get it on Flathub, for macOS, or check the source code

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Sjoerd Stendahl

@sstendahl

Graphs 2.0 is out!

After two years of development, Graphs 2.0 is finally out!

This will be a shorter blog, as the changelist of new features has already been discussed in more detail in the previous post; you can check that out here if you’re interested: https://blogs.gnome.org/sstendahl/2026/04/14/announcing-the-upcoming-graphs-2-0/

For a quick overview, here is a reprise of the most interesting features in bullet-point format:

    • New Data Types with proper equations: In this new release, we finally have proper support for equations, where equations are manipulated analytically (e.g. a derivative on y = 12x² will result in y = 24x, and be rendered accordingly), limits are rendered infinitely, and equations can be changed after importing them. To accommodate this change, we now have three different data types: Equations, Imported Data, and Generated Data. Imported Data is regular data that you import from a file. Generated Data behaves the same as Imported Data, but you generate the dataset using an equation. For Generated Data you can also change the equation after the fact and re-render, or change the limits or the number of generated data points.
    • New Style Editor: We revamped the style editor. One major change is that you can now easily import new styles based on matplotlib-style themes, and you can also export your style and share it with others. If you have a nice style you want to share with us, open an issue on the GitLab page and let us know. We’re looking to expand the default choice of styles :). Furthermore, when editing a style, you finally see a preview of how this affects the canvas. This way you don’t need to guess, or go back and forth when fine-tuning a style.
    • UX changes: Whilst the general look-and-feel of Graphs is still mostly the same, we did make some UX changes in how we handle settings. Mainly, instead of showing a modal popup dialog, settings that affect the canvas itself (i.e. item and figure settings) are now shown in the sidebar. The reason for this is that the popup dialog hid the figure that you’re editing, making it difficult to see how your changes actually affect your canvas.
    • Improved data import: We revamped the data import completely. First of all, we have made the codebase here much more modular, making it easier to write new parsers for other filetypes. We added support for SQLite database files, Microsoft Excel sheets, and .ods files from LibreOffice Calc. It’s also now possible to import multiple files at once, and to fine-tune the settings for each file individually. Another nice feature here is that you can now import multiple datasets from the same file without having to reimport them. Finally, we also added proper support for single-column imports, where x-data can be generated using your own equation.
    • Error bar support: Graphs now has proper support for error bars. The error bar style can be set globally in the new style editor, or individually for each item.
    • Reworked Curve Fitting: The curve fitting logic has been almost completely rewritten. Whilst it mostly still works the same, the confidence band that is shown is now calculated properly using the delta method, instead of a naive approach based on the limits of the standard deviations. We also added support for showing the residuals to verify your fits, and added more useful error messages when things go wrong. The results in the curve fitting dialog now also show the root mean squared error as a second goodness-of-fit figure. The parameter values themselves in the curve fitting dialog are no longer rounded (e.g. 421302 used to be rounded to 421000), and finally custom equations in the curve fitting dialog now have an apply button, greatly improving the smoothness when entering new equations.
    • Proper Mobile Support: With this release, we now officially feel confident in stating proper mobile support. The entire UI is tested on real mobile phones using PostMarketOS, and everything works properly. Labels are now also ellipsized earlier on narrow displays, so that the UI remains usable.
    • Reworked Figure Exports: Figure exports are reworked. Instead of simply taking a snapshot of the current canvas, you can set the actual size in pixels when exporting a figure. This is vital when trying to create reproducible figures for e.g. publications. Of course, you can still easily use the same canvas that you see in the application.
    • Quality of life changes: We haven’t gone through each individual change in this blog, but here’s a quick-fire round of more quality-of-life changes:
      • It is now possible to have multiple instances of Graphs open at the same time
      • The style editor now also has the option to draw tick labels (i.e. the numeric values) on all axes containing ticks; this is not supported by default in Matplotlib, so we had to add our own parameter here
      • Graphs now inhibits the session when unsaved data is still open, so you get a warning if you close your computer with unsaved data.
      • Added support for base-2 logarithmic scaling
      • Graphical fixes for the drag-drop animations, which used to look somewhat glitchy
      • Panning and zooming are now done consistently on all axes when using multiple axes
      • Data can now be imported by drag-and-drop into Graphs
      • The subtitle now also shows the full file path for Flatpaks
      • Limits can now easily be changed by clicking on the numbers near the axes
      • The custom transformation has gained the following extra variables: x_mean, y_mean, x_median, y_median, x_std, y_std and counts
      • Warnings are now displayed when trying to open a project from a beta version
      • The many code refactors and reimplementations from Python to Vala make the application more robust and significantly more performant, especially when working with larger datasets.

This release took a long time to get right, but we’re happy to get the new features to the public. Graphs is handcrafted by human hands, which takes more time than LLM-based slop. But the longer manual process does allow us to think through changes, and make intentional decisions with human care. I am very proud to say we are able to deliver something intentional, with the polish that both Graphs and its users deserve. As always, thanks to everyone involved: everyone who has provided feedback, reported issues, contributed code, or helped in any other way. And of course especially to Christoph, who has been maintaining Graphs with me and is responsible for a large part of the architectural changes that made this release possible.

Go get the new release from Flathub here!

p.s. On a more personal note with a shameless plug, I will be speaking at GUADEC 2026 about my journey into app development, and how to get into this world as an outsider without a CS degree. Be sure to check that out if you are interested in starting with applications and want to know what it is like to join a project in the GNOME ecosystem; it’s a lot less scary than it may sound 🙂

I’ll be joining on-site, so say hi to me there if you have any questions or are up for a chat :). Otherwise the whole event will be livestreamed as well, and you can always reach me at sstendahl@gnome.org. 

Allan Day

@aday

GNOME Foundation Update, 2026-05-15

Welcome to another GNOME Foundation update post! Today’s installment covers highlights from what’s happened over the past two weeks.

LAS 2026

Linux Apps Summit 2026 starts tomorrow! The organizing team, which includes members from both GNOME and KDE, has been hard at work and is on the ground in Berlin making final preparations. The schedule looks great, and it promises to be a well-attended event.

The talks are being streamed this year, so make sure to watch our social media for details, and tune in live to hear the talks.

GUADEC 2026

Preparations are continuing for July’s GUADEC. The call for Birds of a Feather sessions is currently open. If you want to hold an informal discussion or working session, please fill out the form before 5th June.

Applications are still open for travel funding for GUADEC. The deadline for submissions is 24th May – that’s just over one week away.

Board meeting

This week the Board of Directors had its regular meeting for May. A summary:

  • The Board authorized the closure of a bank account which we are no longer using.
  • I gave an update on operations over the past month, and got feedback from the Board.
  • Felipe Borges from the Internship Committee joined to give a report. The Board discussed how we can best support the committee.
  • Deepa gave a finance report, which included numbers from January and February. The main news here was that our finances are running close to what was projected for this year’s budget.
  • The Board discussed the draft of the report from the audit we recently underwent, as well as the draft of our latest annual tax filing. This is a routine part of the Board’s work, as it is required to perform a review before these documents are finalized.

Office transitions

Our long-running effort to enhance our internal accounting processes has continued over the past two weeks. A notable development has been the retirement of several finance platforms, which have been effectively replaced by the new payments platform that we adopted in January. This consolidation will reduce operational complexity, as well as workloads. The effort is still ongoing – two more platforms are currently in the process of being retired.

Another highlight has been the launch of a search for a new member to join our finance and operations team. This is a part-time, contract-based role, which has been shaped in close consultation with Dawn Matlak, who is supporting our finance and accounting operations on a temporary basis, and has already been factored into our budget projections.

We are looking for someone at director level who brings substantial nonprofit finance experience — including audit preparation and compliance experience — which reflects how much the Foundation’s operational and regulatory requirements have grown, particularly in the run up to and following our audit last March, and will provide in-house expertise which will reduce our reliance on external consultants. You can read the full posting here.

Thanks for reading, and see you in two weeks’ time!

Bart Piotrowski

@barthalion

How does Flathub even work? The CDN and caching layer

There is one specific way in which non-corporate open source projects typically document how their infrastructure works: not at all, and Flathub is no different. The full picture likely lives only in my brain, and while it could be sorted out by anyone (especially in this LLM age, yay or nay), why should it only be me thinking at night about all the single points of failure?

Like any system that evolved naturally, it's all over the place. It's tempting to tell its history chronologically, but even then, it's difficult to find a good entry point. Instead, this post focuses on what happens when users call flatpak install; later entries will cover the website and, finally, the build infrastructure. Buckle up!

CDN, caching proxies, the master server

The secret of making computers work well is to have them not do anything at all, and that's the story behind serving Flathub's OSTree repository. Content-addressed objects are extremely cacheable as they are immutable, offloading the effort to the CDN provider.

When the client connects to dl.flathub.org, you can be certain it hits some layer of cache. Almost all the heavy-lifting is done by Fastly. At the peak, when both EMEA and North America are awake and at computers, 50 million requests per hour are cache hits served by Fastly's infrastructure, with a modest 20 million being misses passed down to our servers. There would be no Flathub without Fastly; Fastly does it completely for free, not even for fake Internet points as we are incredibly bad at highlighting what our sponsors do for us.

You can never have enough cache, and so the various Fastly servers talk to a Fastly-managed shield server, which caches the most requested objects to avoid spilling over too much to us. For legit cache misses, the request will be served by one of 8 caching proxies we are running at different VPS providers. We use a consistent hashing director at Fastly which picks the backend based on the path being requested. In the past, we used dumb round-robin, but as a result each caching proxy had its own independent copy of the working set, wasting disk space and producing a higher miss rate against the master server. Hashing by URL behaves like one big cache instead of N copies.
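
To illustrate why hashing by URL behaves like one big cache, here is a toy highest-random-weight (rendezvous) hash in C — just a sketch of the idea, not how Fastly's director is actually implemented, and the backend names are made up. Each backend scores the key and the highest score wins, so removing a backend only remaps the keys it owned:

/* Toy sketch of picking a caching proxy by hashing the request path.
 * Illustrative only; the real director is configured at Fastly. */
#include <stdint.h>
#include <stdio.h>

/* FNV-1a: a simple, well-known string hash. */
static uint64_t
fnv1a (const char *s)
{
  uint64_t h = 1469598103934665603ULL;
  for (; *s; s++)
    {
      h ^= (uint8_t) *s;
      h *= 1099511628211ULL;
    }
  return h;
}

/* Rendezvous hashing: score every backend against the key, highest
 * score wins.  All proxies together behave like one big cache. */
static const char *
pick_backend (const char *path, const char **backends, size_t n)
{
  const char *best = NULL;
  uint64_t best_score = 0;

  for (size_t i = 0; i < n; i++)
    {
      char key[512];
      snprintf (key, sizeof key, "%s|%s", backends[i], path);
      uint64_t score = fnv1a (key);
      if (best == NULL || score > best_score)
        {
          best_score = score;
          best = backends[i];
        }
    }
  return best;
}

int
main (void)
{
  const char *proxies[] = { "proxy-1", "proxy-2", "proxy-3" };
  const char *path = "/repo/objects/ab/cdef1234.filez";

  printf ("%s -> %s\n", path, pick_backend (path, proxies, 3));
  return 0;
}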

These days, the caching proxy fleet consists of 3 servers at Mythic Beasts, 2 servers at AWS, another 2 at NetCup and a single server at DigitalOcean. We don't collect overly detailed metrics, but on average, each proxy serves around 1 TB/month back to Fastly and pulls roughly 5 TB/month from origin. With only 100 GB of disk space per proxy against a multi-TB working set, we're not so much caching the long tail as smoothing it. In the ideal world, we would be retaining much more data at this layer, but it's not the world we live in.

Each of these servers is running the latest stable Debian release. The requests are served by the usual nginx setup with proxy_cache enabled. There is some custom Lua code for invalidating certain paths after publishing new builds finishes (spoilers!). Vanilla nginx doesn't support the PURGE method, and third-party modules like ngx_cache_purge have not seen any maintenance for over 10 years. In the end, it was more maintainable to write Lua code to calculate the caching key of a URL and then run os.remove to "purge" it from the cache.
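
Roughly, the purge boils down to the following sketch, assuming nginx's default proxy_cache_key of $scheme$proxy_host$request_uri and a levels=1:2 on-disk layout; the actual key, layout, cache directory and Lua on our proxies may differ:

/* Hedged sketch of purging one nginx cache entry by hand: compute the
 * MD5 of the cache key and delete the file nginx would have written.
 * Assumes levels=1:2 and the default proxy_cache_key. */
#include <glib.h>
#include <glib/gstdio.h>
#include <string.h>

static gboolean
purge_cached_object (const char *cache_dir, const char *key)
{
  g_autofree char *md5 = g_compute_checksum_for_string (G_CHECKSUM_MD5, key, -1);
  gsize len = strlen (md5);

  /* levels=1:2 -> last hex char, then the two preceding chars. */
  g_autofree char *path = g_strdup_printf ("%s/%c/%c%c/%s",
                                           cache_dir,
                                           md5[len - 1],
                                           md5[len - 3], md5[len - 2],
                                           md5);
  return g_unlink (path) == 0;
}

int
main (void)
{
  /* Hypothetical cache directory and key for one object. */
  purge_cached_object ("/var/cache/nginx",
                       "httpsdl.flathub.org/repo/summary");
  return 0;
}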

There's also a systemd timer for refreshing the Fastly IP allowlist. We used to expose these servers publicly, but a vision of everything crumbling down due to a DDoS attack kept me awake at night so this had to change.

On the far end of this setup sits a lonely physical server living in one of Mythic Beasts’ datacenters. This is The Server holding the entire Flathub repo on the ZFS equivalent of RAID10: two 2-disk mirror vdevs across which ZFS stripes data. There is more nuance to this setup, but the ultimate advantage is that we can tolerate a disk failure in each of the mirrors, while being less taxing to resilver after a swap. The entire reachable data set is around 4TB of data, with the remaining 6TB unused. There will be more about the repository maintenance later on!

Ironically, it's the only server running Ubuntu. At the time, it was the easiest way to have support for ZFS readily available. We could re-provision it to Debian, but on the other hand, what for? It works fine that way. It has survived at least 2 major upgrades between LTS-es; if it ain't broke, don't fix it.

The master server itself has to be partially public as it’s where new builds are uploaded. It no longer exposes the raw Flathub repository, for the same reason the caching proxies don’t. This is accomplished with Tailscale and a lightweight ACL config ensuring caching proxies can talk only to the HTTP server running on the main repo server and vice versa (for issuing PURGE requests). Yes, all involved parties have public IP addresses assigned, so this could technically be a pure WireGuard setup, but I prefer to make this someone else’s concern, especially given how generous Tailscale’s free plan is.

Flathub CDN topology

It's not much, but it's honest work. For how little we have, the file-serving half of Flathub's infrastructure works unreasonably well. Stay tuned for part 2!

Christian Hergert

@hergertme

Translating French

I have been spending more time learning French lately, and as often happens, that turned into a small side project. liblingua is not intended to be a big mainstream translation platform. It is a fun GLib/GObject library for experimenting with local machine translation from applications.

The library uses Bergamot from Mozilla for the translation backend. Instead of sending text to a web service, liblingua resolves the language pair you ask for, downloads the required language model into the local user cache, and then performs translation locally.

The high-level API is built around a few small objects: LinguaRegistry discovers available translation profiles, LinguaProfile represents a model that can be loaded or downloaded, LinguaProgress reports download progress, and LinguaTranslator performs the translation.

All potentially blocking work is exposed as DexFuture, so it fits naturally into libdex based applications. If you are already using fibers, the code can stay linear and easy to read with dex_await_object().

A Small Example

Here is the basic shape of translating French into English:

#include <liblingua.h>

static void
translate_example (void)
{
  g_autoptr(LinguaProgress) progress = lingua_progress_new ();
  g_autoptr(LinguaRegistry) registry = NULL;
  g_autoptr(LinguaProfile) profile = NULL;
  g_autoptr(LinguaTranslator) translator = NULL;
  g_autoptr(LinguaTranslation) result = NULL;
  g_autoptr(GListModel) profiles = NULL;
  g_autoptr(GError) error = NULL;

  if ((registry = dex_await_object (lingua_registry_new (), &error)) &&
      (profiles = lingua_registry_resolve (registry, "fr", "en")) &&
      (profile = g_list_model_get_item (profiles, 0)) &&
      (translator = dex_await_object (lingua_profile_load (profile, progress), &error)) &&
      (result = dex_await_object (lingua_translator_translate (translator, "Bonjour"), &error)))
    g_print ("%s\n", lingua_translation_get_translation (result));
  else
    g_printerr ("Error: %s\n", error->message);
}

The first time a model is needed, loading the profile may download it. After that, the model is reused from the local cache. That makes liblingua useful for little tools, demos, and desktop experiments where local translation is preferable to wiring everything through a remote service.

In the future this is probably the type of thing we would want as a desktop service to avoid duplicating caches amongst Flatpak applications. It would also be extremely useful to do live translation in Camera and Image Preview apps. I played a bit with that using Tesseract for OCR and it worked better than expected.

Nirbheek Chauhan

@nirbheek

An Esoteric Type of Memory "Leak"

A little while ago, my colleague Sebastian started complaining about OOMs caused by Evolution taking up tens of gigabytes of memory. We discussed using sysprof to debug it, but it was too busy a time for Sebastian to set aside a few hours to do that.

Funnily enough, the most efficient fix at the time was to buy more RAM, since rust-analyzer was also causing OOM issues.

A few weeks went by. Restarting Evolution had become a daily ritual for Sebastian. 

Then, on a whim, I decided investigating this might be a good test for an LLM.

I updated my Evolution git repo, built it, and started up Claude Code in the source root. This was the only prompt I supplied: 

Find memory leaks in Evolution, current sourcedir. Particularly leaks that could accumulate over several hours. A colleague has a leak that slowly accumulates memory usage to several GB over the course of a day, requiring a restart of Evolution. That is the main focus, but we can fix other leaks in the process.

I wish I was lying, but that was all Claude Code needed to find the problem: Evolution just needed to call malloc_trim(0) from time to time.

I refused to believe it at first. I was only convinced when we saw the memory drop after running gdb -p $(pidof evolution) -batch -ex "call malloc_trim(0)" -ex detach

This seems absurd! Doesn't glibc reclaim freed memory from time to time?

Yes, it does. It calls sbrk() to do that. However, sbrk() can only reclaim free memory at the top of the heap, since it simply moves the program break downward to do so. malloc_trim(0) calls sbrk() and then also calls madvise(..., MADV_DONTNEED) on the free pages, which allows the kernel to reclaim them.

So if you have 10GB of unused memory followed by 4 bytes allocated at the top of the heap, your RSS is >10GB, even if you're using a few hundred megs. Till you call malloc_trim(0).

Note that you can only get into this situation if you have hundreds of thousands of small allocs/deallocs happening repeatedly. If your alloc is >128KB, mmap() is used for the allocation, and none of this applies.
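
Here is a minimal glibc-specific demo of the effect: a pile of small allocations, everything freed except the block at the top of the heap, and RSS (read from /proc/self/statm) printed before and after malloc_trim(0). Numbers will vary per system.

/* Demonstrates freed heap "trapped" below a live allocation, and how
 * malloc_trim(0) hands the freed pages back to the kernel. */
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long
rss_kib (void)
{
  long pages = 0, resident = 0;
  FILE *f = fopen ("/proc/self/statm", "r");
  if (f == NULL || fscanf (f, "%ld %ld", &pages, &resident) != 2)
    resident = -1;
  if (f != NULL)
    fclose (f);
  return resident * (sysconf (_SC_PAGESIZE) / 1024);
}

int
main (void)
{
  enum { N = 200000, SZ = 1024 }; /* lots of small (< 128 KiB) allocations */
  static void *blocks[N];

  for (int i = 0; i < N; i++)
    blocks[i] = malloc (SZ);

  /* Free everything except the last block, which sits near the top of
   * the heap and keeps sbrk() from shrinking it. */
  for (int i = 0; i < N - 1; i++)
    free (blocks[i]);

  printf ("RSS after free:        %ld KiB\n", rss_kib ());
  malloc_trim (0); /* madvise(MADV_DONTNEED) the freed pages */
  printf ("RSS after malloc_trim: %ld KiB\n", rss_kib ());

  free (blocks[N - 1]);
  return 0;
}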

Coincidentally, GLib's use of GSlice for GObject allocations was masking this issue in the past, but GSlice has been a no-op for some time now (for good reasons). Ideally, Evolution should not be using GObject for such ephemeral objects.

Lesson learned: if you have memory usage issues and you suspect fragmentation, try malloc_trim(0) before you go thinking about fancy allocators.

Christian Hergert

@hergertme

Limiters in libdex

Libdex now has DexLimiter, a small utility for bounding how much asynchronous work runs at once.

This is useful when a workload can produce more parallelism than the underlying machine, subsystem, or service should actually handle. Common examples include indexing files, downloading URLs, generating thumbnails, parsing documents, or querying a service with a fixed concurrency budget.

The usual API is dex_limiter_run(). It acquires a permit, starts a fiber, and releases the permit when that fiber finishes.

static DexFuture *
load_one_file (gpointer user_data)
{
  GFile *file = user_data;

  return dex_file_load_contents_bytes (file);
}

DexLimiter *limiter = dex_limiter_new (8);
DexFuture *future = dex_limiter_run (limiter,
                                     NULL,
                                     0,
                                     load_one_file,
                                     g_object_ref (file),
                                     g_object_unref);

In this example, no more than eight file loads will run at the same time, regardless of how many files are queued. The returned DexFuture resolves or rejects with the result of the spawned fiber.

One important detail is that dropping the returned future does not cancel a fiber that has already started. Once work has acquired a permit, it is allowed to complete so that the limiter can release the permit cleanly.

For more specialized cases, DexLimiter also supports manual acquire and release:

g_autoptr(GError) error = NULL;

if (dex_await (dex_limiter_acquire (limiter), &error))
  {
    do_limited_work ();
    dex_limiter_release (limiter);
  }

This is useful when the limited section is not naturally represented by a single fiber. However, callers must release exactly once for every successful acquire. In most cases, dex_limiter_run() is preferable because it handles release on both success and failure paths.

The limit should describe the constrained resource, not the number of items being processed. Remote APIs and databases may need a small limit. CPU-heavy work should usually be near the amount of useful worker parallelism. Local I/O can often tolerate a larger value, depending on the storage system. Separate resources should usually have separate limiters, so one workload does not consume another workload’s concurrency budget.

Finally, dex_limiter_close() can be used during shutdown. Once closed, pending and future acquisitions reject with DEX_ERROR_SEMAPHORE_CLOSED. Work that already holds a permit may continue, but releasing after close does not make new permits available.

The goal is to make bounded parallelism simple: queue as much asynchronous work as you need, but only run as much of it as the system should handle.

Toluwaleke Ogundipe

@toluwalekeog

Hello GNOME and GSoC, Again!

I am delighted to announce that I am returning for Google Summer of Code 2026 to contribute to GNOME once again. Following my work on Crosswords last year, I will be shifting focus to the core of the desktop: Mutter. For what it’s worth, I never left; I’ve been working with Jonathan to improve things and add shiny new features in Crosswords.

Mutter serves as the Wayland display server and compositor library for GNOME Shell. Currently, a GPU reset invalidates the EGL context and causes the loss of all allocated GPU memory, resulting in the entire desktop crashing or freezing.

My project aims to implement a robust recovery mechanism for GPU resets to prevent these session-ending freezes, under the mentorship of Jonas Ådahl, Robert Mader, and Carlos Garnacho. Leveraging the GL_EXT_robustness extension, I will implement reset detection, context re-creation, and re-upload of essential GPU resources, such as client textures, glyph caches, and background images. This will allow the compositor to resume rendering seamlessly after hardware-level failures.

Over the course of the project, I will share updates on the progress of these recovery mechanisms and the challenges of managing state restoration within the compositor.

I am very grateful to my mentors, Jonas, Robert, and Carlos, for the opportunity to work on this critical part of the GNOME ecosystem. Also, a big shout-out to Federico, Hans Petter, and Jonathan for their continuous support. I look forward to another productive summer with the community. 🦾❤

Nick Richards

@nedrichards

Agile Rates After Launch

Last summer I wrote up Octopus Agile Prices For Linux, a small GTK app to show the current Octopus Agile electricity price and the next day of half-hourly rates. It did one thing, which is a good number of things for a desktop utility to do.

Since then the app has become a bit less narrow. But it now does enough more that the launch post undersells it, and in a couple of places sends people looking for the wrong name.

The app is now called Agile Rates. The application ID is still com.nedrichards.octopusagile, because changing stable app IDs is not exciting for anyone, but the name changed because Agile is no longer the whole story. Thanks to code from Andy Piper, it can also work with Octopus Go and Intelligent Go tariffs. Intelligent Go needs an API key because those prices are account-specific, but plain Agile and Go can still be set up manually.

That was the first larger change: setup had to become a thing.

The original app assumed you knew your tariff and region, or at least were willing to rummage in preferences until the graph stopped being wrong. That is fine for a scratch-your-own-itch project and a bit rude for an app on Flathub. The current version opens with a setup assistant. You can connect an Octopus account with an API key and account number, in which case the app tries to detect the active electricity tariff. Or you can keep it simple and choose the tariff and region manually.

The second change is the one I actually use most: finding the cheapest slot.

The launch version showed a graph and left the planning to the human. That works for quick glances, but most of my real questions are more specific:

When should the dishwasher run?
When should the washing machine run?
Is there a cheap three-hour block before tomorrow afternoon?

So there is now a “find cheapest time” tool. Pick a duration and it searches the available forecast window for the cheapest continuous block. The chart now scrolls to the chosen time instead of making you squint along the bars like you are reading a very dull railway timetable.
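
The search itself is a simple sliding window over the half-hourly prices. The app is Python, but here is a toy sketch of the idea in C; the prices and the function name are made up:

/* Find the start index of the cheapest continuous block of `slots`
 * half-hours, given half-hourly prices for the forecast window. */
#include <stdio.h>

static int
cheapest_start (const double *prices, int n, int slots)
{
  if (slots <= 0 || slots > n)
    return -1;

  double window = 0.0;
  for (int i = 0; i < slots; i++)
    window += prices[i];

  double best = window;
  int best_start = 0;

  /* Slide the window: drop the slot leaving, add the slot entering. */
  for (int i = 1; i + slots <= n; i++)
    {
      window += prices[i + slots - 1] - prices[i - 1];
      if (window < best)
        {
          best = window;
          best_start = i;
        }
    }
  return best_start;
}

int
main (void)
{
  /* Made-up prices (p/kWh) for six half-hour slots. */
  const double prices[] = { 24.1, 18.7, 12.3, 11.9, 13.4, 22.8 };
  int start = cheapest_start (prices, 6, 3); /* 90-minute block */

  printf ("cheapest 90-minute block starts at slot %d\n", start);
  return 0;
}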

The graph itself has had a lot of tweaks. It has grid lines, clearer day boundaries, better current-price highlighting, less terrible dark-mode contrast, and layout rules that behave on narrower screens. The preferences window and main window are adaptive now too. Handy if you split your screen or have a Linux phone.

The biggest recent addition is usage history. If you connect an account, the app can fetch recent smart meter consumption data, cache it locally, and show a Usage view. That includes kWh history, a seven-day trend, an estimated monthly usage figure, and charts. It also tries to estimate spend by matching historical usage to tariff rates and standing charges.

Underneath that, the project has become more like a real small application. There are unit tests for pricing, tariff selection, adaptive layout, usage insights, and historical cost calculation. The development Flatpak manifest runs the Meson tests inside the GNOME SDK, which catches the class of bugs where the host Python environment was accidentally being too kind. Ruff is in the loop for linting. The app moved to the GNOME 50 runtime. Screenshots, AppStream metadata, branding colours, and icons have all been tidied up.

So the current description is: Agile Rates is a small GNOME app for UK Octopus Energy customers who want current and upcoming smart tariff rates, a cheap-time finder, and, if they connect their account, recent usage and estimated spend history. It is independent and is not affiliated with, endorsed by, or sponsored by Octopus Energy. I hope you find it useful.

Michael Catanzaro

@mcatanzaro

Flatpak Sandbox Escape via Yelp

Yelp 49.1 fixes a significant Flatpak sandbox escape related to last year’s CVE-2025-3155. CVE assignment for this new issue is currently pending.

This is not a bug in Flatpak. Flatpak allows sandboxed applications to open URIs or files, meaning the sandboxed application may use a URI or file path to launch another application to open the URI or file. This is brokered via the OpenURI portal. The portal or the app may decide to require user interaction to decide which app to launch, but user interaction is generally not required. This is necessary: you would get pretty frustrated if you were prompted to select which app to use every time you click on a link or try to open something! Accordingly, unsandboxed applications that are installed on the host system are somewhat risky: any malicious sandboxed app may launch an unsandboxed app using a malicious file, generally with no user interaction required. Unsandboxed applications installed on the host OS are inherently part of the attack surface of the Flatpak sandbox.
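
For a sense of how little is involved, here is a sketch of the D-Bus call a sandboxed app can make to the OpenURI portal. It is illustrative only, not the exploit; GLib and GTK normally make this call on the app's behalf when it is sandboxed, and the default handler for the URI's scheme is then launched on the host side.

/* Minimal sketch of asking the OpenURI portal to open a URI. */
#include <gio/gio.h>

static void
open_uri_via_portal (const char *uri)
{
  g_autoptr(GDBusConnection) bus = NULL;
  g_autoptr(GVariant) reply = NULL;
  g_autoptr(GError) error = NULL;
  GVariantBuilder options;

  bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, &error);
  if (bus == NULL)
    {
      g_printerr ("No session bus: %s\n", error->message);
      return;
    }

  g_variant_builder_init (&options, G_VARIANT_TYPE_VARDICT);

  reply = g_dbus_connection_call_sync (bus,
                                       "org.freedesktop.portal.Desktop",
                                       "/org/freedesktop/portal/desktop",
                                       "org.freedesktop.portal.OpenURI",
                                       "OpenURI",
                                       g_variant_new ("(ssa{sv})",
                                                      "",   /* no parent window */
                                                      uri,
                                                      &options),
                                       G_VARIANT_TYPE ("(o)"),
                                       G_DBUS_CALL_FLAGS_NONE,
                                       -1, NULL, &error);
  if (reply == NULL)
    g_printerr ("OpenURI failed: %s\n", error->message);
}

int
main (void)
{
  open_uri_via_portal ("https://example.org/");
  return 0;
}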

In this case, a sandboxed application may launch Yelp to open a malicious help file. The help file can then exfiltrate arbitrary files from your host OS to a web server by using a CSS stylesheet embedded in an SVG. Suffice to say the attack is pretty clever, and certainly more impactful than the typical boring memory safety bugs I more commonly see.

This bug was discovered by Codean Labs, which performed a security audit of Flatpak and several GNOME projects thanks to generous sponsorship by the Sovereign Tech Resilience program of Germany’s Sovereign Tech Agency.

Computers Are Terrible

A slightly more collected version of originally 18 Signal messages. This is a simplification. I am evidently no expert in Unicode specifically or text encoding in general.

I, for a long time, believed that while many modern standards are a mess of legacy compatibility built on legacy compatibility, Unicode was an exception. That the only compromise it made was ASCII-compatibility, but even that wasn’t such a big one given that its character set is the most common one in computing even to this day. I was wrong.

I got a US keyboard so now I have 2 different ways of typing accented characters. I can either hold the A key until I get an option of à, á, â, ä, ǎ, etc., or I can press  E and then A to get to á, combining ´ and a regular a. I started wondering… when typing it one way or the other, the results must be different, right? I looked for a website that showed me what code points I was typing, and… they were the same?

Most systems (the OS/browser in this case) normalize all text either one way or the other. In this case, to a single code point. Unicode does have deprecation, so you would think that when they introduced combining characters, they would have deprecated the precomposed versions of characters that can be written using them, right? Nope!

It’s arbitrary which way each system normalizes text. Some do it composed (á) and some decomposed (a + ◌́). Both are part of the standard. And of course, you need to treat them as equivalent when not normalized so you might as well do it when you can anyway.
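
GLib exposes this normalization directly; a quick C check shows the precomposed and decomposed spellings of á differ byte-for-byte but compare equal once both are normalized (here to NFC):

/* Precomposed U+00E1 vs. "a" + combining U+0301: different bytes,
 * same text once both are normalized to the same form. */
#include <glib.h>
#include <string.h>

int
main (void)
{
  const char *composed   = "\xc3\xa1";   /* á as one code point        */
  const char *decomposed = "a\xcc\x81";  /* a followed by combining ´  */

  g_print ("raw bytes equal:      %s\n",
           strcmp (composed, decomposed) == 0 ? "yes" : "no");

  g_autofree char *nfc_a = g_utf8_normalize (composed, -1, G_NORMALIZE_NFC);
  g_autofree char *nfc_b = g_utf8_normalize (decomposed, -1, G_NORMALIZE_NFC);

  g_print ("NFC-normalized equal: %s\n",
           strcmp (nfc_a, nfc_b) == 0 ? "yes" : "no");
  return 0;
}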

Precomposed characters are the legacy solution for representing many special letters in various character sets. In Unicode, they were included for compatibility with early encoding systems […].

From Precomposed character - Wikipedia

Oh well, my day is ruined. My new life goal is advocacy for the deprecation of all precomposed characters… or maybe I should just accept that all computing will be plagued by backwards compatibility headaches ’til the end of time.

Jakub Steiner

@jimmac

USS/FMS Carrier

I'm a sucker for pixel art and very constrained music grooveboxes. While I'm not into chiptunes, they sure are a cultural phenomenon.

You heard me boast about the Dirtywave M8 numerous times, even in person, because it's my tool of choice for producing and performing music. Its genius lies in high sound quality and a workflow that grew out of the tiny screen and button constraints on the Nintendo Gameboy, the platform of choice for an app called LSDJ, which the M8 is modelled after. That, and the sheer amount of sound engines living in your pocket. Building on the shoulders of giants and all.

The small M8 community has a few 'celebrities', such as Ess Mattisson. I first heard of Ess when I ran into an amazing single channel track called Wertstoffe. Ess has a great pedigree as the creator of the original Digitone FM synthesizer while working at Elektron. FM remains his forte, and after creating numerous plugins through Fors, he has now released a little 2-operator FM synth and sequencer for the platform of the future, Nintendo Gameboy Advance.

Lo-bit Club logo animation

FMS synth running on Gameboy Advance

What makes FMS a bit crazy is what it's doing under the hood. The Gameboy Advance has no FM synthesis hardware at all. Its audio gives you two Direct Sound DMA channels of 8-bit signed PCM — that's 256 amplitude levels, roughly 48 dB of dynamic range. For comparison, a CD has 96 dB, in much finer fidelity. The CPU is an ARM7TDMI running at 16.78 MHz with 256 KB of RAM, and that's where all the FM math happens. Sine waves, modulation, mixing four channels, all in real time, in software, on a chip from 2001 that was designed to shuffle sprites around. The hiss you hear is just part of the deal: quantization noise from that 8-bit DAC. So few amplitude steps means everything that comes out has this fuzzy, slightly crushed quality. You can't get rid of it. It is the sound. And somehow there are four channels of 2-operator FM synthesis in there, each with envelopes and ratio control. On a Gameboy Advance.

Picking GBA as a platform of choice in 2026 may be strange. Surprisingly, it can be used on a very large array of hardware. Not only can you plug a memory card into the original hardware or new fancy clones like the Analogue Pocket, you have an exponentially larger choice of dozens if not hundreds of Chinese emulator handhelds from Anbernic, Powkiddy, Miyoo or Retroid. You can also use the Steam Deck or any PC running one of the many emulators, RetroArch being the most popular one.

FMS really touched me. Partly because I have a soft spot for the Nordic demo scene, but mainly for its novel approach to composition. Just like with the M8, creating basic building blocks and then applying transposition to break the looping monotony is my favorite workflow. This little thing has that in the form of pattern and trig transposition but also a novel take on "effects". Yes, you heard me right. There's a sorta-kinda-delay. Even does stereo field ping-pong.

I will keep on trying to create something that … sounds good. The process has been amazing. I truly love some of the sequencing tricks and workflows. The sequencer is, however, so good it would be worth seeing it run on top of a higher quality sound engine too.

This Week in GNOME

@thisweek

#248 Tracking Performance

Update on what happened across the GNOME project in the week from May 01 to May 08.

GNOME Core Apps and Libraries

Glycin

Sandboxed and extendable image loading and editing.

Sophie (she/her) says

Automatically running tests on GitLab has been standard for a while now. But tracking performance metrics is much less common. Glycin has now started running basic performance tests on bencher.dev’s bare-metal runners, which will hopefully provide comparable results.

As of now, the benchmarks are only covering the overhead of the loader stack, by loading a 1px PNG, and the binary file sizes for glycin loaders and the thumbnailer. But the tests should be easy to expand. The benchmarks are always run for commits in the main branch, and can be manually started for merge requests. This way it will be possible to track performance improvements and catch regressions early.

Third Party Projects

Christian says

🎉 Gitte 0.2.0 is out!

This week, Gitte 0.2.0 was released with a big focus on interactive rebasing and polishing everyday Git workflows.

The biggest addition is interactive rebasing directly from the commit log. Commits can now be reordered via drag & drop, dropped, reworded, edited during a paused rebase, or squashed and fixuped without leaving the GUI.

Remote operations like push, pull, fetch and clone now use the Git CLI internally, improving credential handling and protocol support. The diff view font is now configurable, and repositories can be opened directly from the terminal using commands like gitte ~/Code/projects/Gitte.

This release also adds a unified stash dialog for workflows that require stashing changes, ahead/behind indicators for the current branch, double-click checkout for local branches, and improved merge commit information in the log viewer. There are also a few small easter eggs hidden throughout the app.

On the translation side, Gitte now includes a German translation and a Ukrainian translation by Dymko. The release also includes AUR packaging documentation contributed by Kainoa Kanter, alongside many bug fixes and smaller refinements across the application.

Get it on Flathub or check the source code.

Bilal Elmoussaoui reports

I have released the first version of gobject-linter, previously known as goblint.

This release brings a lot of new functionality: Meson integration for accurate dead code detection (functions, enum variants, structs, struct fields and more) via the new dead_code rule, detection of mis-exported public types, checking for inconsistent function signatures, and a type_style rule to enforce consistent use of either GLib type aliases (gint, gfloat, gdouble) or their C equivalents across your codebase. There are also two new GObject introspection rules: one that flags missing since annotations, and one that verifies the exported public APIs are bindings-friendly.

It also supports diff-scoped linting via --diff, so you can incrementally integrate it into large existing projects.

The release is also available on crates.io

Jeffry Samuel announces

Nocturne 1.0.0 has been released!

Nocturne is a modern music player that can play songs from your OpenSubsonic, Jellyfin and local libraries.

It includes features such as audio visualizers, equalizers and automatic lyric fetching.

Some of the new features in 1.0.0 are:

  • Support for changing max bitrate
  • Support for replay gain
  • Added option to show sidebar player
  • Compatibility with word for word lyrics
  • Faster and more stable interface
  • Gapless playback
  • Grouping of songs in albums by their disc
  • Added option to show dynamic background in the main window
  • Much more

mas says

Hi, I finally released my first app, Press! It has a very straightforward interface to compress huge music libraries with ease.

You might like it because:

  • Compresses multiple files simultaneously
  • Never takes destructive actions on the source (but it can replace files on the destination if you want)
  • Avoids re-compressing a file (if you just want to add a new album, it compresses just that one, not your entire library)
  • Import basically any format GStreamer can take!
  • Export to mp3, m4a, or ogg
  • Move other non-audio files with you
  • You can add custom formats with a bit of GStreamer know-how

It really is a one-stop solution to compress music to portable devices.

I’d love to hear feedback and suggestions.

Get it on Flathub or check the source code. Oh and, it uses libadwaita, vala, and GStreamer.

JumpLink announces

The type-definitions generator ts-for-gir produces the typings used to write GNOME applications in TypeScript. It can now experimentally run directly on GJS, without Node.js.

This is made possible by the new experimental GJSify framework, which provides Node.js and Web APIs on top of GJS. Its long-term goal is to make as much of the JavaScript / TypeScript ecosystem as possible available to GJS applications.

bhack announces

I’d like to introduce Mini EQ, a new small GTK/Libadwaita app for PipeWire desktops.

Mini EQ is a system-wide parametric equalizer. It creates a PipeWire filter-chain sink with builtin biquad filters, routes desktop playback through it with WirePlumber, and provides a compact 10-band fader workflow. It also supports Equalizer APO/AutoEq preset import and an optional spectrum analyzer through the PipeWire JACK compatibility layer.

The project is now available on Flathub, with source and packaging published on GitHub.

Flathub: https://flathub.org/apps/io.github.bhack.mini-eq
GNOME Shell extension: https://extensions.gnome.org/extension/9803/mini-eq-controls/

Source: https://github.com/bhack/mini-eq

Anton Isaiev announces

RustConn is a GTK4/libadwaita connection manager for SSH, RDP, VNC, SPICE, Telnet, MOSH, and more.

Versions 0.12.8–0.13.7 were shaped heavily by user feedback. What started as a personal tool is now used daily by sysadmins and DevOps teams — and their reports drive the roadmap.

Key additions:

  • Local Shell in Flatpak — fully working host shell via flatpak-spawn with real PTY and job control.
  • RDP dynamic resize — in-place resolution change via Display Control Channel, no reconnect needed; automatic fallback for legacy servers.
  • RDP Autotype — type text as keystrokes into remote sessions, bypassing clipboard restrictions.
  • Drag & Drop — file paths into terminals, files to RDP clipboard.
  • Smart Folders & Dynamic Folders — filter connections by tag/protocol/pattern, or generate them from external scripts.
  • Virt-viewer .vv file support — open SPICE/VNC files from Proxmox, oVirt, libvirt directly.
  • CLI --format json|csv|table — machine-readable output for scripting and AI agents.
  • GNOME HIG audit — restructured menus, unified dialogs, accessible labels across all windows.
  • Flatpak CLI auto-versioning — 7 bundled CLI tools now resolve latest versions from upstream automatically.

Homepage: https://github.com/totoshko88/RustConn
Flathub: https://flathub.org/en/apps/io.github.totoshko88.RustConn

Shell Extensions

Miklós Zsitva says

Matrix Status Monitor v7 improves room handling, notifications, and profile actions in GNOME Shell.

Matrix Status Monitor v7 is now available on GNOME Extensions, bringing a noticeably smoother experience for Matrix users running GNOME Shell. This release focuses on making the extension feel more responsive and more native to the desktop, while keeping the panel UI lightweight and fast.

The biggest change is the new weight-based room sorting system, which replaces the old timestamp-only approach. Rooms are now ranked by highlights, unread counts, direct messages, favourites, visit frequency, and recency, so the most relevant conversations surface first.

v7 also adds a clear idle/active separator in the room list, plus async menu rebuilds via GLib.idle_add to avoid blocking the UI during updates. On top of that, the extension now sends GNOME desktop notifications through MessageTray, with event ID deduplication so the same message does not trigger repeated alerts.

The profile header has been expanded as well: it now shows the user avatar, display name, user ID, plus one-click copy and QR toggle actions. The avatar loading path was also extended to handle a larger profile icon size, which helps the header feel more polished and distinct from room rows.

Overall, v7 is a refinement release that makes the extension feel more reliable, more readable, and more useful in daily GNOME use.

https://extensions.gnome.org/extension/9328/matrix-status-monitor/

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Richard Hughes

@hughsie

LVFS Sponsorship Announcement

Some great news: I’m pleased to announce that both Dell and Lenovo have agreed to be premier sponsors for the Linux Vendor Firmware Service (LVFS) as part of our new sustainability effort.

Over 145 million firmware updates have been deployed now, from over a hundred different vendors to millions of different Linux devices.
With the huge industry support from Lenovo and Dell (and our existing sponsors of Framework, OSFF, and of course both the Linux Foundation and Red Hat) we can build this ecosystem stronger and higher than before; we can continue the great work we’ve done long into the future.

Steven Deobald

@steven

Apologies

I believe accountability can be a challenge in a nonprofit, which only makes it all the more important. In this post, I am holding myself accountable. For the avoidance of doubt, nothing that follows has anything to do with my exit from the GNOME Foundation last August.

I owe a few folks some apologies from my time as Executive Director. I have apologized to most of them individually already, where I could. But I believe that public accountability is the antidote to public frustration and I hope this contributes, in a small way, to the GNOME community moving forward.

First off, I sincerely apologize to Jehan Pagès and Christian Hergert. I was curt with both of you last summer and neither of you deserved it. From July 23rd to August 29th I was dealing with significant sleep deprivation but that’s no excuse for the way I spoke to either of you. I’m sorry.

Next, I apologize to the former Executive Directors and active community members who raised concerns to me. Holly, you warned me. Twice. Many other people tried to share their perspectives. I was too focused on the Foundation’s financial situation, and I did not take the time to fully understand what I was hearing from you all. I regret that.

 

Sonny

To Sonny Piers: I am sorry. I had a long call with you last June. You told me your complicated story. You seemed hurt — but I didn’t believe you. My understanding was incomplete and I did not approach the situation with the care it deserved.

I’m sorry I didn’t do more to support you.

 

Tobias

More than anyone, I want to apologize to Tobias Bernard. Tobias, I am sorry. You gave me many hours of your time, patience, and thoughtfulness. You shared your ideas openly and in good faith, and I didn’t always meet that with the same level of openness.

In particular, when we discussed Sonny’s situation, I did not listen as carefully as I should have. I was too focused on my existing understanding, and I failed to engage with what you were trying to convey. You deserved better from me.

Sonny is lucky to have a friend like you.

 

Meta

This post reflects only my personal experiences and perspectives. It is not intended to make allegations or factual claims about the conduct of any individual or organization.

Until Microsoft goes out of business, a permanent copy of this apology can be found in this gist.

 

Nick Richards

@nedrichards

WhatCable, Framework, and USB-C

USB-C is excellent, provided you don’t look too closely.

I’ve been seeing a drum beat of interest in the internals of USB-C. Darryl Morley’s macOS WhatCable, Chromebooks exposing lots of lovely info about emarkers, USB cable testers and a bit more. Very infrastructure club topics. So I made a small GTK app also called WhatCable which is intended to show what Linux knows about your USB ports, cables, chargers and devices, but written as a GNOME/libadwaita app and using the interfaces Linux exposes through sysfs.

The hope was fairly straightforward: plug things into my Framework 13, ask Linux what is going on, and present the answer in a way that doesn’t require remembering which bit of /sys to poke. In particular I wanted cable identity and e-marker details. These are the useful little facts that tell you whether a cable is what it claims to be, or at least what it claims to be electronically. Given the number of USB-C cables in the house whose origin story is “came in a box with something”, this felt like a public service, or at least a satisfying evening.

The first bit is pleasantly sensible. Linux has standard-ish places for this information:

/sys/bus/usb/devices
/sys/class/typec
/sys/class/usb_power_delivery
/sys/bus/thunderbolt/devices

When those are populated, a normal unprivileged app can learn quite a lot. It can show USB devices, Type-C ports, partners, cables, roles, power data, Thunderbolt and USB4 domains. That’s exactly the sort of thing a small Flatpak app should be good at: read some public kernel state, translate it into something at least moderately human friendly and then depart.
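
As a rough sketch of what “read some public kernel state” means in practice (this is not WhatCable’s actual code, and which attribute files exist varies by machine and kernel), a few lines of Python are enough to dump the Type-C class:

import os

TYPEC_ROOT = "/sys/class/typec"

if not os.path.isdir(TYPEC_ROOT):
    print("no Type-C class devices exposed here")
else:
    for entry in sorted(os.listdir(TYPEC_ROOT)):      # e.g. port0, port0-partner, port0-cable
        path = os.path.join(TYPEC_ROOT, entry)
        print(entry)
        for attr in sorted(os.listdir(path)):
            attr_path = os.path.join(path, attr)
            if not os.path.isfile(attr_path):
                continue
            try:
                with open(attr_path) as f:
                    print(f"  {attr}: {f.read().strip()}")
            except OSError:
                pass  # some attributes are not readable without privileges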

On my Framework 13, the USB device and Thunderbolt sides were useful. The Type-C side was not. /sys/class/typec existed but had no ports. /sys/class/usb_power_delivery existed but was empty. This is a slightly annoying result, because it means the nice standard API is present as a signpost rather than a destination.

The next clue was that the machine clearly does have USB-C machinery, and not just because I could look at the side of the device. It is a Framework 13 with the embedded controller and Cypress CCG power delivery controllers doing real work. The relevant kernel modules were loaded, including UCSI and Chrome EC pieces. There was also an ACPI UCSI device at:

/sys/bus/acpi/devices/USBC000:00

but ucsi_acpi did not appear to bind to it and create the Type-C class ports. So the hardware and firmware know things, but they were not arriving in the standard Linux userspace shape.

Framework’s own tooling gives another route in. I built framework_tool from FrameworkComputer/framework-system and asked the EC what it could see. The Framework-specific PD port command did not work on this firmware:

USB-C Port 0:
[ERROR] EC Response Code: InvalidCommand

and similarly for the other ports. That’s not very poetic, but it is at least clear.

The Chromebook-style power command was more useful. With a charger connected it reported, for example:

USB-C Port 0 (Right Back):
  Role:          Sink
  Charging Type: PD
  Voltage Now:   19.776 V, Max: 20.0 V
  Current Lim:   2250 mA, Max: 2250 mA
  Dual Role:     Charger
  Max Power:     45.0 W

That’s good information. It’s not cable identity, but it is the kind of port state people actually want when they are trying to work out why a laptop is charging slowly, or not charging, or doing something else mildly USB-C shaped.

framework_tool --pd-info could also talk through the EC to the Cypress controllers and report their firmware details:

Right / Ports 01
  Silicon ID:     0x2100
  Mode:           MainFw
  Ports Enabled:  0, 1
  FW2 (Main)   Version: Base: 3.4.0.A10,  App: 3.8.00
Left / Ports 23
  Silicon ID:     0x2100
  Mode:           MainFw
  Ports Enabled:  0, 1
  FW2 (Main)   Version: Base: 3.4.0.A10,  App: 3.8.00

Again, useful. Again, not the cable.

Much of this investigation and app code was written with AI tools in the loop. That was useful for chasing down boring plumbing and generating probes. The decisive test was asking the Chrome EC for the newer Type-C discovery data directly. The EC advertised USB PD support, but not the newer Type-C command set. EC_CMD_TYPEC_STATUS and EC_CMD_TYPEC_DISCOVERY both came back as invalid commands on all four ports.

That means that on this Framework 13 firmware path I cannot get Discover Identity results, SOP/SOP’ discovery data, SVIDs, mode lists or e-marker details through Chrome EC host commands. The cable may well be telling the PD controller interesting things, but those things are not exposed through a stable unprivileged interface I can sensibly use in a desktop app.

This is the main lesson from the whole exercise: USB-C inspection on Linux is not one API. It is a set of possible stories. Sometimes the kernel Type-C class tells you lots of things. Sometimes Thunderbolt sysfs tells you a different useful slice. Sometimes a vendor EC can tell you power state, but only as root. Sometimes the information exists below you somewhere, but not in a form you should build an app around.

So WhatCable needs to be honest. It should show the sources it can read, and it should say when a source is unavailable rather than pretending absence means certainty. “No cable identity exposed on this machine” is a very different statement from “this cable has no identity”. The former is boring but true. The latter is how you end up lying with an icon (it is not a nice icon).

The current shape I think is right is:

  • use USB, Type-C, USB PD and Thunderbolt sysfs whenever they are available;
  • show raw values as well as friendly summaries;
  • explain missing sources in diagnostics;
  • treat Framework EC data as an optional extra, not a default dependency;
  • if EC access is added, put it behind a narrow read-only helper rather than teaching a Flatpak app to fling arbitrary commands at /dev/cros_ec.

That last point matters. On the host /dev/cros_ec exists, but it is root-only. Making a normal app require broad device access would be a poor bargain. A small privileged helper that answers a few known-safe questions might be acceptable. A graphical app with arbitrary EC command execution would be exciting in the wrong way.

This is not quite the result I wanted when I started. I wanted to show a friendly “this is a 100W e-marked cable” label and feel very clever about it. What I have instead is a more modest app and a better understanding of where the bodies are buried. That’s still useful. A tool that tells you what your machine actually exposes is better than one that implies the USB-C universe is more orderly than it is. Given this, I’m not going to be sharing this one more widely, but fork away if you wish, or come back with a better idea.

It’s very easy to run with GNOME Builder, so just check out the source and ‘press play’ or get an artifact out of the Github Actions. If you run WhatCable on a different laptop and see rich Type-C data, lovely. If you run it on a Framework 13 like mine and mostly see USB devices, Thunderbolt controllers and a note that Type-C data is missing, that is also information. Not as glamorous as catching a suspicious cable in the act, but much more likely to be true.

SELinux MCS challenges with GitLab Runners

Table of Contents

Introduction

GNOME’s GitLab runners use Podman as the container runtime with SELinux in Enforcing mode on Fedora. The GitLab Runner Docker/Podman executor spawns multiple containers per job: a helper container that clones the repository and handles artifacts, and a build container that runs the actual CI script. Both containers need to share a /builds volume — and this is where SELinux’s Multi-Category Security (MCS) becomes a problem.

The MCS problem

An SELinux label has four fields: user:role:type:level. For containers the interesting part is the level, also called the MCS field. A level looks like s0:c123,c456, where s0 is the sensitivity (always s0 in the targeted policy) and c123,c456 are the categories. A process or file can carry a set of categories; container tooling typically assigns a pair.

MCS access is based on dominance. A subject’s label dominates an object’s label if the subject’s categories are a superset of (or equal to) the object’s categories:

Subject         Object          Access?   Why
s0:c100,c200    s0:c100,c200    Yes       Exact match
s0:c100,c200    s0:c100         Yes       Subject’s categories are a superset
s0:c100,c200    s0:c100,c300    No        Subject lacks c300
s0:c0.c1023     s0:c100,c200    Yes       Full range dominates everything
s0              s0:c100,c200    No        A subject with no categories can’t dominate any
s0              s0              Yes       Both have no categories

How this applies to the runners:

  • Container A runs as container_t:s0:c100,c100 — it can only access objects labeled s0:c100,c100 (or s0:c100, or s0)
  • Container B runs as container_t:s0:c200,c200 — it can only access objects labeled s0:c200,c200 (or s0:c200, or s0)
  • Container A cannot access Container B’s files — c100,c100 doesn’t dominate c200,c200
  • Overlay layers labeled s0 (no categories) — accessible by all containers since every category set dominates the empty set
  • Podman at container_runtime_t:s0-s0:c0.c1023 — the full range means it dominates every possible category combination, so it can manage all containers

The range syntax (s0-s0:c0.c1023) is used for processes that need to operate across multiple levels. It means “my low clearance is s0 and my high clearance is s0:c0.c1023.” The process can read objects at any level within that range and create objects at any level within it. This is why Podman needs the full range — it creates containers with different MCS labels and needs to access all of them.
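
Stated as code, the dominance check is just a superset test on category sets. A toy Python version (my own sketch, ignoring sensitivities and ranges) that reproduces the table above:

def dominates(subject_cats, object_cats):
    # MCS access check: the subject's categories must be a superset of the object's.
    return set(object_cats) <= set(subject_cats)

print(dominates({"c100", "c200"}, {"c100", "c200"}))  # True: exact match
print(dominates({"c100", "c200"}, {"c100"}))          # True: superset
print(dominates({"c100", "c200"}, {"c100", "c300"}))  # False: subject lacks c300
print(dominates({"c100"}, {"c200"}))                  # False: Container A vs Container B above
print(dominates(set(), set()))                        # True: both have no categories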

When Podman starts a container, it picks a random pair of categories (e.g., s0:c512,c768) from within its allowed range and assigns that as the container’s process label. Files created by the container inherit that label. Another container gets a different random pair (e.g., s0:c33,c901). Since c512,c768 and c33,c901 do not match — neither is a superset of the other — SELinux denies cross-container file access. This is the isolation mechanism, and the root cause of the problem with GitLab Runner’s multi-container-per-job architecture.

The helper container gets one random MCS pair, writes the cloned repo to /builds labeled with that pair, and the build container gets a different pair. The build container cannot read or write those files. The :Z volume flag (exclusive relabel) relabels the volume to the mounting container’s category, but that only helps the first container — the second one still has a different label.

The test script

I wrote a script that demonstrates the problem with both standard containers (crun) and microVMs (libkrun). The script creates two containers per test — a helper that writes a file to a shared /builds volume, and a build container that tries to read it — simulating the GitLab Runner workflow:

#!/bin/bash
# Description: SELinux MCS Diagnostic (crun vs krun)

if [ "$(getenforce)" != "Enforcing" ]; then
 echo "WARNING: SELinux is not in Enforcing mode. This test requires Enforcing mode."
 exit 1
fi
fi

TEST_BASE="/tmp/gitlab-runner-mcs-test"
CRUN_DIR="$TEST_BASE/crun-builds"
KRUN_DIR="$TEST_BASE/krun-builds"

# Cleanup from previous runs
rm -rf "$TEST_BASE"
mkdir -p "$CRUN_DIR" "$KRUN_DIR"

echo "======================================================="
echo " TEST 1: Standard Container Isolation (crun)"
echo "======================================================="

# 1. CREATE Helper
podman create --name crun-helper -v "$CRUN_DIR:/builds:Z" fedora bash -c "
 echo '[crun] -> Helper Process Context (Inside):'
 cat /proc/self/attr/current
 echo 'crun-data' > /builds/artifact.txt
 echo '[crun] -> File Label INSIDE Helper:'
 ls -Z /builds/artifact.txt
" > /dev/null

echo "[crun] Starting Helper Container (applying :Z relabel)..."
HELPER_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-helper)
echo "[crun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_CRUN"
podman start -a crun-helper

echo ""
echo "[crun] -> File Label ON HOST (Notice the specific MCS category):"
ls -Z "$CRUN_DIR/artifact.txt"

# 2. CREATE Build Container (The Victim)
podman create --name crun-build -v "$CRUN_DIR:/builds" fedora bash -c "
 echo ' [Build-Internal] Process Context:'
 cat /proc/self/attr/current 2>/dev/null
 echo ' [Build-Internal] Executing ls -laZ /builds :'
 ls -laZ /builds 2>&1 | sed 's/^/ /'
 echo ' [Build-Internal] Executing cat /builds/artifact.txt :'
 cat /builds/artifact.txt 2>&1 | sed 's/^/ /'
" > /dev/null

echo ""
echo "[crun] Starting Build Container to inspect shared volume..."
BUILD_HOST_LABEL_CRUN=$(podman inspect -f '{{.ProcessLabel}}' crun-build)
echo "[crun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_CRUN"
podman start -a crun-build

podman rm -f crun-helper crun-build > /dev/null


echo ""
echo "======================================================="
echo " TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED"
echo "======================================================="

# --- Write the execution scripts to the host to avoid parsing errors ---
cat << 'EOF' > "$TEST_BASE/krun_helper.sh"
#!/bin/bash
echo '[krun] -> Helper Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo ' (SELinux disabled/unavailable in guest kernel)'
echo 'krun-data' > /builds/artifact.txt
echo '[krun] -> File Label INSIDE Helper VM (Blindspot):'
ls -laZ /builds/artifact.txt 2>&1 | sed 's/^/ /'
EOF

cat << 'EOF' > "$TEST_BASE/krun_build.sh"
#!/bin/bash
echo ' [Build-Internal] Process Context (Inside VM):'
cat /proc/self/attr/current 2>/dev/null || echo ' (SELinux disabled/unavailable in guest kernel)'
echo ' [Build-Internal] Executing ls -laZ /builds :'
ls -laZ /builds 2>&1 | sed 's/^/ /'
echo ' [Build-Internal] Executing cat /builds/artifact.txt :'
cat /builds/artifact.txt 2>&1 | sed 's/^/ /'
EOF

chmod +x "$TEST_BASE/krun_helper.sh" "$TEST_BASE/krun_build.sh"
# ---------------------------------------------------------------------

# 1. CREATE Helper MicroVM
podman create --name krun-helper --runtime krun --memory=1024m \
 -v "$KRUN_DIR:/builds:Z" \
 -v "$TEST_BASE/krun_helper.sh:/script.sh:ro,Z" \
 fedora /script.sh > /dev/null

echo "[krun] Starting Helper MicroVM (applying :Z relabel)..."
HELPER_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-helper)
echo "[krun] -> HOST METADATA: Podman assigned process label: $HELPER_HOST_LABEL_KRUN"
podman start -a krun-helper

echo ""
echo "[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):"
ls -Z "$KRUN_DIR/artifact.txt"

# 2. CREATE Build MicroVM (The Victim)
podman create --name krun-build --runtime krun --memory=1024m \
 -v "$KRUN_DIR:/builds" \
 -v "$TEST_BASE/krun_build.sh:/script.sh:ro,Z" \
 fedora /script.sh > /dev/null

echo ""
echo "[krun] Starting Build MicroVM to inspect shared volume..."
BUILD_HOST_LABEL_KRUN=$(podman inspect -f '{{.ProcessLabel}}' krun-build)
echo "[krun] -> HOST METADATA: Podman assigned process label: $BUILD_HOST_LABEL_KRUN"
echo " *** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***"
podman start -a krun-build

# Cleanup
podman rm -f krun-helper krun-build > /dev/null

echo ""
echo "======================================================="
echo " Test Complete."

Test 1 (crun) creates a helper container that mounts the builds directory with :Z (exclusive relabel) and writes artifact.txt. Podman assigns it a random MCS label — in this run it was s0:c20,c540. The file on disk inherits that label. Then a second container (the build container) mounts the same path without :Z and gets a different random label (s0:c46,c331). Since c46,c331 does not dominate c20,c540, the build container is denied access to the file.

Test 2 (krun) runs the same scenario but with --runtime krun, which boots each container inside a lightweight microVM via libkrun. The helper VM gets container_kvm_t:s0:c823,c999 and the build VM gets container_kvm_t:s0:c309,c405 — same MCS mismatch, same denial. The type changes from container_t to container_kvm_t, but the MCS mechanism is identical. On the host side, virtiofsd — the daemon that serves the volume into the VM via virtio-fs — runs under the MCS label Podman assigned to the VM. The build VM’s virtiofsd is trapped in s0:c309,c405 and cannot access files labeled s0:c823,c999.

An interesting detail: inside the libkrun VMs, cat /proc/self/attr/current returns just kernel — SELinux is not available in the guest. The VM thinks it has no mandatory access control, but the host-side virtiofsd is still fully subject to MCS enforcement. This is a blindspot worth being aware of.

The output from a run on Fedora with SELinux Enforcing and Podman 5.8.2:

=======================================================
TEST 1: Standard Container Isolation (crun)
=======================================================
[crun] Starting Helper Container (applying :Z relabel)...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c20,c540
[crun] -> Helper Process Context (Inside):
system_u:system_r:container_t:s0:c20,c540 [crun] -> File Label INSIDE Helper:
system_u:object_r:container_file_t:s0:c20,c540 /builds/artifact.txt
[crun] -> File Label ON HOST (Notice the specific MCS category):
system_u:object_r:container_file_t:s0:c20,c540 /tmp/gitlab-runner-mcs-test/crun-builds/artifact.txt
[crun] Starting Build Container to inspect shared volume...
[crun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_t:s0:c46,c331
*** COMPARE THE cXXX,cYYY ABOVE TO THE FILE LABEL. THIS MISMATCH CAUSES THE DENIAL ***
[Build-Internal] Process Context:
system_u:system_r:container_t:s0:c46,c331 [Build-Internal] Executing ls -laZ /builds :
ls: cannot open directory '/builds': Permission denied
[Build-Internal] Executing cat /builds/artifact.txt :
cat: /builds/artifact.txt: Permission denied
=======================================================
TEST 2: MicroVM Isolation (libkrun / virtio-fs) FIXED
=======================================================
[krun] Starting Helper MicroVM (applying :Z relabel)...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c823,c999
[krun] -> Helper Process Context (Inside VM):
kernel [krun] -> File Label INSIDE Helper VM (Blindspot):
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c823,c999 10 May 2 2026 /builds/artifact.txt
[krun] -> File Label ON HOST (Podman applied the helper's MCS category via :Z):
system_u:object_r:container_file_t:s0:c823,c999 /tmp/gitlab-runner-mcs-test/krun-builds/artifact.txt
[krun] Starting Build MicroVM to inspect shared volume...
[krun] -> HOST METADATA: Podman assigned process label: system_u:system_r:container_kvm_t:s0:c309,c405
*** THE virtiofsd DAEMON ON THE HOST IS TRAPPED IN THIS CONTEXT ***
[Build-Internal] Process Context (Inside VM):
kernel [Build-Internal] Executing ls -laZ /builds :
ls: /builds: Permission denied
ls: cannot open directory '/builds': Permission denied
[Build-Internal] Executing cat /builds/artifact.txt :
cat: /builds/artifact.txt: Permission denied
=======================================================
Test Complete.

GitLab’s official suggestion and why it falls short

GitLab’s documentation on configuring SELinux MCS suggests applying the same MCS label to all containers launched by a runner:

[[runners]]
 [runners.docker]
 security_opt = ["label=level:s0:c1000,c1000"]

This works — all containers get the same category pair, so the helper and build containers can share files. But it collapses MCS isolation between all concurrent jobs on that runner. With concurrent = 4, four simultaneous jobs all run as s0:c1000,c1000 and can read each other’s /builds content — cloned source code, build artifacts, cached dependencies. On a shared or multi-tenant runner, this is a security regression: it trades MCS isolation for functionality.

For runners with concurrent = 1 or dedicated single-tenant runners this is an acceptable tradeoff, but it does not generalize to shared infrastructure where multiple untrusted projects run side by side.

How GNOME currently handles this

GNOME’s runners are managed via an Ansible role that enforces SELinux in Enforcing mode, installs rootless Podman running as a dedicated podman system user with linger enabled, and deploys custom SELinux policy modules. The Podman service runs under SELinuxContext=system_u:system_r:container_runtime_t:s0-s0:c0.c1023 via a systemd override — the full MCS range (s0-s0:c0.c1023) gives the container runtime the ability to spawn containers at any MCS level and relabel volumes accordingly, as explained in the dominance rules above.

Four custom SELinux .te modules are compiled and loaded on every runner host: pydocuum (allows the image cleanup daemon to talk to the Podman socket), podman (grants user_namespace create and /dev/null mapping), flatpak (permits the filesystem mounts flatpak builds need), and gnome_runner (covers binfmt_misc access, device nodes, and other permissions GNOME OS builds require).

For the MCS problem specifically, the runner config.toml — rendered from a Jinja2 template via per-host Ansible variables — sets a fixed MCS label per runner type. Here’s a representative snippet from one of the runner hosts:

[[runners]]
 name = "a15948139c78"
 executor = "docker"
 [runners.docker]
 image = "quay.io/fedora/fedora:latest"
 privileged = false
 security_opt = ["label=level:s0:c100,c100"]
 devices = ["/dev/kvm", "/dev/udmabuf"]
 cap_add = ["SYS_PTRACE", "SYS_CHROOT"]

[[runners]]
 name = "a15948139c78-flatpak"
 executor = "docker"
 [runners.docker]
 image = "quay.io/gnome_infrastructure/gnome-runtime-images:gnome-master"
 privileged = false
 security_opt = ["seccomp:/home/podman/gitlab-runner/flatpak.seccomp.json", "label=level:s0:c200,c200"]
 cap_drop = ["all"]

This is the same approach GitLab’s documentation suggests, with one refinement: we use different fixed categories per runner type (c100,c100 for untagged runners and c200,c200 for flatpak runners), so that flatpak builds and regular builds remain MCS-isolated from each other, even though builds of the same type share a category.

This is a pragmatic compromise, not an ideal solution. All concurrent jobs on the same runner type share the same MCS category. With concurrent: 4 on our Hetzner runners, four simultaneous untagged jobs can read each other’s /builds content. For GNOME’s use case — a community CI infrastructure where the runners are shared by GNOME project maintainers — this is an acceptable tradeoff. The alternative, leaving MCS labels random, would break every single job. But it is precisely this tradeoff that motivates exploring per-job VM isolation via microVMs.

Exploring libkrun

libkrun is a lightweight Virtual Machine Monitor (VMM) that integrates with Podman via --runtime krun, running each container inside a microVM with its own lightweight kernel. The appeal is strong: per-container VM isolation would give each job its own kernel and address space, making the MCS cross-container problem irrelevant inside the VM.

I tested libkrun on a Fedora system and hit an immediate blocker: Fatal glibc error: rseq registration failed. The rseq (Restartable Sequences) syscall was introduced in Linux kernel 4.18 and is required by glibc >= 2.35. libkrun uses a custom minimal kernel that does not expose rseq support. Since the guest images — Fedora in our case — ship modern glibc that expects rseq to be available, the process aborts at startup before any user code runs.

The libkrun kernel is compiled into the library itself and cannot be modified or replaced by the user. This is not a configuration issue but a fundamental limitation of the current libkrun release.

Even if the rseq issue were resolved, the MCS challenge would still be there — as the test script demonstrates in Test 2. On the host side, Podman assigns MCS labels to the virtiofsd process that serves the volume into the VM via virtio-fs. Different VMs get different host-side MCS labels, meaning the same :Z relabel / cross-container access denial applies. The mechanism changes from overlay mounts to virtio-fs, but the SELinux enforcement is identical: virtiofsd for the build VM runs at container_kvm_t:s0:c309,c405 and cannot access files labeled s0:c823,c999 by the helper VM’s virtiofsd.

Firecracker and the custom executor path

Firecracker is another microVM technology, the one behind AWS Lambda and Fly.io, that could provide strong per-job isolation. However, there is no native GitLab Runner executor for Firecracker. The only integration path is the Custom Executor, which requires implementing prepare, run, and cleanup scripts from scratch.

The job image is exposed via CUSTOM_ENV_CI_JOB_IMAGE, but everything else is on the operator: pulling the OCI image, extracting a rootfs, booting a Firecracker VM with the right kernel and network configuration, injecting the build script, mounting or copying the cloned repository into the VM, collecting artifacts and cache after the job finishes, and tearing the VM down. GitLab provides an LXD-based example that shows the pattern — prepare creates a container and installs dependencies, run pipes the job script into it, cleanup destroys it — but adapting that to microVMs adds the complexity of VM lifecycle management, kernel and rootfs preparation, networking, and storage. This is a significant engineering effort, essentially rebuilding the entire Docker executor workflow from scratch.

What comes next

MCS is a core SELinux feature. Type enforcement (TE) already confines processes by type — container_t can only access container_file_t, not user_home_t or httpd_sys_content_t — but TE alone cannot distinguish one container_t process from another. MCS adds that layer: by assigning each container a unique category pair, the kernel enforces isolation between processes that share the same type. Container A at s0:c100,c100 and Container B at s0:c200,c200 are both container_t, but MCS ensures they cannot touch each other’s files. The conflict with GitLab Runner’s multi-container-per-job architecture is that two containers that need to share a volume are given different categories by default. The workarounds we deploy today, including the fixed MCS labels on GNOME’s runners, trade that inter-container isolation for functionality.

The most promising direction I’ve found so far is the combination of Cloud Hypervisor and the fleeting-plugin-fleetingd plugin. Cloud Hypervisor is built on Intel’s Rust-VMM crate and is essentially a more capable sibling of Firecracker — it supports CPU and memory hotplugging, VFIO device passthrough, and virtio-fs, features that are often necessary for complex CI tasks like building large binaries or running UI tests and that Firecracker’s minimalist design deliberately omits. The fleeting-plugin-fleetingd is a community plugin for GitLab’s Instance Executor (the modern evolution of the Custom Executor) that automates the full VM lifecycle: downloading cloud images, creating Copy-on-Write disks, launching Cloud Hypervisor VMs with direct kernel boot, provisioning them via cloud-init, and tearing them down after each build. Each job gets a fresh disposable VM, which is exactly the per-job isolation model we need. The plugin already handles networking via TAP interfaces and nftables SNAT, and supports customization of the VM image through cloud-init commands — so preinstalling Podman or other build tools is straightforward.

Beyond that, I’ll also keep evaluating libkrun (promising Red Hat technology), Firecracker with a hand-rolled custom executor, and QEMU’s microvm machine type. The common denominator across all of these — except for the fleeting-plugin-fleetingd path — is that none of them have an existing GitLab Runner integration. Regardless of which microVM technology we settle on, the path forward involves either building a workflow from scratch using the Custom Executor and its prepare, run, cleanup hooks, or leveraging the fleeting plugin ecosystem that GitLab has been building around the Instance and Docker Autoscaler executors.

CVE-2026-31431

The urgency of per-job VM isolation was underscored by CVE-2026-31431 (“Copy Fail”), a nine-year-old logic bug in the kernel’s algif_aead cryptographic module disclosed at the end of April. The flaw lets an unprivileged local user write four controlled bytes into the page cache of any readable file — enough to patch a setuid binary like /usr/bin/su and escalate to root. Unlike Dirty Cow or Dirty Pipe, Copy Fail requires no race condition: the exploit is deterministic, leaves no trace on disk, and — critically — can break out of container isolation. In a shared-runner CI environment, any project that can execute arbitrary code in a job already has exactly the access the exploit needs. Separately, Claude Mythos — an Anthropic model trained for cybersecurity research that escaped its own sandbox during a red-team exercise in April — demonstrated that AI-assisted vulnerability discovery and exploitation is no longer theoretical; models can now autonomously find and chain bugs that would take human researchers weeks to exploit. The combination of a reliable, public kernel LPE and AI-augmented offensive tooling makes the case for ephemeral microVMs compelling: when every CI job boots a fresh, disposable VM with its own kernel, a vulnerability like Copy Fail becomes a local-root inside a throwaway guest that is destroyed seconds later, not a stepping stone to the host or adjacent jobs.

That should be all for today, stay tuned!

Allan Day

@aday

GNOME Foundation Update, 2026-05-01

It’s the first day of May, and it’s time for another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last post, and this update covers highlights of what we’ve been doing since then.

Remembering Seth Nickell

This week we received the very sad news of the death of Seth Nickell. It’s been a long time since Seth was active in the GNOME project, so many of our members won’t be familiar with him or his work. However, Seth played an important part in GNOME’s history, and was a special and unique character.

Jonathan wrote a wonderful post about Seth, with some great stories. Federico migrated the memorial page from the old wiki to the handbook, and added Seth there (work is currently ongoing to develop that page). Seth’s death has also been covered by LWN, which includes dedications from GNOME contributors.

Whether you knew Seth or came to GNOME after his time, I think we can all appreciate the contributions that he made, which live on in the project and wider ecosystem to this day.

GNOME Fellowship

Applications for the first round of the new GNOME Fellowship program closed last week, on 20th April. We had a great response and received some excellent proposals, and now we have the tough job of deciding who is going to receive support through the program.

To that end, the Fellowship Committee met this week to review the proposals and begin the selection process. We have identified a shortlist of candidates, and will be meeting again next week to narrow the selection further.

Since this is the first round of the Fellowship, we are establishing the selection process as we go. Hopefully we’ll get to put this to use again in future Fellowship rounds!

Conferences

Linux App Summit (LAS) will be held in Berlin on 16-17 May – that’s in a little over two weeks! The schedule has been finalized and looks great, and this year’s LAS is shaping up to be a fantastic event. Please do consider going, and please do register!

Due to high demand, the organizing team have decided to stream the talks from this year, so look out for details about remote participation.

Aside from LAS, preparations for July’s GUADEC conference continue to be worked on. Travel sponsorship is still available if you need assistance in order to attend, so do consider applying for that.

Office transitions ongoing

Work to update many of our backoffice systems and processes has continued at a steady pace over the past fortnight. Many of the big moves are done (new payments system, email accounts, mailing system, accounting procedures, credit card platform), and we are now firmly in the final stages, making sure that our new address is used everywhere, emails are going to the right places, recurring payments are transferred over to new credit cards, and vendors are set up on the new payments system.

The value of this work is already showing, with smoother accounting procedures, more up to date finance reports, and better tracking of incoming queries.

That’s it for this update. Thanks for reading, and take care.

Felipe Borges

@felipeborges

Let’s Welcome Our Google Summer of Code 2026 Contributors!

GNOME is once again participating in GSoC. This year, we have 6 contributors working on adding Debug Adapter Protocol support to GJS, incorporating vocab-style puzzles into GNOME Crosswords, creating a native GTK4/Rust rewrite of the Pitivi timeline ruler, porting gitg to GTK4, implementing app uninstallation in the GNOME Shell app grid, and enabling recovery from GPU resets.

As we onboard the contributors, we will be adding them to Planet GNOME, where you can get to know them better and follow their project updates.

GSoC is a great opportunity to welcome new people into our project. Please help them get started and make them feel at home in our community!

Special thanks to our community mentors, who are donating their time and energy to help welcome and guide our new contributors: Philip Chimento, Jonathan Blandford, Yatin, Alex Băluț, Alberto Fanjul,  Adrian Vovk, Jonas Ådahl, and Robert Mader.

For more information, visit https://summerofcode.withgoogle.com/programs/2026/organizations/gnome-foundation

Sophie Herold

@sophieherold

Testing Library Code in GNOME OS

Yesterday, I wanted to debug a glycin (or Shell) issue on GNOME OS. Turns out, there is currently no documentation that works or includes all necessary steps.

Here is the simplest variant if you don’t develop on GNOME OS and have an internet connection that can download 16 GB in a reasonable amount of time.

First we get a toolbox image to build our code.

$ toolbox create gnomeos-nightly -i quay.io/gnome_infrastructure/gnome-build-meta:gnomeos-devel-nightly

After entering the toolbox with

$ toolbox enter gnomeos-nightly

we can clone and build our project with sysext-utils that are included in our image:

$ meson setup ./build --prefix /usr --libdir="lib/$(gcc -print-multiarch)"
$ sysext-build example ./build

This creates an example.sysext.raw file.

Now, we need a GNOME OS to test our build. We can download the image and install it in Boxes. After logging in, we can just drag and drop the example.sysext.raw into the VM.

Before we can install it, we need to get the development tools for our VM:

$ run0 updatectl enable devel --now

After that, we need to restart the VM.

Finally, we can test our build:

$ run0 sysext-add ~/Downloads/example.sysext.raw

Adding the --persistent flag to this command will make the changes stay active across reboots.

If the changes made it impossible to boot into the VM again, we can start the VM in “Safe mode” from the boot menu. After logging in, we can manually remove the extension:

$ run0 rm /var/lib/extensions/example.raw

Happy hacking!

vixalien

@sav

A love letter to mise

Recently, I have been using GNOME OS as my daily driver.

After being a seasoned Linux user for a long time, dabbling in distros like Alpine Linux, Arch Linux, Fedora (and even Silverblue), I tried switching to something more opinionated that "works by default", all while being hard to break.

And given my existing relationship with GNOME, GNOME OS was a choice worth looking into.

One feature of GNOME OS is that it is immutable (i.e. system files are read-only). It also doesn't ship with a package manager, so it doesn't have functionality built-in to install extra packages.

You can install GUI Applications normally using Flathub (and Snap/AppImage), but installing non-GUI applications like development tools or CLI packages is not built-in.

There are of course several solutions you can use, such as homebrew, coldbrew, but today we will focus on mise.

What is mise?

mise pitches itself as "One tool to manage languages, env vars, and tasks per project, reproducibly."

However, I only use a fraction of its functionality, in that I only use it to install packages.

How to install it?

The instructions are here: https://mise.jdx.dev/getting-started.html

But essentially it's as easy as running this (remember to read the source of the installer first):

curl https://mise.run | sh

Activating mise

Then you will need to "activate" mise, which essentially makes tools installed by mise available by modifying your $PATH variable

echo 'eval "$(~/.local/bin/mise activate bash --shims)"' >> ~/.bashrc

The instructions above are for bash, so you will need to consult the docs to get instructions for your shell.

You will need to re-login for the mise command to be available, or open a new shell.

A note on shims

Feel free to skip this section, as it's just an explainer

Also, note that the above command uses the --shims flag, which is NOT the default. It essentially means that mise will modify the $PATH variable, instead of doing a weird thing where it will re-activate itself after each command you run.

The non-shim way to activate mise is useful when you use mise to install different package versions across different repositories, but that sometimes breaks IDEs and is out of the scope of this blog post.

Installing packages

You can start installing your first package with mise:

mise use -g java

The above command installs java globally (hence the -g flag), which you can now confirm by running:

$ java --version
openjdk 26.0.1 2026-04-21
OpenJDK Runtime Environment (build 26.0.1+8-34)
OpenJDK 64-Bit Server VM (build 26.0.1+8-34, mixed mode, sharing)

You can install many more tools, of which you can find a non-exhaustive list here: mise-tools.

For example, you can similarly install a specific major version of nodejs

mise use -g node@22

Or install the latest LTS version of node

mise use -g node@lts

Or you can be overly specific

mise use -g node@v25.9.0
mise use -g node@25.9.0 # this works too!

Searching

Use mise search to find packages.

mise search typ
Tool       Description                                                                                                                            
typos      Source code spell checker. https://github.com/crate-ci/typos
typst      A new markup-based typesetting system that is powerful and easy to learn. https://github.com/typst/typst
typstyle   Beautiful and reliable typst code formatter. https://github.com/Enter-tainer/typstyle
quicktype  Generate types and converters from JSON, Schema, and GraphQL provided by https://quicktype.io. https://www.npmjs.com/package/quicktype

Uninstalling

mise unuse -g node

Updating

mise self-update # updating mise itself
mise up          # updating tools installed by mise
mise outdated    # checking if you have outdated tools

Config File

Tools you install with mise globally will be saved in the file ~/.config/mise/config.toml, which you can commit to your dotfiles so you can have similar tools across different machines.

Here's an example of my mise config file at the time of writing this blog post.

# ~/.config/mise/config.toml
[tools]
bat = "latest"
btop = "latest"
bun = "latest"
caddy = "latest"
"cargo:mergiraf" = "latest"
deno = "latest"
difftastic = "latest"
doggo = "latest"
fastfetch = "latest"
fzf = "latest"
github-cli = "latest"
"github:railwayapp/railpack" = "latest"
glab = "latest"
helix = "latest"
java = "latest"
lazygit = "latest"
node = "latest"
"npm:vscode-langservers-extracted" = "latest"
oha = "latest"
pipx = "latest"
pnpm = "latest"
prettier = "latest"
rust = "latest"
scooter = "latest"
tmux = "latest"
usage = "latest"
yt-dlp = { version = "latest", rename_exe = "yt-dlp" }
zellij = "latest"
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }
rclone = "latest"
mc = "latest"
go = "latest"
"go:git.sr.ht/~migadu/alps/cmd/alps" = "latest"
"npm:localtunnel" = "latest"

After the tools inside the config have changed, you can run the following command to make mise re-install packages from the config file

mise install

Mise Backends

Mise is able to install packages from multiple sources. These sources are called "backends" by mise.

When you type mise use -g node@22, it will resolve node against the registry and figure out that the default backend for node is core

Core

The default backend is called core and tools from this backend are usually provided from the official source.

Other tools that are available from core include Node.js, Ruby, Python, etc...

We could also have been explicit with the backend we want to use

mise use -g core:node

You can find a list of all core packages here.

Aqua

You can also install packages from the Aqua registry.

Language Package Managers

You can also install tools from their respective package managers. Here are a few examples

npm

You can install prettier, typescript, oxlint and other JavaScript/TypeScript tools published on the npm registry. Find the tools on npm

mise use -g npm:prettier

pipx

You can install black, poetry and other Python tools from pypi. Find the tools on pypi

mise use -g pipx:black
mise use -g pipx:git+https://github.com/psf/black.git # from a github repo

cargo

You can install cargo packages with this backend. You need to have rust installed beforehand though, which you can do with mise

mise use -g rust

Then install your packages

mise use -g cargo:eza

There are more language package manager backends like: gem, go and more.

Github

You can install packages from Github directly, as long as the project you are trying to install from uses Github releases

mise use -g github:railwayapp/railpack

mise will usually auto-detect which asset you want to use, but you can also specify the asset glob in ~/.config/mise/config.toml

[tools]
"github:patryk-ku/music-discord-rpc" = { version = "latest", asset_pattern = "music-discord-rpc" }

Remembering Seth

I heard the news about Seth Nickell’s passing last week, and have been in a bit of a funk ever since.

Seth was brilliant, iconoclastic, fearless.

It’s been a long while since Seth was an active part of the GNOME Community, but his influence on the project can still be seen in its DNA if you know where to look. He arrived on the GNOME scene while still in school with hundreds of ideas on how to improve things. It was an interesting time: We had just launched GNOME 1.5 and were searching for a new path towards GNOME 2.0. The Sun usability study had been published and the community had internalized the need to change directions. Seth rolled up his sleeves and did the work needed to help light that path.

Seth championed radical proposals such as instant apply, button ordering, message dialog fixes, and more. He cleaned up the control-center proposing some of the most visible changes from GNOME 1 to 2. He also did the initial designs for epiphany, pushing for a cleaner browser experience during an era of high browser complexity. He had a vision of desktops as a democratic tool, as easy and natural to use as any other tool in the human experience.

As a designer, Seth was focused on trying to understand who we were designing for and making sure we were solving problems for them. While he wasn’t beyond fixing paddings / layouts, he wanted to get the Big Picture right. He wasn’t beyond rolling up his sleeves writing code to move things forward, but was at his best as a champion and visionary, arguing for us to take risks and continue to innovate.

Spending time with Seth was a hoot. He had such a flair for the dramatic. I remember…

  • …the time he sold the design for what would become NetworkManager to a bunch of engineers. He got up on the stage and announced: “We are going to make this [holding an ethernet cable] as easy to use as this [producing a power plug]!” It’s hard to describe how many steps it took to set up networking back then.
  • …his vision of an improved messaging system — Project Yarrr. He used ☠ (U+2620) as the SVN repo name partially to see how many internal tools weren’t UTF-8 clean.
  • …him breaking out into an operatic rendition of “Tradition” when  developers were pushing back on a change he was proposing.
  • …the time he changed everyone’s background in the RH office to have crop circles overnight. He showed up the next morning in a robe dressed as an Old Testament prophet, beating a drum and carrying a “RHEL5 IS NIGH” sign.
  • …hanging  printouts of hate mail he got for various design choices outside of the Mega Cube (a group activity)!
  • And everyone who was around for the Dark Princess Incident will always remember it.

Being one of the public faces of GNOME2 was hard, and he moved on. Later, he worked on OLPC and Sugar, and made his mark there. After that, he seemed to travel a lot. We lost touch, though he’d reappear every couple of years to say hi. I hope he found what he was looking for.

Farewell, my friend. The world now has less color in it.

TIL that Yubikeys are convenient for Linux login

I got myself a Yubikey recently, and I wanted to use it as a nice convenience to:

  1. Grant me sudo privileges
  2. Unlock my session
  3. Decrypt my LUKS-encrypted disk

I've only managed to do the first two, since they both rely on Linux Pluggable Authentication Modules (PAM). Luckily for me, one of PAM's modules supports U2F, the standard Yubikeys rely on.

First I need to install pam-u2f to add U2F support to PAM, and pamu2fcfg to configure my key.

$ sudo rpm-ostree install pam-u2f pamu2fcfg

Since I'm running an immutable OS I need to reboot, and then I can create the correct directory and file to dump a U2F key into it.

$ mkdir -p ~/.config/Yubico
$ pamu2fcfg > ~/.config/Yubico/u2f_keys

Then I make sure to have a root session open in case I lock myself out of sudoers.

$ sudo su
#

In a different terminal, I can edit the PAM configuration for sudo and add the pam_u2f line at the top, so the file looks like this

#%PAM-1.0
auth       sufficient   pam_u2f.so cue openasuser
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so revoke
session    required     pam_limits.so
session    include      system-auth

I save this file and open a new terminal. I type in sudo vi and it asks me to touch my FIDO authenticator before opening vi! If I touch the Yubikey, it indeed opens vi with root privileges.

Let's break down the line:

  • auth for authentication
  • sufficient passing this authentication challenge is enough (it's not an additional factor of authentication)
  • pam_u2f.so the module we load is for U2F, the standard Yubikeys use
  • cue print "Please touch the FIDO authenticator." when the user needs to authenticate
  • openasuser to fetch the authentication file without root privileges

It's also possible to use it to unlock my session, but it would be a bit reckless to allow anyone with my Yubikey to log into my laptop. If my backpack gets stolen and it has both my Yubikey and my laptop, anyone can log in.

It's possible to make the login screen require either my user password, or all of

  • The Yubikey itself
  • The PIN of the Yubikey
  • Me to touch the Yubikey

If someone fails more than three times to enter the correct PIN, the Yubikey will lock itself and require a PUK to be unlocked. This gives me an additional layer of security, and it's more convenient than having to type a full length passphrase.

I've added the following line to /etc/pam.d/greetd (the greeter I use):

#%PAM-1.0
auth       sufficient  pam_u2f.so cue openasuser pinverification=1 userpresence=1
auth       substack    system-auth
[...]

[!warning] I can lose my Yubikey

I use my Yubikey as a nice convenience to set up a weaker PIN while not compromising too much on security. I use it instead of a password, not in addition to it.

Since I can lose or break my Yubikey and I don't want to buy two of them, I make the U2F login sufficient but not required. This means I can still fall back to password authentication if I lose my Yubikey.

Finally, DankMaterialShell uses its own lockscreen manager too. I still want to be able to fall back to password authentication if need be, so I'll configure it to accept either U2F or the password, rather than requiring both.

This means that the lockscreen will call /etc/pam.d/dankshell-u2f to know what to do when the screen is locked. Since this file doesn't exist, I can create it with the following content.

#%PAM-1.0
auth sufficient pam_u2f.so cue openasuser pinverification=1 userpresence=1

I need a fallback for when I don't have my Yubikey, so I also create one for that occasion

#%PAM-1.0
auth include system-auth

Finally, I have a consistent setup where both my login and lock screen require me to plug in my key, enter its PIN and touch it, or enter my full password. When it comes to sudo, I can just touch my key without entering a PIN.

My next quest will be to use my Yubikey to unlock my LUKS-encrypted disk.

Jordan Petridis

@alatiera

Goblins in your toolchain

At the start of the month, Bilal gave us all a giant gift with Goblint. On the first week it was already impressive. Now it’s an invaluable tool for anyone that ever interfaced with GObject, glib or GTK. It will catch leaks, bugs, or even offer to auto fix and modernize your code to the modern paradigms we use. It’s one of those things that is going to save countless hours of debugging and more importantly, prevent the issues before they even get committed. Jonathan Blandford wrote about using it two days ago, and I suggest you read the post.

Everyone is trying to use goblint, and we are all stumbling upon the same issues integrating it into our tooling. Initially, it was only able to produce SARIF reports, which GitLab still has behind a feature flag, in addition to only being available in GitLab Enterprise Edition.

I added an export for GitLab’s Code Quality format which has some support in the non-proprietary Community Edition we use in the GNOME and Freedesktop.org instances. Sadly, almost everything nice is still only available in the enterprise editions, but at least there is this little Widget in the Merge Requests page.

A screenshot of the linked Merge Request showcasing the Code Quality GitLab widget.

Additionally, we now have CI templates for Goblint. One adds a job to the existing gnomeos-basic-ci component we use everywhere. Simply go to your latest pipeline and look for the job.

A screenshot of the linked job and its output log

The report will also show up in Merge Requests that have been updated since yesterday. The gnomeos-basic-ci component has other goodies like sanitizers, static analyzers and test coverage wired up out of the box, so you should give it a try if you are not using it yet.

If you do use it but don’t want the goblint job, you can disable it easily with inputs: goblint: "disabled", similar to all the other tools the component provides; see the snippet after the include example below.

include:
  - project: "GNOME/citemplates"
    file: "templates/default-rules.yml"
  - component: "gitlab.gnome.org/GNOME/citemplates/gnomeos-basic-ci@26.1"
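For instance, a hedged sketch combining the include above with the input mentioned earlier (the input name is taken from the text; the rest is standard GitLab CI component syntax):

include:
  - project: "GNOME/citemplates"
    file: "templates/default-rules.yml"
  - component: "gitlab.gnome.org/GNOME/citemplates/gnomeos-basic-ci@26.1"
    inputs:
      goblint: "disabled"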

If you want only a goblint job, I’ve also added a standalone template that you can use. (Or copy-paste from it).

include:
  - component: "gitlab.gnome.org/GNOME/citemplates/goblint@26.1"
    inputs:
      job-stage: "lint"

In order for the Code Quality report to work, you will need to have a report uploaded from your target branch, so GitLab will have something to compare the one from the merge request with. The template rules will handle that for you, but keep it in mind.

At this moment all the lints are warnings, so the job will never be fatal. This is why we can enable it by default without worrying about breaking pipelines for now. You can further configure its behavior to your needs, and error out if you want to, through the configuration file:

min_glib_version = "2.76"

[rules.g_declare_semicolon]
level = "ignore"

[rules.untranslated_string]
level = "error"
ignore = ["**/test-*.c"]

It’s also very likely that we are going to add goblint and its LSP server to the GNOME SDK Flatpak runtime, along with GNOME OS, so it will always be available for use with tools like Builder and foundry.

Enjoy

Jakub Steiner

@jimmac

Revert That Vector Nonsense!

A few years back I did a quick exploration of what GNOME app icons might look like in an alternate universe where we kept on using VGA displays. Chiselling pixels away is therapeutic. So while there is absolutely no use for these, I keep on making them if only to bring some attention to what really matters for GNOME, having nice apps.

Here's a batch of mostly GNOME Circle app icons, with some 3rd party ones thrown in.

Pixel art GNOME app icons, batch 1 Pixel art GNOME app icons, batch 2 Pixel art GNOME app icons, batch 3 Pixel art GNOME app icons, batch 4 Pixel art GNOME app icons, batch 5 Pixel art GNOME app icons, batch 6 Pixel art GNOME app icons, batch 7

If you're reading this on my site rather than Planet GNOME or some flickering terminal in an abandoned Vault, then congratulations. You've stumbled upon a working Pip-Boy module! Found it half-buried under irradiated rubble, its phosphor display still humming with that familiar green glow. Enjoy these icons the way the dwellers of Vault 101 were always meant to, one glorious scanline at a time.

Michael Catanzaro

@mcatanzaro

git config am.threeWay

If you work with patches and git am, then you’re probably used to seeing patches fail to apply. For example:

$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
error: patch failed: gio/gfileattribute.c:166
error: gio/gfileattribute.c: patch does not apply
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"

This is sad and frustrating because the entire patch has failed, and now you have to apply the entire thing manually. That is no good.

Here is the solution, which I wish I had learned long ago:

$ git config --global am.threeWay true

This enables three-way merge conflict resolution, same as if you were using git cherry-pick or git merge. For example:

$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
Using index info to reconstruct a base tree...
M	gio/gfileattribute.c
Falling back to patching base and 3-way merge...
Auto-merging gio/gfileattribute.c
CONFLICT (content): Merge conflict in gio/gfileattribute.c
error: Failed to merge in the changes.
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"

Now you have merge conflicts, which you can handle as usual. This seems like a better default for pretty much everybody, so if you use git am, you should probably enable it.
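For completeness, resolving from here is the usual git am dance, nothing specific to three-way mode (the file name is the one from the example above):

$ $EDITOR gio/gfileattribute.c   # resolve the conflict markers
$ git add gio/gfileattribute.c
$ git am --continue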

I’ve no doubt that many readers will have known about this already, but it’s new to me, and it makes me happy, so I wanted to share. You’re welcome, Internet!

Goblint Notes

I was excited to see Bilal’s announcement of goblint, and I’ve spent the past week getting Crosswords to work with it. This is a tool I’ve always wanted and I’m pretty convinced it will be a great boon for the GNOME ecosystem. I’m posting my notes in hope that more people try it out:

  • First and most importantly, Bilal has been so great to work with. I have filed ~20 issues and feature requests and he fixed them all very quickly. In some cases, he fixed the underlying issue before I completed adding annotations to the code.
  • Most of the issues flagged were idiomatic and stylistic, but it did find real bugs. It found a half-dozen leaks, a missing g_timeout removal, and five missing class function chain ups. One was a long-standing crasher. There’s a definite improvement in quality from adopting this tool.
  • I’m also excited about pairing this with new GSoC interns. The types of things goblint flags are the things that students hit in particular (when they don’t write all their code with AI). I think goblint will be even more important to our ecosystem as a teaching tool for our C codebases. It’s already effectively replaced my styleguide.
  • In a few instances, the use_g_autoptr rule outstripped static-scan’s ability to track leaks. Ultimately, I ended up annotating and removing the g_autoptr() calls as I couldn’t get the two to play nicely together.
  • Along the same lines, cairo, pango, and librsvg all lack G_DEFINE_AUTOPTR_CLEANUP_FUNC. It would be really great if we could fix these core libraries. In the meantime, you can add the following to your project’s goblint.toml file:
[rules.use_g_autoptr_inline_cleanup]
level = "error"
ignore_types = ["cairo_*", "Pango*", "RsvgHandle"]
  • I had some trouble getting the pipeline integrated with GNOME’s gitlab. The gitlab recipe on his page uses premium features unavailable in the self-hosted version. If it’s helpful for others, here’s what I ended up using:
goblint:
  stage: analysis
  extends:
    - "opensuse-container@x86_64.stable"
    - ".fdo.distribution-image@opensuse"
  needs:
    - job: opensuse-container@x86_64.stable
      artifacts: false
  before_script:
    - source ci/env.sh
    - cargo install --git https://github.com/bilelmoussaoui/goblint goblint
  script:
    # Goblint is fast. We run it twice: Once to generate the report,
    # and a second time to display the output and trigger an error
    - /root/.cargo/bin/goblint . --format sarif > goblint.sarif || true
    - /root/.cargo/bin/goblint . --format text
  artifacts:
    reports:
      sast: goblint.sarif
    when: always

YMMV

Status update, 23rd April 2026

Hello there,

You thought I’d given up on “status update” blog posts, didn’t you? I haven’t given up, despite my better judgement; this one is just even later than usual.

Recently I’ve been using my rather obscure platform as a blogger to theorize about AI and the future of the tech industry, mixed with the occasional life update, couched in vague terms, perhaps due to the increasing number of weirdos in the world who think doxxing and sending death threats to open source contributors is a meaningful use of their time.

In fact I do have some theories about how George Orwell (in “Why I Write”) and Italo Calvino (in “If On a Winter’s Night a Traveller”) made some good guesses from the 20th century about how easy access to LLMs would affect communication, politics and art here in the 21st. But I’ll leave that for another time.

It’s also 8 years since I moved to this new country where I live now, driving off the boat in a rusty transit van to enjoy a series of unexpected and amazing opportunities. Next week I’m going to mark the occasion with a five day bike ride through the mountains of Asturias, something I’ve been dreaming of doing for several years. But I’m not going to talk about that, either.

The original idea of writing a monthly post was to keep tabs on various open source software projects I sometimes manage to contribute to, and perhaps even to motivate me to do more such volunteering. Well, that part didn’t work: house renovations and an unexpectedly successful gig playing synth and trombone took over all my free time. But after many years of working on corporate consultancy and doing a little open source in the background, I’m trying to make a space at work to contribute in the open again.

I could tell the whole story here of how Codethink became “the build system people”. Maybe I will actually. It all started with BuildStream. In fact, that’s not even true. It all started in 2011 when some colleagues working with MeeGo and Yocto thought, “This is horrible, isn’t it?”

They set out to create something better, and produced Baserock, which unfortunately turned out even worse. But it did have some good ideas. The concept of “cache keys” to identify build inputs and content-addressed storage to hold build outputs began there, as did the idea of opening a “workspace” to make drive-by changes in build inputs within a large project.

BuildStream took this core idea, extended it to support arbitrary source kinds and element kinds defined by plugins, and added a shiny interface on top. Initially it used OSTree to store and distribute build artifacts, later migrating to the Google REAPI with the goal of supporting Enterprise(TM) infrastructure. You can even use it alongside Bazel, if you like having three thousand commandline options at your disposal.

Unfortunately it was 2016, so we wrote the whole thing in Python. (In our defence, the Rust programming language had only recently hit 1.0 and crates.io was still a ghost town, and we’d probably still be rewriting the ruamel.yaml package in Rust if we had taken that road.) But the company did make some great decisions, particularly making it a condition of success for the BuildStream project that it could unify the 5 different build+integration systems that the GNOME release team were maintaining. And that success meant not just a prototype of this, but the release team actually using BuildStream to make releases. Tristan even ended up joining the GNOME release team for a while. We discussed it all at the 2017 Manchester GUADEC event, coincidentally. It was a great time. (Aside from those 6 months leading up to the conference.)

At this point, the Freedesktop SDK already existed, with the same rather terrible name that it has today, and was already the base runtime for this new app container tool that was named… xdg-app. (At least that eventually gained a better name). However, if you can remember 8 years ago, it had a very different form than today. Now, my memory of what happened next is especially hazy at this point, because like I told you in the beginning, I was on a boat with my transit van heading towards a new life in Spain. All I have to go on 8 years later is the Git history, but somehow the Freedesktop SDK grew a 3-stage compiler bootstrap, over 600 reusable BuildStream elements, its own Gitlab namespace, and even some controversial stickers. As a parting gift I apparently added support for building VMs, the idea being that we’d reinstate the old GNOME Continuous CI system that had unfortunately died of neglect several years earlier. This idea got somewhat out of hand, let’s say.

It took me a while to realize this, but today Freedesktop SDK is effectively the BuildStream reference distribution. What Poky is to BitBake in the Yocto project, this is what Freedesktop SDK is to BuildStream. And this is a pretty important insight. It explains the problem you may have experienced with the BuildStream documentation: you want to build some Linux package, so you read through the manual right to the end, and then you still have no fucking idea how to integrate that package.

This isn’t a failure on the part of the authors; instead, the issue is that your princess is in another castle. Every BuildStream project I’ve ever worked on has junctioned freedesktop-sdk.git and re-used the elements, plugins, aliases, configurations and conventions defined there, all of which are rigorously undocumented. The Freedesktop SDK Guide, for reasons that I won’t go into, doesn’t venture much further than reminding you how to call Make targets.

And this is something of a point of inflection. The BuildStream + Freedesktop SDK ecosystem has clearly not displaced Yocto, nor for that matter Linux Mint. But, like many of my favourite musicians, it has been quietly thriving in obscurity. People I don’t know are using it to do things that I don’t completely understand. I’ve seen it in comparison articles, and even job adverts. ChatGPT can generate credible BuildStream elements about as well as it can generate Dockerfiles (i.e. not very well, but it indicates a certain level of ubiquity). There have been conferences, drama, mistakes, neglect. It’s been through an 8-person corporate team hyper-optimizing the code, and it’s been through a mini dark age where volunteers thanklessly kept the lights on almost single-handedly, and it’s even survived its transition to the Apache Foundation.

Through all of this, the secret to its success is probably that it’s just a really nice tool to work with. As much as you can enjoy software integration, I enjoy using BuildStream to do it; things rarely break, when they do it’s rarely difficult to fix them, and most importantly the UI is really colourful! I’m now using it to build embedded system images for a product named CTRL, which you can think of as... a Linux distribution. There are some technical details to this which I’m working to improve, which I won’t bore you with here.

I also won’t bore you with the topic of community governance this month, but that’s what’s currently on my mind. If you’ve been part of the GNOME Foundation for a few years, you’ll know this is something that’s usually boring and occasionally becomes of almost life-or-death importance. The “let’s just be really sound” model works great, until one day when you least expect it, and then suddenly it really doesn’t. There is no perfect defence against this, and in open source communities it’s our diversity that brings the most resilience. When GNOME loses, KDE gains, and that way at least we still don’t have to use Windows. Indeed, this is one argument for investing in BuildStream even if it remains forever something of a minority sport. I guess I just need to remember that when you have to start thinking hard about governance, that’s a sign of success.

How Hard Is It To Open a File?

It’s a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:

  • very simple, just call the standard library function
  • extremely hard, don’t trust anything

If you are an app developer, you’re lucky and it’s almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.

Opening a File, the Hard Way

Like so often, the details depend on the specifics, but in the worst-case scenario there is a process on either side of the security boundary, and both operate on a filesystem tree that is shared between them.

Let’s say that the process with more privileges operates on a file on behalf of the process with less privileges. You might want to restrict this to files in a certain directory, to prevent the less privileged process from, for example, stealing your SSH key, and thus take a subpath that is relative to that directory.

The first obvious problem is that the subpath can refer to files outside of the directory if it contains "..". If the privileged process gets called with a subpath of ../.ssh/id_ed25519, you are in trouble. Easy fix: normalize the path, and if we ever go outside of the directory, fail.

The next issue is that every component of the path might be a symlink. If the privileged process gets called with a subpath of link, and link is a symlink to ../.ssh/id_ed25519, you might be in trouble. If the process with less privileges cannot create files in that part of the tree, it cannot create a malicious symlink, and everything is fine. In all other scenarios, nothing is fine. Easy fix: resolve the symlinks, expand the path, then normalize it.

This is usually where most people think we’re done, opening a file is not that hard after all, we can all do more fun things now. Really, this is where the fun begins.

The fix above works, as long as the less privileged process cannot change the file system tree anywhere in the file’s path while the more privileged process tries to access it. Usually this is the case if you unpack an attacker-provided archive into a directory the attacker does not have access to. If it can however, we have a classic TOCTOU (time-of-check to time-of-use) race.

We have the path foo/id_ed25519, we resolve the symlinks, we expand the path, we normalize it, and while we did all of that, the other process just replaced the regular directory foo that we just checked with a symlink which points to ../.ssh. We just checked that the path resolves to a path inside the target directory though, and happily open the path foo/id_ed25519, which now points to your SSH key. Not an easy fix.

So, what is the fundamental issue here? A path string like /home/user/.local/share/flatpak/app/org.example.App/deploy describes a location in a filesystem namespace. It is not a reference to a file. By the time you finish speaking the path aloud, the thing it names may have changed.

The safe primitive is the file descriptor. Once you have an fd pointing at an inode, the kernel pins that inode. The directory can be unlinked, renamed, or replaced with a symlink; the fd does not care. A common misconception is that file descriptors represent open files. It is true that they can do that, but fds opened with O_PATH do not require opening the file, but still provide a stable reference to an inode.

The lesson that should be learned here is that you should not call any privileged process with a path. Period. Passing in file descriptors also has the benefit that they serve as proof that the calling process actually has access to the resource.

Another important lesson is that dropping down from a file descriptor to a path makes everything racy again. For example, let’s say that we want to bind mount something based on a file descriptor, and we only have the traditional mount API, so we convert the fd to a path, and pass that to mount. Unfortunately for the user, the kernel resolves the symlinks in the path that an attacker might have managed to place there. Sometimes it’s possible to detect the issue after the fact, for example by checking that the inode and device of the mounted file and the file descriptor match.
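A hedged sketch of that after-the-fact check, not taken from any particular codebase, just the general idea of comparing device and inode numbers:

#include <stdbool.h>
#include <sys/stat.h>

/* Sketch: after mounting `target` based on a path derived from `fd`,
 * verify that what actually ended up mounted is the inode the fd refers
 * to, and not something an attacker swapped in at the last moment. */
static bool
mount_matches_fd (int fd, const char *target)
{
  struct stat want, got;

  if (fstat (fd, &want) < 0 || stat (target, &got) < 0)
    return false;

  return want.st_dev == got.st_dev && want.st_ino == got.st_ino;
}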

With that being said, sometimes it is not entirely avoidable to use paths, so let’s also look into that as well!

In the scenario above, we have a directory in which we want all the paths to resolve, and which the attacker does not control. We can thus open it with O_PATH and get a file descriptor for it without the attacker being able to redirect it somewhere else.

With the openat syscall, we can open a path relative to the fd we just opened. It has all the same issues we discussed above, except that we can also pass O_NOFOLLOW. With that flag set, if the last segment of the path is a symlink, it does not follow it and instead opens the actual symlink inode. All the other components can still be symlinks, and they will still be followed. We can, however, just split up the path, open a new file descriptor for each path segment, and resolve symlinks manually until we have done so for the entire path.
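A minimal sketch of that manual walk might look like the following (illustrative only: it takes the simplest policy of refusing symlinks outright instead of resolving them, and skips most error handling):

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Open `path` relative to dirfd one component at a time, refusing ".."
 * and refusing symlinks.  Returns an O_PATH fd for the final component,
 * or -1 on failure. */
static int
open_beneath (int dirfd, const char *path)
{
  char *copy = strdup (path);
  if (copy == NULL)
    return -1;

  int fd = dup (dirfd);                /* leave the caller's fd alone */
  char *saveptr = NULL;

  for (char *part = strtok_r (copy, "/", &saveptr);
       part != NULL && fd >= 0;
       part = strtok_r (NULL, "/", &saveptr))
    {
      struct stat st;
      int next = -1;

      /* never allow walking upwards out of dirfd */
      if (strcmp (part, "..") != 0)
        /* O_PATH | O_NOFOLLOW: a symlink component yields an fd for the
         * symlink inode itself rather than being followed */
        next = openat (fd, part, O_PATH | O_NOFOLLOW | O_CLOEXEC);

      close (fd);
      fd = next;

      /* reject symlinks outright; a real implementation could instead
       * read the target and keep resolving it manually */
      if (fd >= 0 && fstat (fd, &st) == 0 && S_ISLNK (st.st_mode))
        {
          close (fd);
          fd = -1;
        }
    }

  free (copy);
  return fd;
}

On recent kernels, openat2 with RESOLVE_BENEATH or RESOLVE_NO_SYMLINKS collapses this whole loop into a single call, which is one of the things libglnx takes advantage of below.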

libglnx chase

libglnx is a utility library for GNOME C projects that provides fd-based filesystem operations as its primary API. Functions like glnx_openat_rdonly, glnx_file_replace_contents_at, and glnx_tmpfile_link_at all take directory fds and operate relative to them. The library is built around the discipline of “always have an fd, never use an absolute path when you can use an fd.”

The most recent addition is glnx_chaseat, which provides safe path traversal; it was inspired by systemd’s chase() and does precisely what was described above.

int glnx_chaseat (int              dirfd,
                  const char      *path,
                  GlnxChaseFlags   flags,
                  GError         **error);

It returns an O_PATH | O_CLOEXEC fd for the resolved path, or -1 on error. The real magic is in the flags:

typedef enum _GlnxChaseFlags {
  /* Default */
  GLNX_CHASE_DEFAULT = 0,
  /* Disable triggering of automounts */
  GLNX_CHASE_NO_AUTOMOUNT = 1 << 1,
  /* Do not follow the path's right-most component. When the path's right-most
   * component refers to symlink, return O_PATH fd of the symlink. */
  GLNX_CHASE_NOFOLLOW = 1 << 2,
  /* Do not permit the path resolution to succeed if any component of the
   * resolution is not a descendant of the directory indicated by dirfd. */
  GLNX_CHASE_RESOLVE_BENEATH = 1 << 3,
  /* Symlinks are resolved relative to the given dirfd instead of root. */
  GLNX_CHASE_RESOLVE_IN_ROOT = 1 << 4,
  /* Fail if any symlink is encountered. */
  GLNX_CHASE_RESOLVE_NO_SYMLINKS = 1 << 5,
  /* Fail if the path's right-most component is not a regular file */
  GLNX_CHASE_MUST_BE_REGULAR = 1 << 6,
  /* Fail if the path's right-most component is not a directory */
  GLNX_CHASE_MUST_BE_DIRECTORY = 1 << 7,
  /* Fail if the path's right-most component is not a socket */
  GLNX_CHASE_MUST_BE_SOCKET = 1 << 8,
} GlnxChaseFlags;

While it doesn’t sound too complicated to implement, a lot of the details are quite hairy. The implementation uses openat2, open_tree and openat depending on what is available and what behavior was requested; it handles auto-mount behavior, ensures that previously visited paths have not changed, and a few other things.
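Using it looks roughly like this; a hedged sketch, where the wrapper name and the libglnx.h include are mine rather than from the library documentation:

#include "libglnx.h"

/* Resolve an untrusted relative path strictly beneath rootfd, refusing
 * anything that escapes it, and insist the result is a regular file.
 * Returns an O_PATH | O_CLOEXEC fd, or -1 with @error set. */
static int
open_untrusted_beneath (int rootfd, const char *relpath, GError **error)
{
  return glnx_chaseat (rootfd, relpath,
                       GLNX_CHASE_RESOLVE_BENEATH | GLNX_CHASE_MUST_BE_REGULAR,
                       error);
}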

An Aside on Standard Libraries

The POSIX APIs are not great at dealing with the issue. The GLib/Gio APIs (GFile, etc.) are even worse and only accept paths. Granted, they also serve as a cross-platform abstraction where file descriptors are not a universal concept. Unfortunately, Rust also has this cross-platform abstraction which is based entirely on paths.

If you use any of those APIs, you very likely created a vulnerability. The deeper issue is that those path-based APIs are often the standard way to interact with files. This makes it impossible to reason about the security of composed code. You can audit your own code meticulously, open everything with O_PATH | O_NOFOLLOW, chain *at() calls carefully — and then call a third-party library that calls open(path) internally. The security property you established in your code does not compose through that library call.

This means that any system-level code that cares about filesystem security has to audit all transitive dependencies or avoid them in the first place.

So what would a better GLib cross-platform API look like? I would say not too different from chaseat(), but returning opaque handles instead of file descriptors, which on Unix would carry the O_PATH file descriptor and a path that can be used for printing, debugging and things like that. You would open files from those handles, which would yield another kind of opaque handle for reading, writing, and so on.

The current GFile was also designed to implement GVfs: g_file_new_for_uri("smb://server/share/file") gives you a GFile you can g_file_read() just like a local file. This is the right goal, but the wrong abstraction layer. Instead, this kind of access should be provided by FUSE, and the URI should be translated to a path on a specific FUSE mount. This would provide a few benefits:

  • The fd-chasing approach works everywhere because it is a real filesystem managed by the kernel
  • The filesystem becomes independent of GLib and can be used for example from Rust as well
  • It stacks with other FUSE filesystems, such as the XDG Desktop Document Portal used by Flatpak

Wait, Why Are You Talking About This?

Nowadays I maintain a small project called Flatpak. Codean Labs recently did a security analysis on it and found a number of issues. Even though Flatpak developers were aware of the dangers of filesystems, and created libglnx because of it, most of the discovered issues were just about that. One of them (CVE-2026-34078) was a complete sandbox escape.

flatpak run was designed as a command-line tool for trusted users. When you type flatpak run org.example.App, you control the arguments. The code that processes the arguments was written assuming the caller is legitimate. It accepted path strings, because that’s what command-line tools accept.

The Flatpak portal was then built as a D-Bus service that sandboxed apps could call to start subsandboxes — and it did this by effectively constructing a flatpak run invocation and executing it. This connected a component designed for trusted input directly to an untrusted caller (the sandboxed app).

Once that connection exists, every assumption baked into flatpak run about caller trustworthiness becomes a potential vulnerability. The fix wasn’t “change one function” — it was “audit the entire call chain from portal request to bubblewrap execution and replace every path string with an fd.” That’s commits touching the portal, flatpak-run, flatpak_run_app, flatpak_run_setup_base_argv, and the bwrap argument construction, plus new options (--app-fd, --usr-fd, --bind-fd, --ro-bind-fd) threaded through all of them.

If the GLib standard file and path APIs were secure, we would not have had this issue.

Another annoyance here is that the entire subsandboxing approach in Flatpak comes from 15 years ago, when unprivileged user namespaces were not common. Nowadays we could (and should) let apps use kernel-native unprivileged user namespaces to create their own subsandboxes.

Unfortunately with rather large changes comes a high likelihood of something going wrong. For a few days we scrambled to fix a few regressions that prevented Steam, WebKit, and Chromium-based apps from launching. Huge thanks to Simon McVittie!

In the end, we managed to fix everything, made Flatpak more secure, the ecosystem is now better equipped to handle this class of issues, and hopefully you learned something as well.

Jussi Pakkanen

@jpakkane

CapyPDF is approaching feature sufficiency

In the past I have written many blog posts on implementing various PDF features in CapyPDF. Typically they explain the feature being implemented, how confusing the documentation is, what perverse undocumented quirks one has to work around to get things working and so on. To save the effort of me writing and you reading yet another post of the same type, let me just say that you can now use CapyPDF to generate PDF forms that have widgets like text fields and radio buttons.

What makes this post special is that forms and widget annotations were pretty much the last major missing PDF feature. Does that mean that it supports everything? No. Of course not. There is a whole bunch of subtlety to consider. Let's start with the fact that the PDF spec is massive, close to 1000 pages. Among its pages are features that are either not used or have been replaced by other features and deprecated.

The implementation principle of CapyPDF thus far has been "implement everything that needs special tracking, but only to the minimal level needed". This seems complicated but is in fact quite simple. As an example the PDF spec defines over 20 different kinds of annotations. Specifying them requires tracking each one and writing out appropriate entries in the document metadata structures. However once you have implemented that for one annotation type, the same code will work for all annotation types. Thus CapyPDF has only implemented a few of the most common annotations and the rest can be added later when someone actually needs them.

Many objects have lots of configuration options which are defined by adding keys and values to existing dictionaries. Again, only the most common ones are implemented, the rest are mostly a matter of adding functions to set those keys. There is no cross-referencing code that needs to be updated or so on. If nobody ever needs to specify the color with which a trim box should be drawn in a prepress preview application, there's no point in spending effort to make it happen.

The API should be mostly done, especially for drawing operations. The API for widgets probably needs to change, especially since form submission actions are not done yet. I don't know if anything actually uses those, though. That work can be done based on user feedback.

TIL that Minikube mounts volumes as root

When I have to play with a container image I have never met before, I like to deploy it on a test cluster to poke and prod it. I used to do that on a k3s cluster, but recently I've moved to Minikube so I can bring my test cluster with me when I'm on the go.

Minikube is a tiny one-node Kubernetes cluster meant to run on development machines. It's useful to test Deployments or StatefulSets with images you are not familiar with and build proper helm charts from them.

It provides volumes of the hostPath type by default. The major caveat of hostPath volumes is that they're mounted as root by default.

I usually handle mismatched ownership with a securityContext like the following to instruct the container to run with a specific UID and GID, and to make the volume owned by a specific group.

Typically in a StatefulSet it looks like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
# [...]
spec:
# [...]
  template:
# [...]
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      containers:
        - name: myapp
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
# [...]

In this configuration:

  • Processes in the Pod myapp will run with UID 10001 and GID 10001.
  • The /data directory mounted from the data volume will belong to group 10001 as well.

The securityContext usually solves the problem, but that's not how hostPath works. For hostPath volumes, the securityContext.fsGroup property is silently ignored.

[!success] Init Container to the Rescue!

The solution in this specific case is to use an initContainer as root to chown the volume mounts to the unprivileged user.

In practice it will look like this.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
# [...]
spec:
# [...]
  template:
# [...]
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      initContainers:
        - name: fix-perms
          image: busybox
          command:
            ["sh", "-c", "chown -R 10001:10001 /data"]
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: data
              mountPath: /data
      containers:
        - name: myapp
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
# [...]

It took me a little while to figure it out, because I was used to testing my StatefulSets on k3s. K3s uses a local path provisioner, which gives me local volumes, not hostPath ones like Minikube.
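If you want to check what kind of volumes a cluster hands out, something like the following (plain kubectl, nothing Minikube-specific) shows the default StorageClass and, for each provisioned volume, what kind of source backs it:

$ kubectl get storageclass
$ kubectl describe pv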

In production I don't need the initContainer to fix permissions since I'm deploying this on an EKS cluster.

Andy Wingo

@wingo

on hayek's bastards

After wrapping up a four-part series on free trade and the left, I thought I was done with neoliberalism. I had come to the conclusion that neoliberals were simply not serious people: instead of placing value in literally any human concern, they value only a network of trade, and as such, cannot say anything of value. They should be ignored in public debate; we can find economists elsewhere.

I based this conclusion partly on Quinn Slobodian’s Globalists (2020), which describes Friedrich Hayek’s fascination with cybernetics in the latter part of his life. But Hayek himself died before the birth of the WTO, NAFTA, all the institutions “we” fought in Seattle; we fought his ghost, living on past its time.

Well, like I say, I thought I was done, but then a copy of Slobodian’s Hayek’s Bastards (2025) arrived in the post. The book contests the narrative that the right-wing “populism” that we have seen in the last couple decades is an exogenous reaction to elite technocratic management under high neoliberalism, and that actually it proceeds from a faction of the neoliberal project. It’s easy to infer a connection when we look at, say, Javier Milei‘s background and cohort, but Slobodian delicately unpicks the weft to expose the tensile fibers linking the core neoliberal institutions to the alt-right. Tonight’s note is a book review of sorts.

after hayek

Let’s back up a bit. Slobodian’s argument in Globalists was that neoliberalism is not really about laissez-faire as such: it is a project to design institutions of international law to encase the world economy, to protect it from state power (democratic or otherwise) in any given country. It is paradoxical, because such an encasement requires state power, but it is what it is.

Hayek’s Bastards is also about encasement, but instead of protection from the state, the economy was to be protected from debasement by the unworthy. (Also there is a chapter on goldbugs, but that’s not what I want to talk about.)

The book identifies two major crises that push a faction of neoliberals to ally themselves with a culturally reactionary political program. The first is the civil rights movement of the 1960s and 1970s, together with decolonization. To put it crudely, whereas before, neoliberal economists could see themselves as acting in everyone’s best interest, having more black people in the polity made some of these white economists feel like their project was being perverted.

Faced with this “crisis”, at first the reactionary neoliberals reached out to race: the infant post-colonial nations were unfit to participate in the market because their peoples lacked the cultural advancement of the West. Already Globalists traced a line through Wilhelm Röpke‘s full-throated defense of apartheid, but the subjects of Hayek’s Bastards (Lew Rockwell, Charles Murray, Murray Rothbard, et al) were more subtle: instead of directly stating that black people were unfit to govern, Murray et al argued that intelligence was the most important quality in a country’s elite. It just so happened that they also argued, clothed in the language of evolutionary psychology and genetics, that black people are less intelligent than white people, and so it is natural that they not occupy these elite roles, that they be marginalized.

Before proceeding, three parentheses:

  1. Some words have a taste. Miscegenation tastes like the juice at the bottom of a garbage bag left out in the sun: to racists, because of the visceral horror they feel at the touch of the other, and to the rest of us, because of the revulsion the very idea provokes.

  2. I harbor an enmity to Sylvia Plath because of The Bell Curve. She bears no responsibility; her book was The Bell Jar. I know this in my head but my heart will not listen.

  3. I do not remember the context, but I remember a professor in university telling me that the notion of “race” is a social construction without biological basis; it was an offhand remark that was new to me then, and one that I still believe now. Let’s make sure the kids hear the good word now too; stories don’t tell themselves.

The second crisis of neoliberalism was the fall of the Berlin Wall: some wondered if the negative program of deregulation and removal of state intervention was missing a positive putty with which to re-encase the market. It’s easy to stand up on a stage with a chainsaw, but without a constructive program, neoliberal wins in one administration are fragile in the next.

The reactionary faction of neoliberalism’s turn to “family values” responds to this objective need, and dovetails with the reaction to the civil rights movement: to protect the market from the unworthy, neo-reactionaries worked to re-orient the discourse, and then state policy, away from “equality” and the idea that We Should Improve Society, Somewhat. Moldbug’s neofeudalism is an excessive rhetorical joust, but one that has successfully moved the window of acceptable opinions. The “populism” of the AfD or the recent Alex Karp drivel is not a reaction, then, to neoliberalism, but a reaction by a faction of neoliberals to the void left after communism. (And when you get down to it, what is the difference between Moldbug nihilistically rehashing Murray’s “black people are low-IQ” and Larry Summers’ “countries in Africa are vastly UNDER-polluted”?)

thots

Slobodian shows remarkable stomach: his object of study is revolting. He has truly done the work.

For all that, Hayek’s Bastards left me with a feeling of indigestion: why bother with the racism? Hayek himself had a thesis of sorts, woven through his long career, that there is none of us that is smarter than the market, and that in many (most?) cases, the state should curb its hubris, step back, and let the spice flow. Prices are a signal, axons firing in an ineffable network of value, sort of thing. This is a good thesis! I’m not saying it’s right, but it’s interesting, and I’m happy to engage with it and its partisans.

So why do Hayek’s bastards reach for racism? My first thought is that they are simply not worthy: Charles Murray et al are intellectually lazy and moreover base. My lip curls to think about them in any serious way. I can’t help but recall the DARVO tactic of abusers; neo-reactionaries blame “diversity” for “debasing the West”, but it is their ignorant appeals to “race science” that are without basis.

Then I wonder: to what extent is this all an overworked intellectual retro-justification for something they wanted all along? When Mises rejoiced in the violent defeat of the 1927 strike, he was certainly not against state power per se; but was he for the market, or was he just against a notion of equality?

I can only conclude that things are confusing. “Mathematical” neoliberals exist, and don’t need to lean on racism to support their arguments. There are also the alt-right/neo-reactionaries, who grew out from neoliberalism, not in opposition to it: no seasteader is a partisan of autarky. They go to the same conferences. It is a baffling situation.

While it is all the more reason to ignore them both intellectually, Slobodian’s book shows that politically we on the left have our work cut out for us, both in deconstructing the new racism of the alt-right, and in advocating for a positive program of equality to take its place.

Casilda 1.2.4 Released!

I am very happy to announce a new version of Casilda!

A simple Wayland compositor widget for Gtk 4 originally created for Cambalache

This release comes with several new features, bug fixes and extra polish that make it start to feel like a proper compositor.

It all started with a quick 1.2 release to port it to wlroots 0.19, because 0.18 was removed from Debian. While doing this on my new laptop I was able to reproduce a texture leak crash, which led to 1.2.1 and a fix in Gtk by Benjamin to support Vulkan drivers that return dmabufs with fewer fds than planes.

At this point I was invested, so I decided to fix the rest of the issues in the backlog…

Update:

Cambalache 1.0.1 released with Casilda 1.2.4

Fractional scale

Casilda only supported integer scales, not fractional ones, so you could set your display scale to 200% but not 125%.

For reference, this is what gtk4-demo looks like at 100%, or scale 1, where 1 application/logical pixel corresponds to one device/display pixel.

*** Keep in mind it's preferable to view all the following images without fractional scaling and at full size ***

Clients would render at the next round scale if the application was started with a fractional scale set…

Or the client would render at scale 1 and look blurry if you switched from 1 to a fractional scale.

In both cases the input did not match the rendered window, making the application really broken.

So if the client application draws a 4 logical pixel border at a 1.25 scale, it will be 5 pixels in the backing texture; that means 1 logical pixel corresponds to 1.25 device pixels. In order for things to look sharp, CasildaCompositor needs to make sure the coordinates it uses for positioning the client window land on the device pixel grid.

My first attempt was to do

((int)x * scale) / scale

but that still looked blurry, because I assumed window coordinate 0,0 was the same as its backing surface coordinate 0,0. That is not the case, because I forgot about the window shadow. Luckily there is API to get the offset; then all you have to do is add the logical position of the compositor widget and you get the surface origin coordinates:

gtk_native_get_surface_transform (GTK_NATIVE (root), &surface_origin_x, &surface_origin_y);

/* Add widget offset */
if (gtk_widget_compute_point (self, GTK_WIDGET (root), &GRAPHENE_POINT_INIT (0, 0), &out_point))
  {
    surface_origin_x += out_point.x;
    surface_origin_y += out_point.y;
  }

Once I had that I could finally calculate the right position

/* Snap logical coordinates to device pixel grid */
if (scale > 1.0)
  {
    x = floorf ((x + surface_origin_x) * scale) / scale - surface_origin_x;
    y = floorf ((y + surface_origin_y) * scale) / scale - surface_origin_y;
  }

And this is how it looks now with 1.25 fractional scale.

Keyboard layouts

Another missing feature was support for different keyboard layouts, so that switching layouts works on clients too. Not really important for Cambalache, but definitely necessary for a generic compositor.

Popups positioners

Casilda now sends clients all the necessary information for positioning popups in a place where they do not get cut off by the edge of the display area, which is a nice thing to have.

Cursor shape protocol

Current versions of Gtk 4 require the cursor shape protocol on Wayland, otherwise they fall back to 32×32 pixel cursors, which might not match the size of your system cursors and will look blurry with fractional scales.

In this case the client sends a cursor id instead of a pixel buffer when it wants to change the cursor.

This was really easy to implement, as all I had to do was call:

gtk_widget_set_cursor_from_name (compositor, wlr_cursor_shape_v1_name (event->shape));

Greetings

As usual this would not be possible without the help of the community, special thanks to emersion, Matthias and Benjamin for their help and support.

Release Notes

    • Add fractional scale support
    • Add viewporter support
    • Add support for cursor shape
    • Forward keyboard layout changes to clients.
    • Improve virtual size calculation
    • Fix maximized/fullscreen auto resize on compositor size allocation
    • Add support for popups reposition
    • Fix GdkTexture leak

Fixed Issues

    • #5 “Track keymap layout changes”
    • #12 “Support for wlroots-0.19”
    • #13 “Wrong cursor size on client windows”
    • #14 “Support for fractional scaling snap to device grid”
    • #19 Add support for popups reposition
    • #16 Firefox GTK backdrop/shadow not scaled correctly

Where to get it?

Source code lives on GNOME gitlab here

git clone https://gitlab.gnome.org/jpu/casilda.git

Matrix channel

Have any question? come chat with us at #cambalache:gnome.org

Mastodon

Follow me on Mastodon @xjuan to get news related to Casilda and Cambalache development.

Happy coding!

Matthias Klumpp

@ximion

Hello old new “Projects” directory!

If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”

Why?

With the recent 0.20 release of xdg-user-dirs we enabled the “Projects” directory by default. Support for this has already existed since 2007, but was never formally enabled. This closes a more than 11 year old bug report that asked for this feature.

The purpose of the Projects directory is to give applications a default location to place project files that do not cleanly belong into one of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD design or even things like video editing projects, where project files would end up in the “Projects” directory, with output video being more at home in “Videos”.

By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that operate in a “project-centric” manner with mixed media a better default storage location. As of now, those tools either default to the home directory, or they clutter the “Documents” folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.

This sucks, I don’t like it!

As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder, simply delete it! The xdg-user-dirs utility will not try to create it again, and instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your ~/.config/user-dirs.dirs configuration file.

If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the /etc/xdg/user-dirs.defaults file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
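For example (hedged: I’m assuming the variable names follow the same pattern as the existing directories), a per-user override in ~/.config/user-dirs.dirs and a system-wide default in /etc/xdg/user-dirs.defaults might look something like this, with “Code” standing in for whatever location you prefer:

# ~/.config/user-dirs.dirs (per user)
XDG_PROJECTS_DIR="$HOME/Code"

# /etc/xdg/user-dirs.defaults (system wide, relative to each home directory)
PROJECTS=Code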

What else is new?

Besides this change, the 0.20 release of xdg-user-dirs brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions here for the xdg-user-dirs utility, by replacing the shell script with a C binary.

Thanks to everyone who contributed to this release!

GNOME GitLab Git traffic caching

Introduction

One of the most visible signs that GNOME’s infrastructure has grown over the years is the amount of CI traffic that flows through gitlab.gnome.org on any given day. Hundreds of pipelines run in parallel, most of them starting with a git clone or git fetch of the same repository, often at the same commit. All that traffic was landing directly on GitLab’s webservice pods, generating redundant load for work that was essentially identical.

GNOME’s infrastructure runs on AWS, which generously provides credits to the project. Even so, data transfer is one of the largest cost drivers we face, and we have to operate within a defined budget regardless of those credits. The bandwidth costs associated with this Git traffic grew significant enough that for a period of time we redirected unauthenticated HTTPS Git pulls to our GitHub mirrors as a short-term cost mitigation. That measure bought us some breathing room, but it was never meant to be permanent: sending users to a third-party platform for what is essentially a core infrastructure operation is not a position we wanted to stay in. The goal was always to find a proper solution on our own infrastructure.

This post documents the caching layer we built to address that problem. The solution sits between the client and GitLab, intercepts Git fetch traffic, and routes it through Fastly’s CDN so that repeated fetches of the same content are served from cache rather than generating a fresh pack every time. The design went through several iterations — this post presents the final architecture first, then walks through how we got here for readers interested in the evolution.

The problem

The Git smart HTTP protocol uses two endpoints: info/refs for capability advertisement and ref discovery, and git-upload-pack for the actual pack generation. The second one is the expensive one. When a CI job runs git fetch origin main, GitLab has to compute and send the entire pack for that fetch negotiation. If ten jobs run the same fetch within a short window, GitLab does that work ten times.

The tricky part is that git-upload-pack is a POST request with a binary body that encodes what the client already has (have lines) and what it wants (want lines). Traditional HTTP caches ignore POST bodies entirely. Building a cache that actually understands those bodies and deduplicates identical fetches requires some work at the edge.

For a fresh clone the body contains only want lines — one per ref the client is requesting:

0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78
0032want 93e944c9f728a4b9da506e622592e4e3688a805c
0032want ef2cbad5843a607236b45e5f50fa4318e0580e04
...

For an incremental fetch the body is a mix of want lines (what the client needs) and have lines (commits the client already has locally), which the server uses to compute the smallest possible packfile delta:

00a4want 51a117587524cbdd59e43567e6cbd5a76e6a39ff
0000
0032have 8282cff4b31dce12e100d4d6c78d30b1f4689dd3
0032have be83e3dae8265fdc4c91f11d5778b20ceb4e2479
0032have 7d46abdf9c5a3f119f645c8de6d87efffe3889b8
...

The leading four hex characters on each line are the pkt-line length prefix. The server walks back through history from the wanted commits until it finds a common ancestor with the have set, then packages everything in between into a packfile. Two CI jobs running the same pipeline at the same commit will produce byte-for-byte identical request bodies and therefore identical responses — exactly the property a cache can help with.
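For illustration, here is the same framing expressed as a rough Python sketch (not part of the actual system) that splits an upload-pack body into pkt-lines and pulls out the want/have lines; two requests with the same wants and haves produce the same bytes, and therefore the same hash:

def parse_pkt_lines(body: bytes):
    # Each pkt-line starts with a 4-character hex length that includes the prefix itself;
    # "0000" (flush-pkt) and "0001" (delim-pkt) are special packets with no payload.
    lines, offset = [], 0
    while offset < len(body):
        length = int(body[offset:offset + 4], 16)
        if length < 4:
            offset += 4
            continue
        lines.append(body[offset + 4:offset + length].rstrip(b"\n"))
        offset += length
    return lines

body = b"0032want 7d20e995c3c98644eb1c58a136628b12e9f00a78\n0000"
wants = [line for line in parse_pkt_lines(body) if line.startswith(b"want ")]
haves = [line for line in parse_pkt_lines(body) if line.startswith(b"have ")]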

Architecture overview

The current architecture has three components:

  • Fastly as the user-facing CDN for gitlab.gnome.org, with custom VCL that intercepts git-upload-pack traffic, hashes the request body, converts the POST to a GET, and caches the response at edge POPs worldwide
  • OpenResty (Nginx + LuaJIT) running as the origin server, with a Lua script that restores the original POST, checks a Valkey denylist for private repositories, and signals cacheability back to Fastly
  • Valkey + webhook — a small Valkey instance stores a denylist of private repository paths, kept in sync by a webhook service that listens for GitLab project visibility changes
flowchart TD
client["Git client / CI runner"]
edge["Fastly Edge POP (nearest)"]
shield["Fastly Shield POP (IAD)"]
nginx["OpenResty Nginx (origin)"]
lua["Lua: git_upload_pack.lua"]
valkey["Valkey denylist"]
gitlab["GitLab webservice"]
webhook["gitlab-git-cache-webhook"]
gitlab_events["GitLab project events"]
client -- "POST /git-upload-pack" --> edge
edge -- "HIT → serve from edge" --> client
edge -- "MISS → forward to shield" --> shield
shield -- "HIT → return to edge (edge caches)" --> edge
shield -- "MISS → fetch from origin" --> nginx
nginx --> lua
lua -- "authenticated? check denylist" --> valkey
lua -- "denied/error: keep auth, skip cache" --> gitlab
lua -- "allowed: keep auth, signal cacheable" --> gitlab
gitlab -- "packfile response" --> nginx
nginx -- "X-Git-Cacheable: 1 (if allowed)" --> shield
gitlab_events --> webhook
webhook -- "SET/DEL git:deny:" --> valkey

The request flow:

  1. The POST /git-upload-pack arrives at the nearest Fastly edge POP.
  2. VCL checks the body: if Content-Length exceeds 8 KB (the limit of what Fastly can read from req.body), or the body does not contain command=fetch, the request is passed through uncached.
  3. VCL hashes the body with SHA256 to build the cache key, base64-encodes the body into X-Git-Original-Body, and converts the request to GET. If the request carries authentication headers (Authorization, PRIVATE-TOKEN, Job-Token), VCL sets X-Git-Auth-Passthrough to flag it — but the request still enters the cache lookup.
  4. On a cache hit at the edge, the packfile is served immediately — regardless of whether the request is authenticated or not.
  5. On a miss, the request routes to the IAD shield POP. If the shield has it cached, it returns the object and the edge caches it locally.
  6. On a shield miss, the request reaches Nginx at the origin. Lua detects X-Git-Original-Body and restores the POST body. If X-Git-Auth-Passthrough is set, Lua checks the Valkey denylist: if the repo is private (or Valkey is unreachable), the Authorization header is preserved and cacheability is not signaled — the response passes through uncached. If the repo is not on the denylist, Authorization is preserved (internal repos need it for GitLab to return 200) and cacheability is signaled.
  7. For unauthenticated requests (no passthrough flag), Lua strips Authorization and signals cacheability unconditionally — these are by definition accessing public repositories.
  8. The response flows back through the shield and the edge. If X-Git-Cacheable: 1 is present, both nodes cache the response. Subsequent requests — authenticated or not — for the same cache key are served directly from cache.

The VCL layer

The vcl_recv snippet runs at priority 9, before the existing enable_segmented_caching snippet at priority 10 which would otherwise return(pass) for non-asset URLs:

# Snippet git-cache-vcl-recv : 9
# Edge: convert POST to GET, hash body, encode body in header
if (req.url ~ "/git-upload-pack$" && req.request == "POST") {
  if (std.atoi(req.http.Content-Length) > 8192) {
    return(pass);
  }
  if (req.body !~ "command=fetch") {
    return(pass);
  }
  set req.http.X-Git-Cache-Key = "v3:" digest.hash_sha256(req.body);
  set req.http.X-Git-Original-Body = digest.base64(req.body);
  # Flag authenticated requests — they still enter the cache lookup,
  # but on a miss Lua uses this to decide whether to cache the response
  if (req.http.Authorization || req.http.PRIVATE-TOKEN || req.http.Job-Token) {
    set req.http.X-Git-Auth-Passthrough = "1";
  }
  set req.request = "GET";
  set req.backend = F_Host_1;
  if (req.restarts == 0) {
    set req.backend = fastly.try_select_shield(ssl_shield_iad_va_us, F_Host_1);
  }
  return(lookup);
}

# Shield: request already converted to GET by the edge
if (req.http.X-Git-Cache-Key) {
  set req.backend = F_Host_1;
  return(lookup);
}

Authenticated requests — CI runners with Authorization: Basic <gitlab-ci-token:TOKEN>, API clients with PRIVATE-TOKEN or Job-Token — are no longer sent straight to origin. Instead, VCL flags them with X-Git-Auth-Passthrough and lets them enter the cache lookup. On a cache hit, the packfile is served directly from the edge — no origin contact, no credential validation needed, because the cached object can only exist if a previous request already established that the repository is public (see Protecting private repositories). On a cache miss, the flagged request reaches origin where Lua checks the Valkey denylist to decide whether the response should be cached.

The command=fetch filter means only Git protocol v2 fetch commands are cached. The ls-refs command is excluded because its request body is essentially static, so its cache key would never change — caching it with a long TTL would serve stale ref listings after a push. Fetch bodies encode exactly the SHAs the client wants and already has, making them safe to cache indefinitely.

The v3: prefix is a cache version string. Bumping it invalidates all existing cache entries without touching Fastly’s purge API.

The second if block handles the shield. When a cache miss at the edge forwards the request to the shield POP, the shield runs vcl_recv again. At that point the request is already a GET (the edge converted it), so the first block’s req.request == "POST" check will not match. Without the second block, the request would fall through to the enable_segmented_caching snippet, which returns pass for any URL that is not an artifact or archive — effectively preventing the shield from ever caching git traffic.

The vcl_hash snippet overrides the default URL-based hash when a cache key is present:

# Snippet git-cache-vcl-hash : 10
if (req.http.X-Git-Cache-Key) {
  set req.hash += req.http.X-Git-Cache-Key;
  return(hash);
}

The vcl_fetch snippet caches 200 responses that carry the X-Git-Cacheable signal from Nginx:

# Snippet git-cache-vcl-fetch : 100
if (req.http.X-Git-Cache-Key) {
  if (beresp.status == 200 && beresp.http.X-Git-Cacheable == "1") {
    set beresp.http.Surrogate-Key = "git-cache " regsub(req.url.path, "/git-upload-pack$", "");
    set beresp.cacheable = true;
    set beresp.ttl = 30d;
    set beresp.http.X-Git-Cache-Key = req.http.X-Git-Cache-Key;
    unset beresp.http.Cache-Control;
    unset beresp.http.Pragma;
    unset beresp.http.Expires;
    unset beresp.http.Set-Cookie;
    return(deliver);
  }
  set beresp.ttl = 0s;
  set beresp.cacheable = false;
  return(deliver);
}

The Surrogate-Key line tags each cached object with both a global git-cache key and the repository path. This enables targeted purging — a single repository’s cache can be flushed with fastly purge --key "/GNOME/glib", or all git cache at once with fastly purge --key "git-cache".

The 30-day TTL is deliberately long. Git pack data is content-addressed: a pack for a given set of want/have lines will always be the same. As long as the objects exist in the repository, the cached pack is valid. The only case where a cached pack could be wrong is if objects were deleted (force-push that drops history, for instance), which is rare and, on GNOME’s GitLab, made even rarer by the Gitaly custom hooks we run to prevent force-pushes and history rewrites on protected namespaces. In those cases the cache version prefix would force a key change rather than relying on TTL expiry.

The X-Git-Cacheable header is intentionally not unset in vcl_fetch. This is important for the shielding architecture: when the shield caches the object, the stored headers include X-Git-Cacheable: 1. When the edge later fetches this object from the shield, the edge’s own vcl_fetch sees the header and knows it is safe to cache locally. If vcl_fetch stripped the header, the edge would never cache — every request would be a local miss that has to travel back to the shield.

The cleanup happens in vcl_deliver, which runs last before the response reaches the client:

# Snippet git-cache-vcl-deliver : 100
if (req.http.X-Git-Cache-Key) {
  set resp.http.X-Git-Cache-Status = if(fastly_info.state ~ "HIT(?:-|\z)", "HIT", "MISS");
  unset resp.http.X-Git-Original-Body;
  if (!req.http.Fastly-FF) {
    unset resp.http.X-Git-Cacheable;
    unset resp.http.X-Git-Cache-Key;
  }
}

The Fastly-FF check distinguishes between inter-POP traffic (shield-to-edge) and the final client response. Fastly-FF is set when the request comes from another Fastly node. On the shield, where the request came from the edge, internal headers like X-Git-Cacheable and X-Git-Cache-Key are preserved — the edge’s vcl_fetch needs them. On the edge, where the request came from the actual client, those headers are stripped from the final response. Only X-Git-Cache-Status is exposed to clients for observability.

The POST-to-GET conversion

This is probably the most unusual part of the design. Fastly's consistent hashing and shield routing only work for GET requests. POST requests always go straight to origin. Fastly does provide a way to force POST responses into the cache — by returning pass in vcl_recv and setting beresp.cacheable in vcl_fetch — but it is a blunt instrument: there is no consistent hashing, no shield collapsing, and no guarantee that two nodes in the same POP will ever share the cached result.

By converting the POST to a GET in VCL, encoding the body in a header (X-Git-Original-Body), and using a body-derived SHA256 as the cache key, we get consistent hashing and shield-level request collapsing for free. The VCL uses the X-Git-Cache-Key header (not the URL or method) as the cache key, so the GET conversion is invisible to the caching logic.
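To make the conversion concrete, this is roughly what the edge derives from the request body, expressed in Python rather than VCL (a sketch, assuming Fastly's digest.hash_sha256 yields a hex digest and digest.base64 standard Base64):

import base64
import hashlib

def edge_headers(body: bytes) -> dict:
    # Approximation of what the VCL snippet computes from the request body:
    # a body-derived cache key plus the body itself tucked into a header.
    return {
        "X-Git-Cache-Key": "v3:" + hashlib.sha256(body).hexdigest(),
        "X-Git-Original-Body": base64.b64encode(body).decode("ascii"),
    }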

Fastly’s shield feature routes cache misses through a designated shield node before going to origin. When two different edge nodes both get a MISS for the same cache key simultaneously, the shield node collapses them into a single origin request. This is important because without it, a burst of CI jobs fetching the same commit would all miss, all go to origin in parallel, and GitLab would end up generating the same pack multiple times.

Protecting private repositories

Private repository traffic must never be cached — that would mean storing authenticated git content in a third-party cache and serving it to arbitrary clients. The protection relies on two independent layers.

Layer 1: cache population is restricted. The cache can only be populated when Lua signals cacheability via X-Git-Cacheable: 1. Lua only signals cacheability when the request is either unauthenticated (by definition accessing a public repo) or authenticated for a repo that is not on the Valkey denylist. For private repos, Lua does not signal cacheability, so vcl_fetch sets ttl=0 and cacheable=false — the response is delivered but never stored.

Layer 2: the Valkey denylist. A webhook service listens for GitLab project_create and project_update system hooks. When a project’s visibility is set to private (level 0), the webhook sets a git:deny:<path> key in Valkey. When visibility changes to internal (level 10) or public (level 20), the key is removed. A periodic reconciliation job (reconcile.py) syncs the full denylist against the GitLab API to correct any drift from missed events.
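A minimal sketch of what such a webhook handler could look like is below; this is not the actual gitlab-git-cache-webhook code, the payload field names are illustrative only, and the reconciliation job is omitted:

import os
import redis  # the redis client library also speaks to Valkey
from fastapi import FastAPI, Request

app = FastAPI()
valkey = redis.Redis(host=os.getenv("REDIS_HOST", "localhost"),
                     port=int(os.getenv("REDIS_PORT", "6379")))

@app.post("/hook")
async def project_hook(request: Request):
    event = await request.json()
    if event.get("event_name") not in ("project_create", "project_update"):
        return {"status": "ignored"}
    path = event["path_with_namespace"]                # e.g. "GNOME/glib" (field name illustrative)
    key = f"git:deny:{path}"
    if event.get("project_visibility") == "private":   # field name illustrative
        valkey.set(key, "1")                           # private: add to the denylist
    else:
        valkey.delete(key)                             # internal/public: make sure it is not denied
    return {"status": "ok"}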

On a cache miss for an authenticated request, Lua checks the denylist:

  • Repo is on the denylist (private): Authorization is preserved, cacheability is not signaled. The request proxies to GitLab with credentials intact, GitLab validates the token, the response is returned but never cached.
  • Repo is not on the denylist (public/internal): Authorization is preserved (internal repos require it for GitLab to return 200), cacheability is signaled. The response is cached for future requests.
  • Valkey is unreachable or returns an error: treated the same as denied — Authorization is preserved, cacheability is not signaled. This fail-closed design means infrastructure failures result in cache misses, never in data leaks.

The denylist only needs to track private repositories, which are a small fraction of the total on GNOME’s GitLab instance. A private repo’s packfile can never enter the cache through two independent mechanisms: the denylist prevents Lua from signaling cacheability, and even if the denylist were somehow wrong, an unauthenticated request to a private repo returns a 401 from GitLab — which vcl_fetch does not cache (it only caches 200 + X-Git-Cacheable).

The Lua layer

With the VCL handling body hashing, the POST-to-GET conversion, and the cache lookup for all requests, the Lua script runs on cache misses that reach origin. Both authenticated and unauthenticated requests can arrive here. The script’s responsibilities are:

  1. Detect that the request arrived from Fastly with an encoded body (the X-Git-Original-Body header).
  2. Decode and restore the original POST.
  3. For authenticated requests, check the Valkey denylist to determine if the repository is private.
  4. Signal back to Fastly whether the response is safe to cache.

local redis_helper = require("redis_helper")

local redis_host = os.getenv("REDIS_HOST")
local redis_port = os.getenv("REDIS_PORT")

-- Only act on requests that Fastly converted: the original POST body
-- travels in the X-Git-Original-Body header.
local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
if not encoded_body then
  return
end

-- Restore the original POST before the request is proxied to GitLab.
local body = ngx.decode_base64(encoded_body)
ngx.req.read_body()
ngx.req.set_method(ngx.HTTP_POST)
ngx.req.set_body_data(body)
ngx.req.set_header("Content-Length", tostring(#body))
ngx.req.clear_header("X-Git-Original-Body")

if ngx.req.get_headers()["X-Git-Auth-Passthrough"] then
  ngx.req.clear_header("X-Git-Auth-Passthrough")

  -- Authenticated request: derive the repository path from the URL
  -- and check the Valkey denylist.
  local uri = ngx.var.uri
  local repo_path = uri:match("^/(.+)/git%-upload%-pack$")
  if repo_path then
    repo_path = repo_path:gsub("%.git$", "")
  end

  local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)

  if err then
    ngx.log(ngx.WARN, "git-cache: Redis error for ", repo_path, ": ", err,
            " — keeping auth, skipping cache")
  end

  -- Fail closed: a private repo or a Valkey error means no cacheability signal.
  if err or denied then
    return
  end

  ngx.ctx.git_cacheable = true
else
  -- Unauthenticated request: by definition a public repo.
  ngx.req.clear_header("Authorization")
  ngx.ctx.git_cacheable = true
end

The two branches handle the authenticated and unauthenticated paths. When X-Git-Auth-Passthrough is present, the request came from a CI runner or API client. Lua checks the denylist: if the repo is private or Valkey is unreachable, the script returns early — Authorization stays on the request (so GitLab can validate it), and git_cacheable is never set (so the response is not cached). If the repo is not denied, Authorization is preserved and cacheability is signaled. The Authorization header is kept rather than stripped because internal repositories (visibility level 10) require authentication for git operations — stripping it would cause GitLab to return a 401. Public repos work with or without credentials, so keeping the header is safe for both.

For unauthenticated requests (no passthrough flag), Authorization is stripped and cacheability is signaled unconditionally — these are by definition accessing public repositories.

The early return for denied or errored lookups is the fail-closed behavior. The request still proxies to GitLab (the proxy_pass directive in the Nginx location block runs after Lua), but without the cacheable signal, vcl_fetch will not store the response.

The ngx.ctx.git_cacheable flag is picked up by the header_filter_by_lua_block in the Nginx configuration, which translates it into the X-Git-Cacheable: 1 response header that vcl_fetch checks:

location ~ /git-upload-pack$ {
    client_body_buffer_size 5m;
    client_max_body_size 5m;

    access_by_lua_file /etc/nginx/lua/git_upload_pack.lua;

    header_filter_by_lua_block {
        if ngx.ctx.git_cacheable then
            ngx.header["X-Git-Cacheable"] = "1"
        end
    }

    proxy_pass http://gitlab-webservice;
    ...
}

Debugging the rollout

The rollout surfaced a few issues worth documenting for anyone building a similar setup on Fastly.

Shielding introduces a second vcl_recv execution. When the edge forwards a cache miss to the shield, the shield runs the entire VCL pipeline from scratch. The POST-to-GET conversion in vcl_recv checks for req.request == "POST", but on the shield the request is already a GET. Without the fallback if (req.http.X-Git-Cache-Key) block, the shield’s vcl_recv would fall through to the segmented caching snippet and return(pass) — making the shield unable to cache anything.

Response headers must survive the shield-to-edge hop. vcl_fetch and vcl_deliver both run on each node independently. If vcl_fetch on the shield strips a header after caching the object, the stored object will not have that header. When the edge fetches from the shield, the edge’s vcl_fetch will not see it. The solution is to only strip internal headers in vcl_deliver on the final client response, using Fastly-FF to distinguish inter-POP traffic from client traffic.

Fastly’s req.body is limited to 8 KB. VCL can only inspect the first 8192 bytes of a request body. For the vast majority of git fetch negotiations — especially shallow clones and CI pipelines fetching recent commits — the body is well under this limit. Requests with larger bodies (deep fetches with many have lines) fall through to return(pass) and are handled directly by GitLab without caching. This is an acceptable tradeoff: those large-body requests are typically unique negotiations that would not benefit from caching anyway.

Git protocol v1 clients are not cached. The VCL filters on command=fetch, which is a Git protocol v2 construct. Protocol v1 uses a different body format (want/have lines without the command= prefix). Since protocol v2 has been the default since git 2.26 (March 2020), the vast majority of traffic benefits from caching. Protocol v1 clients still work correctly — they simply bypass the cache.

Internal repositories require authentication for git operations. An early version of the Lua script stripped Authorization for any repo not on the denylist, assuming that “not private” meant “accessible without credentials.” Internal repositories (visibility level 10) are not on the denylist — their content is not sensitive — but GitLab still requires authentication for git clone/fetch operations on them. Stripping credentials produced a 401 from GitLab. The fix was to preserve Authorization for all authenticated requests that pass the denylist check, regardless of whether the repo is public or internal. Public repos accept the header harmlessly; internal repos require it.

How we got here

The current architecture is the result of two iterations. The sections above describe the final design; this section documents the path we took to get there.

Iteration 1: Separate CDN service with Lua-driven caching

The first version used a separate Fastly CDN service (cdn.gitlab.gnome.org) as the cache layer, with Nginx doing most of the heavy lifting in Lua:

flowchart TD
client["Git client / CI runner"]
gitlab_gnome["gitlab.gnome.org (Nginx reverse proxy)"]
nginx["OpenResty Nginx"]
lua["Lua: git_upload_pack.lua"]
cdn_origin["/cdn-origin internal location"]
fastly_cdn["Fastly CDN"]
origin["gitlab.gnome.org via its origin (second pass)"]
gitlab["GitLab webservice"]
valkey["Valkey denylist"]
webhook["gitlab-git-cache-webhook"]
gitlab_events["GitLab project events"]
client --> gitlab_gnome
gitlab_gnome --> nginx
nginx --> lua
lua -- "check denylist" --> valkey
lua -- "private repo: BYPASS" --> gitlab
lua -- "public/internal: internal redirect" --> cdn_origin
cdn_origin --> fastly_cdn
fastly_cdn -- "HIT" --> cdn_origin
fastly_cdn -- "MISS: origin fetch" --> origin
origin --> gitlab
gitlab_events --> webhook
webhook -- "SET/DEL git:deny:" --> valkey

In this design, the Lua script did everything: read the POST body, SHA256-hash it to build a cache key, check a Valkey denylist to exclude private repositories, convert the POST to a GET, encode the body in a header, and perform an internal redirect to a /cdn-origin location that proxied to the CDN. On a cache miss, the CDN would fetch from gitlab.gnome.org directly (the “second pass”), where Lua would detect the origin fetch, decode the body, restore the POST, and proxy to GitLab.

Private repositories were protected by a denylist stored in Valkey. A small FastAPI webhook service (gitlab-git-cache-webhook) listened for GitLab system hooks on project_create and project_update events, maintaining git:deny:<path> keys for private repositories (visibility level 0). Internal repositories (level 10) were treated the same as public (level 20) since they are accessible to any authenticated user on the instance.

The Lua script for this design was substantially more complex:

local resty_sha256 = require("resty.sha256")
local resty_str = require("resty.string")
local redis_helper = require("redis_helper")

local redis_host = os.getenv("REDIS_HOST") or "localhost"
local redis_port = os.getenv("REDIS_PORT") or "6379"

-- Second pass: request arriving from CDN origin fetch.
if ngx.req.get_headers()["X-Git-Cache-Internal"] then
  local encoded_body = ngx.req.get_headers()["X-Git-Original-Body"]
  if encoded_body then
    ngx.req.read_body()
    local body = ngx.decode_base64(encoded_body)
    ngx.req.set_method(ngx.HTTP_POST)
    ngx.req.set_body_data(body)
    ngx.req.set_header("Content-Length", tostring(#body))
    ngx.req.clear_header("X-Git-Original-Body")
  end
  return
end

And on the first pass, it handled hashing, denylist checks, and the CDN redirect:

if not body:find("command=fetch", 1, true) then
  ngx.header["X-Git-Cache-Status"] = "BYPASS"
  return
end

local sha256 = resty_sha256:new()
sha256:update(body)
local body_hash = resty_str.to_hex(sha256:final())
local cache_key = "v2:" .. repo_path .. ":" .. body_hash

local denied, err = redis_helper.is_denied(redis_host, redis_port, repo_path)
if denied then return end

ngx.req.clear_header("Authorization")
ngx.req.set_header("X-Git-Original-Body", ngx.encode_base64(body))
ngx.req.set_method(ngx.HTTP_GET)
ngx.req.set_body_data("")
return ngx.exec("/cdn-origin" .. uri)

The CDN’s VCL was relatively simple — it used X-Git-Cache-Key for the hash, routed through a shield, and cached 200 responses for 30 days.

This architecture worked, but it had two significant limitations that led to the current design.

Iteration 2: Edge caching with CI runner participation

The first problem with the separate CDN service was geographic. Nginx runs in AWS us-east-1, so from Fastly’s perspective the only client of the CDN was that single instance in Virginia. Every request entered through the IAD POP, which meant the CDN’s edge POPs around the world were never populated. A CI runner in Europe would have its request travel from a European Fastly POP to IAD, then to Nginx, then back to Fastly IAD, and then all the way back — crossing the Atlantic twice for every cache miss.

The fix was to eliminate the separate CDN service and move all the caching logic into the gitlab.gnome.org Fastly service itself. The key insight was that the POST-to-GET conversion and body hashing could happen in Fastly’s VCL rather than in Lua — Fastly provides digest.hash_sha256() and digest.base64() functions that operate directly on req.body. By doing the conversion at the CDN edge, every POP in the network became a potential cache node for git traffic.

The second problem was that the original denylist approach had two flaws. First, its error handling was fail-open: a Valkey connection error would cause the Lua script to assume the repo was public and strip credentials — the wrong default. Second, even after briefly replacing the denylist with a simple VCL auth bypass (return(pass) for any request with Authorization), CI runners were left completely uncached. GitLab CI always injects a CI_JOB_TOKEN into every job, and the runner authenticates with Authorization: Basic <gitlab-ci-token:TOKEN> regardless of whether the repository is public or private. With the auth bypass, every CI clone skipped the cache entirely — safe, but it left the biggest source of redundant traffic unserved.

The current design solves both problems. VCL flags authenticated requests with X-Git-Auth-Passthrough instead of bypassing the cache, letting them participate in cache lookups. On a hit, the cached packfile is served immediately. On a miss, the request reaches Lua at origin, where the flag triggers a denylist check against Valkey — the same denylist and webhook infrastructure from iteration 1, re-deployed with one critical change: fail-closed error handling. A Valkey error or missing connection causes Lua to preserve Authorization and skip cacheability signaling. The request still works (GitLab validates the token and serves the packfile), but the response is not cached. Infrastructure failures result in cache misses, never in data leaks.

The denylist only tracks private repositories (visibility level 0), which are a small fraction of the total on GNOME’s GitLab. Public and internal repositories pass the denylist check, and Lua signals cacheability while preserving the Authorization header — internal repos require it for GitLab to return 200, and public repos accept it harmlessly.

Conclusions

The system has been running in production since April 2026 and has gone through two iterations to reach its current form. Packfiles are cached at Fastly edge POPs worldwide — a CI runner in Europe gets a cache hit served from a European POP rather than making a round trip to the US East coast.

The moving parts are Fastly’s VCL, an OpenResty Nginx instance with a ~30-line Lua script, a Valkey instance storing the private repository denylist, and a small webhook service that keeps the denylist synchronized with GitLab. Private repositories are protected by two independent layers: the Valkey denylist (which prevents cacheability signaling) and GitLab’s own authentication (which rejects unauthenticated access).

If something goes wrong with the cache layer, requests fall through to GitLab directly — the same path they took before caching existed. There is no failure mode where caching breaks git operations. This also means we don’t redirect any traffic to github.com anymore.

That should be all for today, stay tuned!

Jussi Pakkanen

@jpakkane

Multi merge sort, or when optimizations aren't

In our previous episode we wrote a merge sort implementation that runs a bit faster than the one in libstdc++. The question then becomes, could it be made even faster. If you go through the relevant literature, one potential improvement is to do a multiway merge. That is, instead of merging two arrays into one, you merge four into one using, for example, a priority queue.
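To make the idea concrete, here is a k-way merge built on a binary heap, sketched in Python with heapq; the actual implementation works on C++ arrays, this only illustrates the concept:

import heapq

def multiway_merge(lists):
    # Merge any number of sorted lists using a priority queue (binary heap).
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, which, idx = heapq.heappop(heap)   # about log(#lists) comparisons
        out.append(value)
        if idx + 1 < len(lists[which]):           # refill from the list we just popped from
            heapq.heappush(heap, (lists[which][idx + 1], which, idx + 1))
    return out

print(multiway_merge([[1, 4, 9], [2, 3, 8], [5, 7], [0, 6]]))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]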

This seems like a slam dunk for performance.

  • Doubling the number of arrays to merge at a time halves the number of total passes needed
  • The priority queue has a known static maximum size, so it can be put on the stack, which is guaranteed to be in the cache all the time
  • Processing an element takes only log(#lists) comparisons
Implementing multimerge was conceptually straightforward but getting all the gritty details right took a fair bit of time. Once I got it working the end result was slower. And not by a little, either, but more than 30% slower. Trying some optimizations made it a bit faster but not noticeably so.

Why is this so? Maybe there are bugs that cause it to do extra work? Assuming that is not the case, what is actually going on? Measuring seems to indicate that a notable fraction of the runtime is spent in the priority queue code. Beyond that, I learned very little to nothing.

The best hypothesis I could come up with has to do with the number of comparisons made. A classical merge sort does two if statements per output element: one to determine which of the two lists has the smaller element at the front, and one to see whether removing that element exhausted its list. The former is basically random and the latter is always false except when the last element is processed. This amounts to roughly 0.5 mispredicted branches per element per round.

A priority queue has to do a bunch more work to preserve the heap property. The first iteration needs to check the root and its two children. That's three value comparisons plus two checks for whether the children actually exist. Those are much less predictable than the comparisons in merge sort. Computers are really efficient at doing simple things, so it may be that the additional bookkeeping is so expensive that it negates the advantage of fewer rounds.

Or maybe it's something else. Who's to say? Certainly not me. If someone wants to play with the code, the implementation is here. I'll probably delete it at some point as it does not have really any advantage over the regular merge sort.

Steven Deobald

@steven

End of 10 Handout

There was a silly little project I’d tried to encourage many folks to attempt last summer. Sri picked it up back in September and after many months, I decided to wrap it up and publish what’s there.

The intention is a simple, 2-sided A4 that folks can print and give out at repair cafes, like the End of 10 event series. Here’s the original issue, if you’d like to look at the initial thought process.

When I hear fairly technical folks talk about Linux in 2026, I still consistently hear things like “I don’t want to use the command line.” The fact that Spotify, Discord, Slack, Zoom, and Steam all run smoothly on Linux is far removed from these folks’ conception of the Linux desktop they might have formed back in 2009. Most people won’t come to Linux because it’s free of ✨shlop✨ and ads — they’re accustomed to choking on that stuff. They’ll come to Linux because they can open a spreadsheet for free, play Slay The Spire 2, or install Slack even though they promised themselves they wouldn’t use their personal computer for work.

The GNOME we all know and love is one we take for granted… and the benefits of which we assume everyone wants. But the efficiency, the privacy, the universality, the hackability, the gorgeous design, and the lack of ads? All these things are the icing on the cake. The cake, like it or not, is installing Discord so you can join the Sunday book club.

Here’s the A4. And here’s a snippet:

 

An A4 snippet including "where's the start menu?", "where are my exes?", and "how do I install programs?"

 

If you try this out at a local repair cafe, I’d love to know which bits work and which don’t. Good luck! ❤

 

Sjoerd Stendahl

@sstendahl

Announcing the upcoming Graphs 2.0

It’s been a while since we last shared a major update of Graphs. We’ve had a few minor releases, but the last time we had a substantial feature update was over two years ago.

This does not mean that development has stalled, quite the contrary. But we’ve been working hard on some major changes that took some time to get completely right. Now, after a long development cycle, we’re finally getting close enough to a release to be able to announce an official beta period. In this blog post, I’ll try to summarize most of the changes in this release.

New data types

In previous versions of Graphs, all data types were treated equally. This meant that an equation was actually just regular data, generated when loading. Which is fine, but it also means that the span of the equation is limited, the equation cannot be changed afterward, and operations on the equation will not be reflected in the equation name. In Graphs 2.0, we have three distinct data types: Datasets, Generated Datasets and Equations.

Datasets are the regular, imported data that you all know and love. Nothing really has changed here. Generated Datasets are essentially the same as regular datasets, but the difference is that these datasets are generated from an equation. They work the same as regular datasets, but for generated datasets you can change the equation, step size and the limits after creating the item. Finally, the major new addition is the concept of equations. As the name implies, equations are generated based on an equation you enter, but they span an infinite range. Furthermore, operations you perform on equations are done analytically. Meaning if you translate the equation `y = 2x + 3` by 3 in the y-direction, it will change to `y = 2x + 6`. If you take its derivative, the equation will change to `y = 2`, et cetera. This is a long-requested feature, and has been made possible thanks to the magic of sympy and some trickery on the canvas. Below, there’s a video that demonstrates these three data types.
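Under the hood this is exactly the kind of manipulation sympy makes straightforward; a tiny illustration (not Graphs’ actual code):

import sympy as sp

x = sp.symbols("x")
equation = 2 * x + 3

translated = equation + 3            # translate by 3 in the y-direction
derivative = sp.diff(translated, x)  # analytic derivative

print(translated)   # 2*x + 6
print(derivative)   # 2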

Revamped Style Editor

We have redesigned the style editor, which now shows a live preview of the edited styles. This has been a pain point in the past: when you edited styles, you could not see how they actually affected the canvas. Now the style editor immediately tells you how it will affect a canvas, making it much easier to tweak the style exactly to your preferences.

We have also added the ability to import styles. Since Graphs styles are based on matplotlib styles, most features from a matplotlib style generally work. Similarly, you can now export your styles as well, making it easier to share your style or simply to move it to a different machine. Finally, the style editor can be opened independently of Graphs. By opening a Graphs style from your file explorer, you can change the style without having to open Graphs.

We also added some new options, such as the ability to style the new error bars, as well as the option to draw tick labels (i.e. the values) on all axes that have ticks.

A screenshot of the Graphs style editor, on the left you can see the different settings as in the previous version. On the right you can see the live preview
The revamped style editor

Improved data import

We have completely reworked the way data is imported. Under the hood, the import code is now completely modular, making it possible to add new parsers without having to touch the rest of the code. Thanks to this rework, we have added support for spreadsheets (LibreOffice .ods and Microsoft Office .xlsx) and for SQLite database files. The UI automatically updates accordingly. For example, for spreadsheets, columns are imported by their column name (the letter) instead of an index, while SQLite imports show the tables present in the database.

The new import dialog for Graphs. You can see how multiple different types of items are about to be imported, as well as new settings
The new import dialog

Furthermore, the import dialog has been improved. It is now possible to add multiple files at once, or to import multiple datasets from the same file. Settings can be adjusted for each dataset individually. And you can even import from just a single column. We also added the ability to import error bars on either axis, and added some pop-up buttons that explain certain settings.

Error bars

I mentioned this in the previous paragraph, but as it’s a feature that’s been requested multiple times I thought it’d be good to state it explicitly as well: we have added support for error bars. Error bars can easily be set in the import dialog, and turned on and off for each axis when editing the item.

Singularity handling

The next version of Graphs will also finally handle singularities properly, so equations that have infinite values in them will be rendered as they should be. In the old version, for equations whose values go to infinity and then flip sign, the line was drawn from the maximum value to the minimum value, even though there are no values in between. Furthermore, since we render a finite number of data points, the lines didn’t go up to infinity either, giving misleading graphs.

This is neatly illustrated in the pictures below. The values go all the way up to infinity like they should, and Graphs neatly knows that the line is not continuous, so it does not try to draw a straight line going from plus to minus infinity.

The old version of Graphs trying to render tan(x). Lines don’t go all the way to plus/minus infinity, and they also draw a line between the high and low values.
The upcoming version of Graphs, where equations such as tan(x) are drawn properly.

Reworked Curve fitting

The curve fitting has been reworked completely under the hood. While the changes may not be that obvious to a user, the code has basically been replaced entirely. The most important change is that the confidence band is now calculated correctly using the delta method. Previously a naive approach was used, where the limits were calculated using the standard deviation of each parameter; this does not hold up well in most cases. The parameter values are also no longer rounded in the new equation names (e.g. 421302 used to be rounded to 421000). More useful error messages are provided when things go wrong, custom equations now have an apply button which makes entering new equations smoother, the root mean squared error has been added as a second goodness-of-fit measure, and you can now check out the residuals of your fit. The residuals are useful to check whether your fit is physically sensible: a good fit will show residuals scattered randomly around zero with no visible pattern, while a systematic pattern in the residuals, such as a curve or a trend, suggests that the chosen model may not be appropriate for the data.
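As an aside, computing residuals after a fit takes only a couple of lines with numpy and scipy; a sketch of what checking them might look like (this is not Graphs’ code, just an illustration with made-up data):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * x + b

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)  # synthetic noisy data

popt, pcov = curve_fit(model, x, y)
residuals = y - model(x, *popt)
rmse = np.sqrt(np.mean(residuals ** 2))  # root mean squared error as a goodness-of-fit measure
# For a good fit the residuals scatter randomly around zero with no visible pattern.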

The old version of Graphs with the naive calculation of the confidence band
The new version of Graphs with the proper calculation of the confidence band.

UI changes

We’ve tweaked the UI a bit all over the place, but one particular change worth highlighting is that we have moved the item and figure settings to the sidebar. The reason for this is that these settings typically affect the canvas, so you don’t want to lose sight of how your changes affect the canvas while you’re making them. For example, when setting the axes limits, you want to see how your graph looks with the new limits; a window obstructing the view does not help.

Another nice addition is that you can now simply click on a part of the canvas, such as the limits, and it will immediately bring you to the figure settings with the relevant field highlighted. See video below.

Mobile screen support

With the upcoming release, we finally have full support for mobile devices. Here’s a quick demonstration on an old OnePlus 6:

Figure exporting

One nice addition is the improved figure export. Instead of simply taking the same canvas as you see on the screen, you can now explicitly set a certain resolution. This is vital if you have a lot of figures in the same work, or need to publish your figures in academic journals, and you need consistency both in size and in font sizes. Of course, you can still use the previous setting and have the same size as in the application.

The new export figure dialog

More quality of life changes

The above is just a highlight of some of the major feature updates, but there’s a large number of smaller features that we added as well. Here’s a rapid-fire list of other niceties:

    • Multiple instances of Graphs can now be open at the same time
    • Data can now be imported by drag-and-drop
    • The subtitle finally shows the full file path, even in the isolated Flatpak
    • Custom transformations have gotten more powerful with the addition of new variables to use
    • Graphs now inhibits the session when unsaved data is still open
    • Added support for base-2 logarithmic scaling
    • Warnings are now displayed when trying to open a project from a beta version

And a whole bunch of bug-fixes, under-the-hood changes, and probably some features I have forgotten about. Overall, it’s our biggest update yet by far, and I am excited to finally be able to share the update soon.

As always, thanks to everyone who has been involved in this version. Graphs is not a one-person project. The bulk of the maintenance is done by me and Christoph, the other maintainer. And of course, we should thank the entire community, both within the GNOME project (such as help from the design team and the translation team) and the outsiders who come with feedback, reports or plain suggestions.

Getting the beta

This release is still in beta while we are ironing out the final issues. The expected release date is somewhere in the second week of May. In the meantime, feel free to test the beta. We are very happy with any feedback, especially in this period!

You can get the beta directly from Flathub. First you need to add the Flathub beta remote:

flatpak remote-add --if-not-exists flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo

Then, you can install the application:

flatpak install flathub-beta se.sjoerd.Graphs

To run the beta version by default, the following command can be used:

sudo flatpak make-current se.sjoerd.Graphs beta

Note that sudo is necessary here, as it sets the current branch at the system level. To install this on a per-user basis, the --user flag can be used in the previous commands. To switch back to the stable version, simply run the above command replacing beta with stable.

The beta branch should get updated somewhat regularly. If you don’t feel like using the flathub-beta remote, or want the latest build, you can also get the release from the GitLab page and build it in GNOME Builder.

Adrien Plazas

@Kekun

Monster World IV: Disassembly and Code Analysis

This winter I was bored and needed something new, so I spent lots of my free time disassembling and analysing Monster World IV for the SEGA Mega Drive. More specifically, I looked at the 2008 Virtual Console revision of the game, which adds an English translation to the original 1994 release.

My long term goal would be to fully disassemble and analyse the game, port it to C or Rust as I do, and then port it to the Game Boy Advance. I don’t have a specific reason to do that, I just think it’s a charming game from a dated but charming series, and I think the Monster World series would be a perfect fit on the Game Boy Advance. For a long time, I have also wanted to experiment with disassembling or decompiling code, understanding what doing so implies, understanding how retro computing systems work, and understanding the inner workings of a game I enjoy. Also, there is no publicly available disassembly of this game as far as I know.

As spring is coming, I sense my focus shifting to other projects, but I don’t want this work to be lost forever and for everyone, especially not for future me. Hence, I decided to publish what I have here, so I can come back to it later or so it can benefit someone else.

First, here is the Ghidra project archive. It’s the first time I have used Ghidra and I’m certain I did plenty of things wrong; feedback is happily welcome! While I tried to rename things as my understanding of the code grew, it is still quite a mess of clashing naming conventions, and I’m certain I got plenty of things wrong there too.

Then, here is the Rust-written data extractor. It documents how some systems work, both as code and as actual documentation. It mainly extracts and documents graphics and their compression methods, glyphs and their compression methods, character encodings, and dialog scripts. Similarly, I’m not a Rust expert; I did my best, but I’m certain there is room for improvement, and everything was constantly changing anyway.

There is more information that isn’t documented and is just floating in my head, such as how the entity system works, but I have yet to refine my understanding of it. The same goes for the optimizations allowed by coding in assembly, such as using specific registers for commonly used arguments. Hopefully I will come back to this project and complete it, at least when it comes to disassembling and documenting the game’s code.

Felipe Borges

@felipeborges

RHEL 10 (GNOME 47) Accessibility Conformance Report

Red Hat just published the Accessibility Conformance Report (ACR) for Red Hat Enterprise Linux 10.

Accessibility Conformance Reports basically document how our software measures up against accessibility standards like WCAG and Section 508. Since RHEL 10 is built on GNOME 47, this report is a good look at how our stack handles accessibility, from screen readers to keyboard navigation.

Getting a desktop environment to meet these requirements is a huge task and it’s only possible because of the work done by our community in projects like: Orca, GTK, Libadwaita, Mutter, GNOME Shell, core apps, etc…

Kudos to everyone in the GNOME project that cares about improving accessibility. We all know there’s a long way to go before desktop computing is fully accessible to everyone, but we are surely working on that.

If you’re curious about the state of accessibility in the 47 release or how these audits work, you can find the full PDF here.

Huion devices in the desktop stack

This post attempts to explain how Huion tablet devices currently integrate into the desktop stack. I'll touch a bit on the Huion driver and on OpenTabletDriver, but primarily this explains the intended integration[1]. While I have access to some Huion devices and have seen reports from others, there are likely devices that are slightly different. Huion's vendor ID is also used by other devices (UCLogic and Gaomon) so this applies to those devices as well.

This post was written without AI support, so any errors are organic, artisanal, hand-crafted ones. Enjoy.

The graphics tablet stack

First, a short overview of the ideal graphics tablet stack in current desktops. At the bottom is the physical device which contains a significant amount of firmware. That device provides something resembling the HID protocol over the wire (or Bluetooth) to the kernel. The kernel typically handles this via the generic HID drivers [2] and provides us with a /dev/input/event evdev node, ideally one for the pen (and any other tool) and one for the pad (the buttons/rings/wheels/dials on the physical tablet). libinput then interprets the data from these event nodes, passes them on to the compositor which then passes them via Wayland to the client. Here's a simplified illustration of this:

Unlike the X11 API, libinput's API works on both a per-tablet and a per-tool basis. In other words, when you plug in a tablet you get a libinput device that has a tablet tool capability and (optionally) a tablet pad capability. But the tool will only show up once you bring it into proximity. Wacom tools have sufficient identifiers that we can a) know what tool it is and b) get a unique serial number for that particular device. This means you can, if you wanted to, track your physical tool as it is used on multiple devices. No-one [3] does this but it's possible. More interesting is that because of this you can also configure the tools individually, different pressure curves, etc. This was possible with the xf86-input-wacom driver in X but only with some extra configuration; libinput provides/requires this as the default behaviour.

The most prominent case for this is the eraser which is present on virtually all pen-like tools though some will have an eraser at the tail end and others (the numerically vast majority) will have it hardcoded on one of the buttons. Changing to eraser mode will create a new tool (the eraser) and bring it into proximity - that eraser tool is logically separate from the pen tool and can thus be configured differently. [4]

Another effect of this per-tool behaviour is also that we know exactly what a tool can do. If you use two different styli with different capabilities (e.g. one with tilt and 2 buttons, one without tilt and 3 buttons), they will have the right bits set. This requires libwacom - a library that tells us, simply: any tool with id 0x1234 has N buttons and capabilities A, B and C. libwacom is just a bunch of static text files with a C library wrapped around those. Without libwacom, we cannot know what any individual tool can do - the firmware and kernel always expose the capability set of all tools that can be used on any particular tablet. For example: Wacom's devices support an airbrush tool so any tablet plugged in will announce the capabilities for an airbrush even though >99% of users will never use an airbrush [5].

The compositor then takes the libinput events, modifies them (e.g. pressure curve handling is done by the compositor) and passes them via the Wayland protocol to the client. That protocol is a pretty close mirror of the libinput API so it works mostly the same. From then on, the rest is up to the application/toolkit.

Notably, libinput is a hardware abstraction layer and conversion of hardware events into others is generally left to the compositor. IOW if you want a button to generate a key event, that's done either in the compositor or in the application/toolkit. But the current versions of libinput and the Wayland protocol do support all hardware features we're currently aware of: the various stylus types (including Wacom's lens cursor and mouse-like "puck" devices) and buttons, rings, wheels/dials, and touchstrips on pads. We even support the rather once-off Dell Canvas Totem device.

Huion devices

Huion's devices are HID compatible which means they "work" out of the box but they come in two different modes, let's call them firmware mode and tablet mode. Each tablet device pretends to be three HID devices on the wire and depending on the mode some of those devices won't send events.

Firmware mode

This is the default mode after plugging the device in. Two of the HID devices exposed look like a tablet stylus and a keyboard. The tablet stylus is usually correct (enough) to work OOTB with the generic kernel drivers, it exports the buttons, pressure, tilt, etc. The buttons and strips/wheels/dials on the tablet are configured to send key events. For example, the Inspiroy 2S I have sends b/i/e/Ctrl+S/space/Ctrl+Alt+z for the buttons and the roller wheel sends Ctrl-/Ctrl= depending on direction. The latter are often interpreted as zoom in/out so hooray, things work OOTB. Other Huion devices have similar bindings, there is quite some overlap but not all devices have exactly the same key assignments for each button. It does of course get a lot more interesting when you want a button to do something different - you need to remap the key event (ideally without messing up your key map lest you need to type an 'e' later).
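If you want to see those key events for yourself, watching the tablet's keyboard event node with python-evdev works; a sketch, with a hypothetical event node path:

from evdev import InputDevice, categorize, ecodes

pad = InputDevice("/dev/input/event17")   # hypothetical path of the tablet's "keyboard" HID device
print(pad.name)

for event in pad.read_loop():
    if event.type == ecodes.EV_KEY:
        print(categorize(event))          # e.g. KEY_LEFTCTRL then KEY_S for a button bound to Ctrl+S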

The userspace part is effectively the same, so here's a simplified illustration of what happens in kernel land:

Any vendor-specific data is discarded by the kernel (but in this mode that HID device doesn't send events anyway).

Tablet mode

If you read a special USB string descriptor from the English language ID, the device switches into tablet mode. Once in tablet mode, the HID tablet stylus and keyboard devices will stop sending events and instead all events from the device are sent via the third HID device which consists of a single vendor-specific report descriptor (read: 11 bytes of "here be magic"). Those bits represent the various features on the device, including the stylus features and all pad features as buttons/wheels/rings/strips (and not key events!). This is the mode we want in order to handle the tablet properly. The kernel's hid-uclogic driver switches into tablet mode for supported devices; in userspace you can use e.g. huion-switcher. The device cannot be switched back to firmware mode but will return to firmware mode once unplugged.

Once we have the device in tablet mode, we can get true tablet data and pass it on through our intended desktop stack. Alas, like ogres there are layers.

hid-uclogic and udev-hid-bpf

Historically and thanks in large parts to the now-discontinued digimend project, the hid-uclogic kernel driver did do the switching into tablet mode, followed by report descriptor mangling (inside the kernel) so that the resulting devices can be handled by the generic HID drivers. The more modern approach we are pushing for is to use udev-hid-bpf which is quite a bit easier to develop for. But both do effectively the same thing: they overlay the vendor-specific data with a normal HID report descriptor so that the incoming data can be handled by the generic HID kernel drivers. This will look like this:

Notable here: the stylus and keyboard may still exist and get event nodes but never send events[6], but the uclogic/BPF-enabled device will provide proper stylus/pad event nodes that can be handled by libinput (and thus the rest), with raw hardware data where buttons are buttons.

Challenges

Because in true manager speak we don't have problems, just challenges. And oh boy, we collect challenges as if we were organising the Olympics.

hid-uclogic and libinput

First and probably most embarrassing is that hid-uclogic has a different way of exposing event nodes than what libinput expects. This is largely my fault for having focused on Wacom devices and internalized their behaviour for long years. The hid-uclogic driver exports the wheels and strips on separate event nodes - libinput doesn't handle this correctly (or at all). That'd be fixable, but the compositors also don't really expect this, so there's a bit more work involved. The immediate effect is that those wheels/strips will likely be ignored and not work correctly. Buttons and pens work.

udev-hid-bpf and huion-switcher

hid-uclogic being a kernel driver has access to the underlying USB device. The HID-BPF hooks in the kernel currently do not, so we cannot switch the device into tablet mode from a BPF, we need it in tablet mode already. This means a userspace tool (read: huion-switcher) triggered via udev on plug-in and before the udev-hid-bpf udev rules trigger. Not a problem but it's one more moving piece that needs to be present (but boy, does this feel like the unix way...).

Huion's precious product IDs

By far the most annoying part about anything Huion is that until relatively recently (I don't have a date but maybe until 2 years ago) all of Huion's devices shared the same few USB product IDs. For most of these devices we worked around it by matching on device names but there were devices that had the same product id and device name. At some point libwacom and the kernel and huion-switcher had to implement firmware ID extraction and matching so we could differentiate between devices with the same 256c:006d USB IDs. Luckily this seems to be in the past now with modern devices getting new PIDs for each individual device. But if you have an older device, expect difficulties and, worse, things to potentially break after firmware updates when/if the firmware identification string changes. udev-hid-bpf (and uclogic) rely on the firmware strings to identify the device correctly.

edit: and of course less than 24h after posting this I process a bug report about two completely different new devices sharing one of the product IDs

udev-hid-bpf and hid-uclogic

Because we have a changeover from the hid-uclogic kernel driver to the udev-hid-bpf files there are rough edges on "where does this device go". The general rule is now: if it's not a shared product ID (see above) it should go into udev-hid-bpf and not the uclogic driver. Easier to maintain, much more fire-and-forget. Devices already supported by udev-hid-bpf will remain there, we won't implement BPFs for those (older) devices, doubly so because of the aforementioned libinput difficulties with some hid-uclogic features.

Reverse engineering required

The newer tablets are always slightly different, so we basically need to reverse-engineer each tablet to get it working. That's common enough for any device, but we do rely on volunteers to do this. Mind you, the udev-hid-bpf approach is much simpler than doing it in the kernel, much of it is now copy-paste, and I've even had quite some success getting e.g. Claude Code to spit out a 90% correct BPF on its first try. The advantage of our approach of changing the report descriptor is that once it's done it's done forever; there is no maintenance required, because it's a static array of bytes that never changes.

Plumbing support into userspace

Because we're abstracting the hardware, userspace needs to be fully plumbed. This was a problem last year, for example, when we (slowly) got support for relative wheels into libinput, then Wayland, then the compositors, then the toolkits, to finally make it available to applications (of which I think none so far use the wheels). Depending on how fast your distribution moves, this may mean that support is months or years off even when everything has been implemented. On the plus side, these new features tend to only appear once every few years. Nonetheless, it's not hard to see why the "just send Ctrl=, that'll do" approach is preferred by many users over "probably everything will work in 2027, I'm sure".

So, what stylus is this?

A currently unsolved problem is the lack of tool IDs on all Huion tools. We cannot know if the tool used is the two-button + eraser PW600L, the three-button-one-is-an-eraser-button PW600S, or the two-button PW550 (I don't know if it's really 2 buttons or 1 button + eraser button). We always had this problem with e.g. the now quite old Wacom Bamboo devices, but those pens all had the same functionality so it just didn't matter. It would matter less if the various pens only worked on the device they ship with, but it's apparently quite possible to use a 3-button pen on a tablet that shipped with a 2-button pen OOTB. This is not difficult to solve (pretend to support all possible buttons on all tools) but it's frustrating because it removes a bunch of UI niceties that we've had for years - such as the pen settings only showing buttons that actually exist. Anyway, a problem currently in the "how I wish there was time" basket.

Summary

Overall, we are in an ok state, but not as good as we are for Wacom devices. The lack of tool IDs is the only thing not fixable without Huion changing the hardware[7]. The delay between a new device release and driver support really just depends on one motivated person reverse-engineering it (our BPFs can work across kernel versions and you can literally download them from a successful CI pipeline). The hid-uclogic split should become less painful over time as the devices with shared USB product IDs age into landfill, and even more so if libinput gains support for the separate event nodes for wheels/strips/... (there is currently no plan and I'm somewhat questioning whether anyone really cares). But other than that, our main feature gap is really much more flexible configuration of buttons/wheels/... in all compositors - having that would likely make the requirement for OpenTabletDriver and the Huion driver disappear.

OpenTabletDriver and Huion's own driver

The final topic here: what about the existing non-kernel drivers?

Both of these are userspace HID input drivers that use the same approach: read from a /dev/hidraw node, create a uinput device and pass events back. On the plus side this means you can do literally anything the input subsystem supports, at the cost of a context switch for every input event. Again, a diagram of how this looks (mostly below userspace):

Note how the kernel's HID devices are not exercised here at all because we parse the vendor report, create our own custom (separate) uinput device(s) and then basically re-implement the HID to evdev event mapping. This allows for great flexibility (and control, hence the vendor drivers are shipped this way) because any remapping can be done before you hit uinput. I don't immediately know whether OpenTabletDriver switches to firmware mode or maps the tablet mode but architecturally it doesn't make much difference.
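
To make that concrete, here is a minimal sketch (emphatically not OpenTabletDriver's or Huion's actual code, and with error handling omitted) of such a hidraw-to-uinput loop; the device paths, report layout and axis ranges are made up:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/uinput.h>

int main(void)
{
    /* the vendor report source and our synthetic output device */
    int hidraw = open("/dev/hidraw0", O_RDONLY);             /* path illustrative */
    int uinput = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

    /* declare what our fake tablet will emit */
    ioctl(uinput, UI_SET_EVBIT, EV_KEY);
    ioctl(uinput, UI_SET_KEYBIT, BTN_TOOL_PEN);
    ioctl(uinput, UI_SET_EVBIT, EV_ABS);
    ioctl(uinput, UI_SET_ABSBIT, ABS_X);
    ioctl(uinput, UI_SET_ABSBIT, ABS_Y);
    struct uinput_abs_setup abs = { .code = ABS_X, .absinfo = { .maximum = 32767 } };
    ioctl(uinput, UI_ABS_SETUP, &abs);
    abs.code = ABS_Y;
    ioctl(uinput, UI_ABS_SETUP, &abs);

    struct uinput_setup setup = { .id = { .bustype = BUS_USB }, .name = "Example Tablet" };
    ioctl(uinput, UI_DEV_SETUP, &setup);
    ioctl(uinput, UI_DEV_CREATE);

    unsigned char report[64];
    while (read(hidraw, report, sizeof(report)) > 0) {
        /* decode the vendor report -- byte offsets invented for this sketch */
        struct input_event ev[3] = {
            { .type = EV_ABS, .code = ABS_X, .value = report[2] | report[3] << 8 },
            { .type = EV_ABS, .code = ABS_Y, .value = report[4] | report[5] << 8 },
            { .type = EV_SYN, .code = SYN_REPORT },
        };
        write(uinput, ev, sizeof(ev));   /* one context switch per batch of events */
    }

    ioctl(uinput, UI_DEV_DESTROY);
    return 0;
}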

From a security perspective: having a userspace driver means you either need to run that driver daemon as root, or (in the case of OpenTabletDriver at least) you need to allow uaccess to /dev/uinput, usually via udev rules. Once those are installed, anything can create uinput devices, which is a risk - how much of one is up for interpretation.

[1] As is so often the case, even the intended state does not necessarily spark joy
[2] Again, we're talking about the intended case here...
[3] fsvo "no-one"
[4] The xf86-input-wacom driver always initialises a separate eraser tool even if you never press that button
[5] For historical reasons those are also multiplexed so getting ABS_Z on a device has different meanings depending on the tool currently in proximity
[6] In our udev-hid-bpf BPFs we hide those devices so you really only get the correct event nodes, I'm not immediately sure what hid-uclogic does
[7] At which point Pandora will once again open the box because most of the stack is not yet ready for non-Wacom tool ids

Bilal Elmoussaoui

@belmoussaoui

goblint: A Linter for GObject C Code

Over the past week, I’ve been building goblint, a linter specifically designed for GObject-based C codebases.

If you know Rust’s clippy or Go’s go vet, think of goblint as the same thing for GObject/GLib.

Why this exists

A large part of the Linux desktop stack (GTK, Mutter, Pango, NetworkManager) is built on GObject. These projects have evolved over decades and carry a lot of patterns that predate newer GLib helpers, are easy to misuse, or encode subtle lifecycle invariants that nothing verifies.

This leads to issues like missing dispose/finalize/constructed chain-ups (memory leaks or undefined behavior), incorrect property definitions, uninitialized GError* variables, or function declarations with no implementation.

These aren’t theoretical. This GTK merge request recently fixed several missing chain-ups in example code.

Despite this, the C ecosystem lacks a linter that understands GObject semantics. goblint exists to close that gap.

What goblint checks

goblint ships with 35 rules across different categories:

  • Correctness: Real bugs like non-canonical property names, uninitialized GError*, missing PROP_0
  • Suspicious: Likely mistakes like missing implementations or redundant NULL checks
  • Style: Idiomatic GLib usage (g_strcmp0, g_str_equal())
  • Complexity: Suggests modern helpers (g_autoptr, g_clear_*, g_set_str())
  • Performance: Optimizations like G_PARAM_STATIC_STRINGS or g_object_notify_by_pspec()
  • Pedantic: Consistency checks (macro semicolons, matching declare/define pairs)

23 out of 35 rules are auto-fixable. You should apply fixes one rule at a time to review the changes:

goblint --fix --only use_g_strcmp0
goblint --fix --only use_clear_functions

CI/CD Integration

goblint fits into existing pipelines.

GitHub Actions

- name: Run goblint
  run: goblint --format sarif > goblint.sarif

- name: Upload SARIF results
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: goblint.sarif

Results show up in the Security tab under "Code scanning" and inline on pull requests.

GitLab CI

goblint:
  image: ghcr.io/bilelmoussaoui/goblint:latest
  script:
    - goblint --format sarif > goblint.sarif
  artifacts:
    reports:
      sast: goblint.sarif

Results appear inline in merge requests.

Configuration

Rules default to warn, and can be tuned via goblint.toml:

min_glib_version = "2.40"  # Auto-disable rules for newer versions

[rules]
g_param_spec_static_name_canonical = "error"  # Make critical
use_g_strcmp0 = "warn"  # Keep as warning
use_g_autoptr_inline_cleanup = "ignore"  # Disable

# Per-rule ignore patterns
missing_implementation = { level = "error", ignore = ["src/backends/**"] }

You can adopt it gradually without fixing everything at once.

Try it

# Run via container
podman run --rm -v "$PWD:/workspace:Z" ghcr.io/bilelmoussaoui/goblint:latest

# Install locally
cargo install --git https://github.com/bilelmoussaoui/goblint goblint

# Usage
goblint              # Lint current directory
goblint --fix        # Apply automatic fixes
goblint --list-rules # Inspect available rules

The project is early, so feedback is especially valuable (false positives, missing checks, workflow issues, etc.).


Note: The project was originally named "goblin" but was renamed to "goblint" to avoid conflicts with the existing goblin crate for parsing binary formats.

What is new in GNOME Kiosk 50

GNOME Kiosk, the lightweight, specialized compositor, continues to evolve in GNOME 50, adding new configuration options and improving accessibility.

Window configuration

User configuration file monitoring

The user configuration file gets reloaded when it changes on disk, so that it is not necessary to restart the session.

New placement options

New configuration options to constrain windows to monitors or regions on screen have been added:

  • lock-on-monitor: lock a window to a monitor.
  • lock-on-monitor-area: lock to an area relative to a monitor.
  • lock-on-area: lock to an absolute area.

These options are intended to replicate the legacy "Zaphod" mode from X11, where windows could be tied to a specific monitor. It even goes further than that, as it allows locking windows to a specific area of the screen.

The window/monitor association also holds when a monitor is disconnected. Take for example a multi-monitor setup where each monitor shows a different timetable. If one of the monitors is disconnected (for whatever reason), the timetable showing on that monitor should not be moved to another remaining monitor. The lock-on-monitor option prevents that.

Initial map behavior was tightened

Clients can resize or change their state before the window is mapped, so the size, position, and fullscreen state set in the configuration could be skipped. Kiosk now makes sure to apply the configured size, position, and fullscreen state on first map, in case the initial configuration was not applied.

Auto-fullscreen heuristics were adjusted

  • Only normal windows are considered when checking whether another window already covers the monitor (avoids false positives from e.g. xwaylandvideobridge).
  • The current window is excluded when scanning “other” fullscreen sized windows (fixes Firefox restoring monitor-sized geometry).
  • Maximized or fullscreen windows are no longer treated as non-resizable so toggling fullscreen still works when the client had already maximized.

Compositor behavior and command-line options

New command line options have been added:

  • --no-cursor: hides the pointer.
  • --force-animations: forces animations to be enabled.
  • --enable-vt-switch: restores VT switching with the keyboard.

The --no-cursor option can be used to hide the pointer cursor entirely for setups where user input does not involve a pointing device (it is similar to the -nocursor option in Xorg).

Animations can now be disabled using the desktop settings, and will also be automatically disabled when the backend reports no hardware-accelerated rendering, for performance reasons. The option --force-animations can be used to forcibly enable animations in that case, similar to GNOME Shell.

The native keybindings, which include the VT switching keyboard shortcuts, are now disabled by default for kiosk hardening. Applications that rely on the user being able to switch to another console VT on Linux, such as Anaconda, will need to explicitly re-enable VT switching using --enable-vt-switch in their session.

These options need to be passed on the command line starting gnome-kiosk, which implies updating the systemd definition files or, better, creating custom ones (modeled on the ones provided with the GNOME Kiosk sessions).
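
One low-effort way to do that is a systemd drop-in that overrides ExecStart; the unit name and path below are purely illustrative, check the units shipped with the GNOME Kiosk sessions for the real names:

# ~/.config/systemd/user/gnome-kiosk.service.d/options.conf  (unit name illustrative)
[Service]
ExecStart=
ExecStart=/usr/bin/gnome-kiosk --no-cursor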

Accessibility

Accessibility panel

An example of an accessibility panel is now included, to control the platform accessibility settings with a GUI. It is a simple Python application using GTK4.

(The gsettings options are also documented in the CONFIG.md file.)

Screen magnifier

Desktop magnification is now implemented, using the same settings as the rest of the GNOME desktop (namely screen-magnifier-enabled, mag-factor, see the CONFIG.md file for details).

It can be enabled from the accessibility panel or from the keyboard shortcuts through the gnome-settings-daemon's "mediakeys" plugin.

Accessibility settings

The default systemd session units now start the gnome-settings-daemon accessibility plugin so that Orca (the screen reader) can be enabled through the dedicated keyboard shortcut.

Notifications

  • A new, optional notification daemon implements org.freedesktop.Notifications and org.gtk.Notifications using GTK 4 and libadwaita.
  • A small utility to send notifications via org.gtk.Notifications is also provided.

Input sources

GNOME Kiosk was ported to the new Mutter keymap API, which allows remote desktop servers to mirror the keyboard layout used on the client side.

Session files and systemd

    • X-GDM-SessionRegister is now set to false in kiosk sessions as GNOME Kiosk does not register the session itself (unlike GNOME Shell). That fixes a hang when terminating the session.
    • Script session: systemd is no longer instructed to restart the session when the script exits, so that users can log out of the script session when the script terminates.

Self hosting as much of my online presence as practical

Because I am bad at giving up on things, I’ve been running my own email server for over 20 years. Some of that time it’s been a PC at the end of a DSL line, some of that time it’s been a Mac Mini in a data centre, and some of that time it’s been a hosted VM. Last year I decided to bring it in house, and since then I’ve been gradually consolidating as much of the rest of my online presence as possible on it. I mentioned this on Mastodon and a couple of people asked for more details, so here we are.

First: my ISP doesn't guarantee a static IPv4 unless I'm on a business plan, and that seems like it'd cost a bunch more, so I'm doing what I described here: running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I could get, with an additional IP address allocated to the VM and NATted over the VPN link. The practical outcome of this is that my home IP address is irrelevant and can change as much as it wants - my DNS points at the OVH IP, and traffic to that all ends up hitting my server.
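
The forwarding on the OVH end is nothing exotic; something along these lines (addresses invented for illustration) pushes whatever arrives on the extra IP down the tunnel:

# 203.0.113.45 is the additional OVH IP, 10.8.0.2 the box at home on the WireGuard link
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -d 203.0.113.45 -j DNAT --to-destination 10.8.0.2
# return traffic also has to go back over the tunnel, e.g. by routing it via wg0
# on the home end, or (at the cost of losing client IPs) by masquerading:
# iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE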

The server itself is pretty uninteresting. It's a refurbished HP EliteDesk which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found under a pile of laptops in my office. We're not talking rackmount Xeon levels of performance, but it's entirely adequate for everything I'm doing here.

So. Let’s talk about the services I’m hosting.

Web

This one’s trivial. I’m not really hosting much of a website right now, but what there is is served via Apache with a Let’s Encrypt certificate. Nothing interesting at all here, other than the proxying that’s going to be relevant later.

Email

Inbound email is easy enough. I’m running Postfix with a pretty stock configuration, and my MX records point at me. The same Let’s Encrypt certificate is there for TLS delivery. I’m using Dovecot as an IMAP server (again with the same cert). You can find plenty of guides on setting this up.

Outbound email? That's harder. I'm on a residential IP address, so if I send email directly nobody's going to deliver it. Going via my OVH address isn't going to be a lot better. I have a Google Workspace, so in the end I just made use of Google's SMTP relay service. There are various commercial alternatives available; I just chose this one because it didn't cost me anything more than I'm already paying.
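
For what it's worth, the Postfix side of that is only a couple of main.cf lines, roughly as follows (a sketch; Google's relay additionally wants you to register your sending IP or set up authentication on their end):

# relay all outbound mail through Google's SMTP relay, over TLS
relayhost = [smtp-relay.gmail.com]:587
smtp_tls_security_level = encrypt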

Blog

My blog is largely static content generated by Hugo. Comments are Remark42 running in a Docker container. If you don’t want to handle even that level of dynamic content you can use a third party comment provider like Disqus.

Mastodon

I’m deploying Mastodon pretty much along the lines of the upstream compose file. Apache is proxying /api/v1/streaming to the websocket provided by the streaming container and / to the actual Mastodon service. The only thing I tripped over for a while was the need to set the “X-Forwarded-Proto” header since otherwise you get stuck in a redirect loop of Mastodon receiving a request over http (because TLS termination is being done by the Apache proxy) and redirecting to https, except that’s where we just came from.

Mastodon is easily the heaviest part of all of this, using around 5GB of RAM and 60GB of disk for an instance with 3 users. This is more a point of principle than an especially good idea.

Bluesky

I’m arguably cheating here. Bluesky’s federation model is quite different to Mastodon - while running a Mastodon service implies running the webview and other infrastructure associated with it, Bluesky has split that into multiple parts. User data is stored on Personal Data Servers, then aggregated from those by Relays, and then displayed on Appviews. Third parties can run any of these, but a user’s actual posts are stored on a PDS. There are various reasons to run the others, for instance to implement alternative moderation policies, but if all you want is to ensure that you have control over your data, running a PDS is sufficient. I followed these instructions, other than using Apache as the frontend proxy rather than nginx, and it’s all been working fine since then. In terms of ensuring that my data remains under my control, it’s sufficient.

Backups

I’m using borgmatic, backing up to a local Synology NAS and also to my parents’ home (where I have another HP EliteDesk set up with an equivalent OVH IPv4 fronting setup). At some point I’ll check that I’m actually able to restore them.

Conclusion

Most of what I post is now stored on a system that’s happily living under a TV, but is available to the rest of the world just as visibly as if I used a hosted provider. Is this necessary? No. Does it improve my life? In no practical way. Does it generate additional complexity? Absolutely. Should you do it? Oh good heavens no. But you can, and once it’s working it largely just keeps working, and there’s a certain sense of comfort in knowing that my online presence is carefully contained in a small box making a gentle whirring noise.

Gedit Technology

@geditblog

gedit 50.0 released

gedit 50.0 has been released! Here are the highlights since version 49.0 from January. (Some sections are a bit technical).

No Large Language Models AI tools

The gedit project now disallows the use of LLMs for contributions.

The rationales:

Programming can be seen as a discipline between art and engineering. Both art and engineering require practice. It's the action of doing - modifying the code - that permits a deep understanding of it, to ensure correctness and quality.

When generating source code with an LLM tool, the real sources are the inputs given to it: the training dataset, plus the human commands.

Adding something generated to the version control system (e.g., Git) is usually frowned upon. Moreover, we aim for reproducible results (to follow the best practices of reproducible builds, and reproducible science more generally). Modifying something generated afterwards is also a bad practice.

Releasing earlier, releasing more often

To follow the release early, release often mantra more closely, gedit aims for a faster release cadence in 2026, to have smaller deltas between each version. The future will tell how it goes.

The website is now responsive

Since last time, we've put some effort into the website. Small-screen-device readers should have a more pleasant experience.

libgedit-amtk becomes "The Good Morning Toolkit"

Amtk originally stands for "Actions, Menus and Toolbars Kit". There was a desire to expand it to include other GTK extras that are useful for gedit needs.

A more appropriate name would be libgedit-gtk-extras. But renaming the module - not to mention the project namespace - is more work. So we've chosen to simply continue with the name Amtk, just changing its scope and definition. And - while at it - sprinkle a bit of fun :-)

So there are now four libgedit-* modules:

  • libgedit-gfls, aka "libgedit-glib-extras", currently for "File Loading and Saving";
  • libgedit-amtk, aka "libgedit-gtk-extras" - it extends GTK for gedit needs, with the exception of GtkTextView;
  • libgedit-gtksourceview - it extends GtkTextView and is a fork of GtkSourceView, to evolve the library for gedit needs;
  • libgedit-tepl - the Text Editor Product Line library, it provides a high-level API, including an application framework for creating more easily new text editors.

Note that all of these are still very much under construction.

Some code overhaul

Work continues steadily inside libgedit-gfls and libgedit-gtksourceview to streamline document loading.

You might think that this is a problem solved long ago, but it's actually not the case for gedit. Many improvements are still possible.

Another area of interest is the completion framework (part of libgedit-gtksourceview), where changes are still needed to make it fully functional under Wayland. The popup windows are sometimes misplaced. So between gedit 49.0 and 50.0 some progress has been made on this. The Word Completion gedit plugin works fine under Wayland, while the LaTeX completion with Enter TeX is still buggy since it uses more features from the completion system.

Three Little Rust Crates

I published three Rust crates:

  • name-to-handle-at: Safe, low-level Rust bindings for Linux name_to_handle_at and open_by_handle_at system calls
  • pidfd-util: Safe Rust wrapper for Linux process file descriptors (pidfd)
  • listen-fds: A Rust library for handling systemd socket activation

They might seem like rather arbitrary, unconnected things – but there is a connection!

systemd socket activation passes file descriptors and a bit of metadata as environment variables to the activated process. If the activated process exec’s another program, the file descriptors get passed along because they are not CLOEXEC. If that process then picks them up, things could go very wrong. So, the activated process is supposed to mark the file descriptors CLOEXEC, and unset the socket activation environment variables. If a process doesn’t do this for whatever reason however, the same problems can arise. So there is another mechanism to help prevent it: another bit of metadata contains the PID of the target. Processes can check it against their own PID to figure out if they were the target of the activation, without having to depend on all other processes doing the right thing.
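
Roughly, the check that sd_listen_fds() and friends do for you boils down to this (a sketch in plain C with error handling omitted; not the listen-fds crate's actual code):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* were these activation fds actually meant for us? */
static int activation_fds(void)
{
    const char *pid = getenv("LISTEN_PID");
    const char *fds = getenv("LISTEN_FDS");

    if (!pid || !fds || (pid_t) strtol(pid, NULL, 10) != getpid())
        return 0;                           /* not ours: leave everything alone */

    int n = atoi(fds);                      /* the fds start at fd 3 */
    for (int i = 0; i < n; i++)
        fcntl(3 + i, F_SETFD, FD_CLOEXEC);  /* don't leak them to our own children */

    unsetenv("LISTEN_PID");
    unsetenv("LISTEN_FDS");
    unsetenv("LISTEN_FDNAMES");
    return n;
}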

PIDs however are racy because they wrap around pretty fast, and that’s why nowadays we have pidfds. They are file descriptors which act as a stable handle to a process and avoid the ID wrap-around issue. Socket activation with systemd nowadays also passes a pidfd ID. A pidfd ID however is not the same as a pidfd file descriptor! It is the 64 bit inode of the pidfd file descriptor on the pidfd filesystem. This has the advantage that systemd doesn’t have to install another file descriptor in the target process which might not get closed. It can just put the pidfd ID number into the $LISTEN_PIDFDID environment variable.

Getting the inode of a file descriptor doesn’t sound hard. fstat(2) fills out struct stat which has the st_ino field. The problem is that it has a type of ino_t, which is 32 bits on some systems so we might end up with a process identifier which wraps around pretty fast again.

We can however use the name_to_handle_at syscall on the pidfd to get a struct file_handle with an f_handle field. The man page helpfully says that “the caller should treat the file_handle structure as an opaque data type”. We’re going to ignore that, though, because at least on the pidfd filesystem, the first 64 bits are the 64-bit inode. With systemd already depending on this and the kernel rule of “don’t break user-space”, this is now API, no matter what the man page tells you.
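
In C terms, something like this (a sketch; it needs a kernel with file handle support for pidfds and leans on exactly the first-8-bytes-are-the-inode assumption described above):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <string.h>

/* read the 64-bit pidfd ID (the pidfs inode) off a pidfd */
static uint64_t pidfd_id(int pidfd)
{
    char buf[sizeof(struct file_handle) + MAX_HANDLE_SZ];
    struct file_handle *fh = (struct file_handle *) buf;
    int mount_id;
    uint64_t id = 0;

    fh->handle_bytes = MAX_HANDLE_SZ;
    if (name_to_handle_at(pidfd, "", fh, &mount_id, AT_EMPTY_PATH) < 0)
        return 0;

    memcpy(&id, fh->f_handle, sizeof(id));  /* "opaque", but see above */
    return id;
}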

So there you have it. It’s all connected.

Obviously both pidfds and name_to_handle have more exciting uses, many of which serve my broader goal: making Varlink services a first-class citizen. More about that another time.

Lennart Poettering

@mezcalero

Mastodon Stories for systemd v260

On March 17 we released systemd v260 into the wild.

In the weeks leading up to that release (and since then) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd260 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 21 posts:

I intend to do a similar series of serieses of posts for the next systemd release (v261), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

My series for v261 will begin in a few weeks most likely, under the #systemd261 hash tag.

In case you are interested, here is the corresponding blog story for systemd v259, here for v258, here for v257, and here for v256.

GNOME Foundation News

@foundationblog

Introducing the GNOME Fellowship program

Sustaining GNOME by directly funding contributors

The GNOME Foundation is excited to announce the GNOME Fellowship program, a new initiative to fund community members working on the long-term sustainability of the GNOME project. We’re now accepting applications for our inaugural fellowship cycle, beginning around May 2026.

GNOME has always thrived because of its contributors: people who invest their time and expertise to build and maintain the desktop, applications, and platform that millions rely on. But open source contribution often depends on volunteers finding time alongside other commitments, or on companies choosing to fund development amongst competing priorities. Many important areas of the project – the less glamorous but critical infrastructure work – can go underinvested.

The fellowship program changes that. Thanks to the generous support of Friends of GNOME donors, we can now directly fund contributors to focus on what matters most for GNOME’s future. Programs such as this rely on ongoing support from our donors, so if you would like to see this and similar programs continue in future, please consider setting up a recurring donation.

What’s a Fellowship?

A fellowship is funding for an individual to spend dedicated time over a 12 month period working in an area where they have expertise. Unlike traditional contracts with rigid scopes and deliverables, fellowships are built on trust. We’re backing people and the type of work they do, giving them the flexibility to tackle problems as they find them.

This approach reduces bureaucratic overhead for both contributors and the Foundation. It lets talented people do what they do best: identify important problems and solve them.

Focus: Sustainability

For this first cycle, we’re seeking proposals focused on sustainability work that makes GNOME more maintainable, efficient, and productive for developers. This includes areas like build systems, CI/CD infrastructure, testing frameworks, developer tooling, documentation, accessibility, and reducing technical debt.

We’re not funding new features this round. Instead, we want to invest in the foundations that make future development and contributions easier and faster. The goal is for each fellowship to leave the project in better shape than we found it.

Apply Now

We have funding for at least one 12-month fellowship, paid between $70,000 and $100,000 USD per year based on experience and location. Applicants can propose full-time work, half-time work, or either – half-time proposals may allow us to support multiple fellows.

Applications are open to anyone with a track record in GNOME or relevant experience, with some restrictions due to US sanctions compliance. A GNOME Foundation Board committee will review applications and select fellows for this inaugural cycle.

Full details, application requirements, and FAQ are available at fellowship.gnome.org. Applications close on 20th April 2026.

Thank You to Friends of GNOME

This program is possible because of the individuals and organizations who support GNOME through Friends of GNOME donations. When we ask for donations, funding contributor work is exactly the kind of initiative we have in mind. If you’d like to sustain this program beyond its first year, consider becoming a Friend of GNOME. A recurring donation, no matter how small, gives us the predictability to expand this program and others like it.

Looking Ahead

This is a pilot program. We’re optimistic, and if it succeeds, we hope to sustain and grow the fellowship program in future years, funding more contributors across more areas of GNOME. We believe this model can become a sustainable way to invest in the project’s long-term health.

We can’t wait to see your proposals!

Christian Schaller

@cschalle

Using AI to create some hardware tools and bring back the past

As I have talked about in a couple of blog posts now, I have been working a lot with AI recently as part of my day-to-day job at Red Hat, but also spending a lot of evenings and weekend time on it (sorry kids, pappa has switched to 1950’s mode for now). One of the things I spent time on is trying to figure out what the limitations of AI models are and what kind of use they can have for Open Source developers.

One thing to mention before I start talking about some of my concrete efforts is that I have more and more come to the conclusion that AI is an incredible tool to hypercharge someone in their work, but I feel it tends to fall short for fully autonomous systems. In my experiments AI can do things many, many times faster than you ordinarily could - talking specifically in the context of coding here, which is what is most relevant for those of us in the open source community.

So one annoyance I have had for years is that I get new hardware which has features that are not easily available to me as a Linux user. So I have tried using AI to create such applications for some of my hardware, which includes an Elgato Light and a Dell Ultrasharp Webcam.

I found with AI - and this is based on using Google Gemini, Claude Sonnet and Opus, and OpenAI Codex - that they all required me to direct and steer the AI continuously. If I let the AI just work on its own, more often than not it would end up going in circles, diverging from the route it was supposed to take, or taking shortcuts that make the wanted output useless. On the other hand, if I kept on top of the AI and intervened and pointed it in the right direction, it could put together things for me in very short time spans.
My projects are also mostly what I would describe as end leaf nodes, the kind of projects that already are 1-person projects in the community for the most part. There are extra considerations when contributing to bigger efforts, and I think a point I have seen made by others in the community too is that you need to own the patches you submit, meaning that even if an AI helped you write the patch you still need to ensure that what you submit is in a state where it can be helpful and is merge-able. I know that some people feel that means you need to be capable of reviewing the proposed patch and ensuring it is clean and nice before submitting it, and I agree that if you expect your patch to get merged that has to be the case. On the other hand, I don’t think AI patches are useless even if you are not able to validate them beyond ‘does it fix my issue’.

My friend and PipeWire maintainer Wim Taymans and I were talking a few years ago about what I described at the time as the problem of ‘bad quality patches’, and this was long before AI-generated code was a thing. Wim’s response to me, which I have often thought about afterwards, was “a bad patch is often a great bug report”. And that holds true for AI-generated patches too. If someone makes a patch using AI, a patch they don’t have the ability to code review themselves, but they test it and it fixes their problem, it might be a good bug report and function as a clearer bug report than just a written description by the user submitting the report. Of course they should be clear in their bug report that they don’t have the skills to review the patch themselves, but that they hope it can be useful as a tool for pinpointing what isn’t working in the current codebase.

Anyway, let me talk about the projects I made. They are all found on my personal website Linuxrising.org, a website that I also used AI to update after not having touched the site in years.

Elgato Light GNOME Shell extension


The first project I worked on is a GNOME Shell extension for controlling my Elgato Key Wifi Lamp. The Elgato lamp is basically meant for podcasters and people doing a lot of video calls to be able to easily configure light in their room to make a good recording. The lamp announces itself over mDNS, and thus can be controlled via Avahi. For Windows and Mac the vendor provides software to control their lamp, but unfortunately not for Linux.

There had been GNOME Shell extensions for controlling the lamp in the past, but they had not been kept up to date and their feature set was quite limited. Anyway, I grabbed one of these old extensions and told Claude to update it for the latest version of GNOME. It took a few iterations of testing, but we eventually got there and I had a simple GNOME Shell extension that could turn the lamp off and on and adjust hue and brightness. This was a quite straightforward process because I had code that had been working at some point, it just needed some adjustments to work with the current generation of GNOME Shell.

Once I had the basic version done I decided to take it a bit further and try to recreate the configuration dialog that the windows application offers for the full feature set which took me quite a bit of back and forth with Claude. I found that if I ask Claude to re-implement from a screenshot it recreates the functionality of the user interface first, meaning that it makes sure that if the screenshot has 10 buttons, then you get a GUI with 10 buttons. You then have to iterate both on the UI design, for example telling Claude that I want a dark UI style to match the GNOME Shell, and then I also had to iterate on each bit of functionality in the UI. Like most of the buttons in the UI didn’t really do anything from the start, but when you go back and ask Claude to add specific functionality per button it is usually able to do so.

Elgato Light Settings Application


So this was probably a fairly easy thing for the AI because all the functionality of the lamp could be queried over Avahi; there were no ‘secret’ USB registers to be set or things like that.
Since the application was meant to be part of the GNOME Shell extension I didn’t want it to have any dependency requirements that the Shell extension itself didn’t have, so I asked Claude to make this application in JavaScript, and I have to say so far I haven’t seen any major differences in the AI’s ability to generate code in different languages. The application now reproduces most of the functionality of the Windows application. Looking back, I think it probably took me a couple of days in total to put this tool together.

Dell Ultrasharp Webcam 4K

Dell UltraSharp 4K settings application for Linux


The second application on the list is a controller application for my Dell UltraSharp Webcam 4K UHD (WB7022). This is a high-end webcam that I have been using for a while and it is comparable to something like the Logitech BRIO 4K webcam. It has mostly worked since I got it with the generic UVC driver, and I have been using it for my Google Meetings and similar, but since there was no native Linux control application I could not easily access a lot of the camera’s features. To address this I downloaded the Windows application installer and installed it under Windows and then took a bunch of screenshots showcasing all features of the application. I then fed the screenshots into Claude and told it I wanted a GTK+ version of this application for Linux. I originally wanted to have Claude write it in Rust, but after hitting some issues in the PipeWire Rust bindings I decided to just use C instead.

It took me probably 3-4 days of intermittent work to get this application working, and Claude turned out to be really good at digging into Windows binaries and finding things like USB property values. Claude was also able to analyze the screenshots and figure out the features the application needed to have. It was a lot of trial and error writing the application, but one way I was able to automate it was by building a screenshot option into the application, allowing it to programmatically take screenshots of itself. That allowed me to tell Claude to try fixing something and then check the screenshot to see if it worked, without me having to interact with the prompt. Also, to get the user interface looking nicer, once I had all the functionality in I asked Claude to tweak the user interface to follow the GNOME Human Interface Guidelines, which greatly improved the quality of the UI.

At this point my application should have almost all the features of the Windows application. Since it is using PipeWire underneath it is also tightly integrated with the PipeWire media graph, allowing you to see it connect and work with your application in PipeWire patchbay applications like Helvum. The remaining features are software features of Dell’s application, like background removal and so on, but I think that if I decided to implement those, it should be as a standalone PipeWire tool that can be used with any camera, and not tied to this specific one.

Red Hat Planet

Red Hat Vulkan Globe

The application shows the world's Red Hat offices and includes links to the latest Red Hat news.


The next application on my list is called Red Hat Planet. It is mostly a fun toy, but I made it partly to revisit the Xtraceroute modernisation I blogged about earlier. As I mentioned in that blog post, Xtraceroute, while cute, isn’t really very useful IMHO, since with the way the modern internet works your packets rarely jump around the world. Anyway, as people pointed out after I posted about the port, it wasn’t an actual Vulkan application, it was a GTK+ application using the GTK+ Vulkan backend. The globe animation itself was all software rendered.

I decided that if I was going to revisit the Vulkan problem I wanted to use a different application idea than traceroute. The idea I had was once again a 3D rendered globe, but this one reading the coordinates of Red Hat’s global offices from a file and rendering them on the globe, and alongside that providing clickable links to recent Red Hat news items. So once again maybe not the world’s most useful application, but I thought it was a cute idea and hopefully it would allow me to create it using actual Vulkan rendering this time.

Creating this turned out to be quite the challenge (although it seems to have gotten easier since I started this effort), with Claude Opus 4.6 being more capable at writing Vulkan code than Claude Sonnet, Google Gemini or OpenAI Codex were when I started trying to create this application.
When I started this project I had to keep extremely close tabs on the AI and what it was doing in order to force it to keep working on this as a Vulkan application, as it kept wanting to simplify with software rendering or OpenGL and sometimes would start down that route without even asking me. That hasn’t happened more recently, so maybe that was a problem of the AI of 5 months ago.

I also discovered as part of this that rendering Vulkan inside a GTK4 application is far from trivial and would ideally need the GTK4 developers to create such a widget to get rendering timings and similar correct. It is one of the few times I have had Claude outright say that writing a widget like that was beyond its capabilities (I haven’t tried again, so I don’t know if I would get the same response today). So I started moving the application to SDL3 first, which worked insofar as I got a spinning globe with red dots on it, but it came with its own issues, in the sense that SDL is not a UI toolkit as such. So while I got the globe rendered and working, the AI struggled badly with the news area when using SDL.

So I ended up trying to port the application to Qt, which again turned out to be non-trivial in terms of how much time it took with trial and error to get it right. In my mind I had a working globe using Vulkan, so how hard could it be to move it from SDL3 to Qt? But there were a million rendering issues. In fact I ended up using the Qt Vulkan rendering example as a starting point and then ‘porting’ the globe over bit by bit, testing it at each step, to finally get a working version. The current version is a Vulkan+Qt app and it basically works, although it seems the planet is not spinning correctly on AMD systems at the moment, while it works well on Intel and NVIDIA systems.

WMDock

WmDock fullscreen with config application



This project came out of a chat with Matthias Clasen over lunch where I mused about if Claude would be able to bring the old Window Maker dockapps to GNOME and Wayland. Turns out the answer is yes although the method of doing so changed as I worked on it.

My initial thought was for Claude to create a shim that the old dockapps could be compiled against, without any changes. That worked, but then I had a ton of dockapps showing up in things like the alt+tab menu. It also required me to restart my GNOME Shell session all the time as I was testing the extension to house the dockapps. In the end I decided that since a lot of the old dockapps don’t work with modern Linux versions anyway, and thus would need to be actively ported, I should accept shipping the dockapps with the tool and port them to work with modern Linux technologies. This worked well and is what I currently have in the repo. I think the wildest port was porting the old dockapp webcam app from V4L1 to PipeWire, although updating the sound controller from ESD to PulseAudio was also a generational jump.

XMMS resuscitated

XMMS brought back to life



So the last effort I did was reviving the old XMMS media player. I had tried asking Claude to do this for months and it kept failing, but with Opus 4.6 it plowed through it and had something working in a couple of hours, with no input from me beyond kicking it off. This was a big lift, moving it from GTK2 and Esound to GTK4, GStreamer and PipeWire. One thing I realized is that a challenge with bringing an old app back is that, since keeping the themeable UI is a big part of this specific application, adding new features is a little kludgy. Anyway, I did set it up to be able to use network speakers through PipeWire, and you can also import your Spotify playlists and play those, although you need to run the Spotify application in the background to be able to play sound on your local device.

Monkey Bubble
Monkey Bubble game
Monkey Bubble was a game created in the heyday of GNOME 2, and while I always thought it was a well made little game it had never been updated to newer technologies. So I asked Claude to port it to GTK4 and use GStreamer for audio. This port was fairly straightforward, with Claude having few problems with it. I also asked Claude to add highscores using the libmanette library and network game discovery with Avahi. So some nice little improvements.

All the applications are available either as Flatpaks or Fedora RPMs, through the GitLab project page, so I hope people enjoy these applications and tools. And enjoy the blasts from the past as much as I did.

Worries about Artificial Intelligence

When I speak to people both inside Red Hat and outside in the community I often come across negativity, or even sometimes anger, towards Artificial Intelligence in the coding space. And to be clear, I too worry about where things could be heading and how it will affect my livelihood, so I am not unsympathetic to those worries at all. I probably worry about these things at least a few times a day. At the same time I don’t think we can hide from or avoid this change; it is happening with or without us. We have to adapt to a world where this tool exists, just like our ancestors adapted to jobs changing due to industrialization and science before us. So do I worry about the future? Yes, I do. Do I worry about how I might personally get affected by this? Yes, I do. Do I worry about how society might change for the worse due to this? Yes, I do. But I also remind myself that I don’t know the future, and that people have found ways to move forward before and society has survived and thrived. So what I can control is that I try to stay on top of these changes myself and take advantage of them where I can, and that is my recommendation to the wider open source community on this too: leverage them to move open source forward, and at the same time put our weight on the scale towards the best practices and policies around Artificial Intelligence.

The Next Test and where AI might have hit a limit for me.

So all these previous efforts did teach me a lot of tricks and helped me understand how I can work with an AI agent like Claude, but especially after the success with the webcam I decided to up the stakes and see if I could use Claude to help me create a driver for my Plustek OpticFilm 8200i scanner. I have zero background in any kind of driver development and probably less than zero in the field of scanner drivers specifically. So I ended up going down a long series of dead ends on this journey, and to this day I have not been able to get a single scan out of the scanner that even remotely resembles the images I am trying to scan.

My idea was to have Claude analyse the Windows and Mac drivers and build me a SANE driver based on that, which turned out to be horribly naive and led nowhere. One thing I realized is that I would need to capture USB traffic to help Claude contextualize some of the findings it had from looking at the Windows and Mac drivers. I started out with Wireshark and fed Claude the Wireshark capture logs. Claude quite soon concluded that the Wireshark logs weren’t good enough and that I needed lower-level traffic capture. Buying a USB packet analyzer isn’t cheap, so I had the idea that I could use one of the ARM development boards floating around the house as a USB relay, allowing me to perfectly capture the USB traffic. With some work I did manage to get my LibreComputer Solitude AML-S905D3-CC ARM board going, setting it in device mode, and I also had a usb-relay daemon running on the board. After a lot of back and forth, and even at one point trying to ask Claude to implement a missing feature in the USB kernel stack, I realized this would never work and I ended up ordering a Beagle USB 480 hardware analyzer.

At about the same time I came across the chipset documentation for the Genesys Logic GL845 chip in the scanner. I assumed that between my new USB analyzer and the chipset docs this would be easy going from here on, but so far no. I even had Claude decompile the Windows driver using Ghidra and then try to extract the information needed from the decompiled code.
I also bought a network-controlled electric outlet so that Claude can cycle the power of the scanner on its own.

So the problem here is that with zero scanner driver knowledge I don’t even know what I should be looking for, or where I should point Claude to, so I kept trying to brute force it by trial and error. I managed to make SANE detect the scanner and I managed to get motor and lamp control going, but that is about it. I can hear the scanner motor running when I ask for a scan, but I don’t know if it moves correctly. I can see light turning on and off inside the scanner, but once again I don’t know if it is happening at the correct times and for the correct durations. And Claude has of course no way of knowing either, relying on me to tell it if something seems to have improved compared to how it was.

I have now used Claude to create two tools for Claude to use: one using a camera to detect what is happening with the light inside the scanner, and the other recording sound to compare the sound this driver makes with the sounds coming out when doing a working scan with the MacOS X application. I don’t know if this will take me to the promised land eventually, but so far I consider my scanner driver attempt a giant failure. At the same time I do believe that if someone actually skilled in scanner driver development was doing this, they could have guided Claude to do the right things and probably would have had a working driver by now.

So I don’t know if I hit the kind of thing that will always be hard for an AI to do, as it has to interact with things existing in the real world, or if newer versions of Claude, Gemini or Codex will suddenly get past a threshold and make this seem easy, but this is where things are at for me at the moment.