Code is a liability, not an infinitely reproducible machine that requires no labor inputs to operate. It is a brittle machine that eventually "wears out" and needs a top-to-bottom refactoring to stay in good working order.
It's code's capabilities that are assets.
«"Writing code" is about making code that runs well. "Software engineering" is about making code that fails well»
And then came #AI.
@pluralistic nails it:
https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes
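To make the "fails well" half of that quote concrete, here's a minimal sketch in Python (my own illustration, not something from the linked post): both functions do the same job when everything is healthy, but only the second one has thought about what happens when it isn't.

```python
import json
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

# "Runs well": fine until the config file is missing or malformed, then it
# blows up with a stack trace far from the actual cause.
def load_config_naive(path):
    return json.loads(Path(path).read_text())

# "Fails well": the failure modes are anticipated, named, logged, and the
# caller gets a usable (if degraded) result instead of a crash.
DEFAULT_CONFIG = {"retries": 3, "timeout_s": 10}

def load_config(path):
    try:
        return json.loads(Path(path).read_text())
    except FileNotFoundError:
        logger.warning("config %s not found, falling back to defaults", path)
        return dict(DEFAULT_CONFIG)
    except json.JSONDecodeError as exc:
        logger.error("config %s is malformed (%s), falling back to defaults", path, exc)
        return dict(DEFAULT_CONFIG)
```

The second version is longer and costs more to write and maintain, which is exactly the liability the post is talking about.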
Kudos to the @ACM:
"Beginning January 2026, all ACM publications and related artifacts in the ACM Digital Library will be made open access."
https://dl.acm.org/openaccess
@green that's the neat part - NO ONE DID!
And in 2019 I saw the results of that: a person certified ready for university but unable to use a mouse and a keyboard.
And that person wasn't some impoverished, barely-got-to-school-on-time kinda person with run-down clothing, barely able to tie their laces! This was someone able to afford the latest iPhone, decked out in drip that was easily 4 digits.
I gather they've finally taken this measure because of the preponderance of AI-generated slop, but with any luck these other issues will improve too. The arXiv press release states “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv” so it does sound like they are acknowledging the other problems and intend to enforce their rules more strictly in the future.
"arXiv says it will no longer accept Computer Science papers that are still under review due to the wave of AI-generated ones it has received."
From https://infosec.exchange/users/josephcox/statuses/115486903712973154
Update. Here's how #arXiv is dealing with a similar problem in computer science.
https://blog.arxiv.org/2025/10/31/attention-authors-updated-practice-for-review-articles-and-position-papers-in-arxiv-cs-category/
"Before being considered for submission to arXiv’s #CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review…In the past few years, arXiv has been flooded with papers. Generative #AI / #LLMs have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write. While categories across arXiv have all seen a major increase in submissions, it’s particularly pronounced in arXiv’s CS category."
What you're seeing here is that for most categories, there is a linear increase in the number of submissions to the category year-over-year, up until the end of the data series in 2021. Computer science is dramatically different: its increase looks exponential, and its rate of increase may have accelerated circa 2017. The chart on the right, which shows the same data as proportions instead of raw counts, suggests computer science might be "eating" mathematics starting around 2017.
2017 is around when generative AI papers started to appear in large quantities. A significant advance in machine learning, published around 2018 but known before then, made deep learning significantly more effective. Tech companies were already pushing this technology. #OpenAI (the #GPT / #ChatGPT maker) was founded in 2015; GPT-2 was released in early 2019. arXiv's charts don't show this, but I suspect these factors play a role in the apparent phase shift in their CS submissions in 2017.
We don't know what 2022 and 2023 would look like on a chart like this but I expect the exponential increase will have continued and possibly accelerated.
In any case, this trend is extremely concerning. The exponential increase in the number of submissions to what is supposed to be an academic pre-print service is not reasonable. There hasn't been an exponential increase in the number of computer scientists, nor in research funding, nor in research labs, nor in the output-per-person of each scientist. Furthermore, these new submissions threaten to completely swamp all other material: before long computer science submissions will dwarf those of all other fields combined; since this chart stops at 2021 they may have already! arXiv's graphs do not break down the CS submissions by subtopic, but I suspect most are in the machine learning/generative AI/LLM space and that submissions on these topics dwarf the other subdisciplines of computer science. Finally, to the extent that arXiv has quality controls in place for its archive, these can't possibly keep up with an exponentially-increasing rate of submissions. They will eventually fail if they haven't already (as I suggested in a previous post, I think there are signs that their standards are slipping; perhaps that started circa 2017 and that's partly why the rate of submissions accelerated then?).
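To put a slightly more concrete shape on the linear-vs-exponential distinction I'm reading off those charts, here's a small Python sketch (my own, not arXiv's methodology; it contains no data, you'd feed it arXiv's published per-year submission counts): fit the counts both directly and in log space, and see which model explains more of the variance. Exponential growth shows up as the log-space fit winning decisively.

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, sum of squared residuals)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def unexplained_variance(ys, sse):
    """Fraction of the variance in ys that the fit fails to explain (1 - R^2)."""
    mean = sum(ys) / len(ys)
    return sse / sum((y - mean) ** 2 for y in ys)

def growth_shape(years, counts):
    """Classify a series of yearly counts as closer to linear or exponential growth."""
    logs = [math.log(c) for c in counts]
    _, slope_lin, sse_lin = fit_line(years, counts)
    _, slope_log, sse_log = fit_line(years, logs)
    if unexplained_variance(logs, sse_log) < unexplained_variance(counts, sse_lin):
        # e^slope is the year-over-year growth factor of the exponential model.
        return f"looks exponential, roughly {math.exp(slope_log) - 1:.0%} growth per year"
    return f"looks linear, roughly {slope_lin:.0f} extra submissions per year"
```

On the real numbers I'd expect CS to come out of that as exponential with a hefty per-year rate and most other categories as linear, which is exactly the asymmetry that worries me.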
Diversity of ideas is important. Sticking with a research program long enough to see it through is also important. Changing up what you're working on every time Silicon Valley ejects a new artifact that gets news coverage endangers both of those values.
And holy hell is the monotony boring. Computer science is an interesting, sprawling field with a lot going on! Let's keep it that way!
I'm aware that over the last year or so I've been a critic of #LLM hype, so I too am reacting to it. Lately I've been considering changing that up.