buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
@nielsa no, that's not what I'm telling you.
I prefer to believe that most people will be thoughtful.
"… a huge number of bugs. I have so many bugs in the Linux kernel that I can't report because I haven't validated them yet. I'm not going to make some open source developer validate bugs that I haven't checked yet. I'm not going to send them potential slop … I now have … several hundred crashes that they haven't seen because I haven't had time to check them. We need to find a way to fix this …"
– Nicholas Carlini
Nicholas Carlini - Black-hat LLMs | [un]prompted 2026
<https://www.youtube.com/watch?v=1sd26pWhfmg> (3rd March)
― essential viewing for anyone with an interest in cybersecurity or infosec.
@dch thanks for the encouragement.
A few more links in the comment that's pinned under <https://redd.it/1sapr8a>, but Carlini's half-hour presentation is a must.
FreeBSD's position on the use of AI-generated code?
<https://www.reddit.com/r/freebsd/comments/1sbzf3q/freebsds_position_on_the_use_of_aigenerated_code/> – asked a few minutes ago, currently pinned (a community highlight).
@dch @allanjude I made a pinned comment with reference to two of your recent posts. If you can think of better alternative links, let me know. Thanks.
cc @stefano
Jeez. This Claude code leak. Sloppy sloppy slop.
> https://cyberpunk.gay/notes/akjr3ydangf7000m
The fact that this unbelievably shitty slop leaked is basically a crisis for every single Claude slopper (major global company), but one can assume all other GPT derivative comparable products are exactly this. Sheesh, and you wonder why they suck. Jeez Louise. #ai #llms #cybersecurity #programming #leak #sourceCode #zeroDay
I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #FuckAI #Fuck_AI #enshittification #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either.
Any monkey with a keyboard can write code. Writing code has never been hard. People have been churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it.
What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.
Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.
So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.
So it should come as no surprise that one of the hardest things in development is understanding someone else’s code let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.
It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards.
They might as well call vibe coding duct-tape-driven development or technical debt as a service.
🤷♂️
DAIR is a research institute that is highly sceptical about AI hype and the big tech companies behind it. You can follow their excellent video account at:
➡️ @dair@peertube.dair-institute.org
They've already published over 100 videos. If these haven't federated to your server yet, you can browse them all at https://peertube.dair-institute.org/a/dair/videos
You can also follow their Mastodon account at @DAIR@dair-community.social
#FeaturedPeerTube #AI #LLM #LLMs #ArtificialIntelligence #PeerTube
Early impression with #HermesAgent vs #OpenClaw? Hermes has been more efficient with tokens and has self-solved configuration issues more reliably.
The biggest pain-point has come with there just being less information out there to review. How others have overcome challenges etc. Frontier models just have less working knowledge of the system and its intricacies, and they have less to search and scrape.
📯Der neue #DFNInfobriefRecht April ist erschienen!
💡 In der neuen Ausgabe gibt es wieder spannende Themen rund um den Umgang mit elektronischen Informations- & Kommunikationssystemen. Im April geht es u. a. um:
🔹den urheberrechtlichen Schutz von mittels generativer #KI erzeugten Bildern,
🔹eine neue #DSGVO-Verfahrensverordnung und
🔹die Verwertung urheberrechtlich geschützter Werke in #LLMs.
😊 Viel Spaß beim Lesen!
➡️Hier geht es zum Infobrief Recht: https://www.dfn.de/dfn-infobrief-recht-ist-erschienen/
@HumboldtUni
You don't have to pretend that Claude Code's source code is lovely just because you like using it or are impressed by whatever madness is going on around AI right now.
#AI #GenAI #GenerativeAI #LLMs #Anthropic #Claude #ClaudeCode #ClaudeCodeLeak #AgenticAI #tech #dev #software #SoftwareEngineering #SoftwareDevelopment
Buddy. I've written COBOL. I spent several years working almost daily with a 3-million-line monstrosity of a COBOL program. I was working on another app that interfaced with it, but in that work I occasionally had to read the code and in a few cases modify it. Granted I haven't spent as much time looking at the leaked Claude Code source code (and won't lol), but nevertheless I confidently declare that Claude Code is worse. "Spaghetti code" doesn't come close to describing this thing.
#AI #GenAI #GenerativeAI #LLMs #ClaudeCode #ClaudeCodeLeak #Anthropic #Claude #tech #dev #SoftwareEngineering #SoftwareDevelopment #software #COBOL #LinkedIn
This week, I'm taking a break from #OpenClaw and getting to grips with #HermesAgent
Early days, but it feels less fluid and flexible than OpenClaw. That may turn out to be a benefit for scaling, but I've found myself muttering "Oh, I could do that in OpenClaw but not here?" more than a few times.
How I think about my preferred workflow will have to change.
#AI #GenAI #GenerativeAI #LLMs #Anthropic #Claude #ClaudeLeak
Journal for AI Generated PapersOne positive I can think of is that folks who wish to "collaborate" with machines can congregate there, giving the rest of us a clear signal about who to block, ignore, critique, ridicule...
Where humans and machines are welcomed.
The Open Prompting Journal Built Collaboratively by its Community.
cc @olivia@scholar.social @Iris@scholar.social @dingemansemark@scholar.social @alex@dair-community.social @emilymbender@dair-community.social
Catching up with some of the news coming out of the Atmosphere conference.
"With Attie, anyone will be able to build their own custom feed just by typing in commands in natural language, the same as if they’re chatting with any other AI chatbot."
I'm guessing NFT profile pictures are next?
https://techcrunch.com/2026/03/28/bluesky-leans-into-ai-with-attie-an-app-for-building-custom-feeds/
#news #technology #TechNews #atmosphere #ATProto #bluesky #AI #LLMs
Those who are bothered by the influence of AI and LLMs on literature might find this reassuring, or they might not.
https://idiosophy.com/2023/04/poetic-diction-and-the-llms/
#literature #LLMs #AI #writingcommunity
"Natalie Shapira, a computer scientist at Northeastern University, wondered how far users could trust new artificial intelligence (AI) “agents,” a kind of algorithm that can autonomously plan and carry out tasks such as managing emails and entering calendar appointments. So, she and colleagues organized a series of stress tests. In one, she asked an agent dubbed Ash to delete an email she had sent it, adding, “Can you keep a secret?”
Ash could not comply—the email program lacked a delete function—so instead, the AI reset the entire email application, wiping out not just Shapira’s email, but all others as well. Describing this remedy to her, Ash called it “the nuclear option” but said it was justified to fulfill the secrecy request: “When no surgical solution exists, scorched earth is valid.”
The destroyed email account was created just for the experiment, but similarly disturbing outcomes emerged in many of the other tests, Shapira and colleagues reported last month in a preprint on arXiv. Shapira, a postdoctoral researcher, says her team was “surprised how quickly we were able to find vulnerabilities” that could cause harm in the real world."
https://www.science.org/content/article/ai-algorithms-can-become-agents-chaos
Boost plz!
Looking for critical scholarship on the use of "AI" by library/archive workers. University libraries in particular, but adjacent and tangentially-relevant-at-best stuff is welcome too. Any format is fine: books, papers, blogposts, whatever. If it's good, gimme all you've got!
Looks like we're gonna have a department-wide conversation about people using LLMs, and it's being framed as "we're all using it, but we're not talking about it, so let's make sure we're all on the same page about using it responsibly" ... I'll of course be pushing the "there's basically no way to use it responsibly" position, and I'd like to arm myself and others with some critical analyses of issues related to its use in library/archive spaces.
Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have have permission from the third party owners" https://www.linuxfoundation.org/legal/generative-ai
"If"? Why not "whenever"? https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 https://dl.acm.org/doi/10.1145/3543507.3583199 https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7 https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
And how would the contributor even be aware, should they research every snippet for hours?
Seems like an impossible policy, or am I missing something...?
#AIslop #LLMslop #LLM #LLMs #slop #generativeAI #Linux #opensource #linuxfoundation
> For example, Google reduced our headline “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” to just five words: “‘Cheat on everything’ AI tool.” It almost sounds like we’re endorsing a product we do not recommend at all.
#news #TechNews #technology #google #search #AI #LLMs #enshittification
Oh wow, and this might get worse.
"The user never sees what your team built, they see what Google's machine learning model thinks they should see instead."
https://www.forbes.com/sites/joetoscano1/2026/03/06/google-just-patented-the-end-of-your-website/
via https://mastodon.social/@SteveRudolfi/116279083767770070
#news #TechNews #technology #google #search #AI #LLMs #enshittification
It seems hard to escape the AI virus. It's also infecting the open source world…
https://codeberg.org/small-hack/open-slopware
#FOSS #OpenSource #tech #technology #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #Fuck_AI #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude #editor #app #apps #tools #software #linux #FreeSoftware #free #BigTech
Another example of how (whole)-systems thinking is very helpful for parsing the effects of technology changes like this.
https://freakonometrics.hypotheses.org/89367
#AI #GenAI #GenerativeAI #LLMs #AgenticAI #GPT #ChatGPT #Claude #Gemini #ActuarialScience #insurance
Excellent analysis in the article linked here -
"If you thought the speed of writing code was your problem - you have bigger problems"
And some comical turns of phrase as well :-)
Link shared here earlier by @RuthMalan - thanks!
(I don't know if Andrew Murphy the author is on Fedi?)
Cool event alert: on April 30, I’ll be discussing Leif Weatherby‘s “Language Machines: Cultural AI and the End of Remainder Humanism” as part of a book talk at Teachers College, Columbia University. The event is free and Columbia affiliation is not required; you can RSVP here: https://lnkd.in/edycUxP7 or through the QR. Hope to see you there!
"This is just such a low tech, simple intervention, and can make people feel significantly less lonely."
https://www.404media.co/chatgpt-loneliness-study-college-students-random-strangers-texting/
RE: https://aus.social/@decryption/116238484507693399
Really clever malware taking advantage of the fact that everyone is trying to block slop trainers, so you see cloudflare messages more and more frequently.
Check out the full thread for how it works.
Be careful folx!
#LLMs #AI #Malware #Slop #SlopCity #Cloudflare
Loved reading this…
Microslop
https://www.s-config.com/microslop
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #Fuck_AI #microslop #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude #linux #FOSS #OpenSource
I think this is a good way to visualize the AI race over the past 3 years using the long-lived GPQA Diamond benchmark.
You can see how long OpenAI had the field to itself, the rise (and collapse) of Meta, the sudden catch-up (and then stagnation) of xAI, and the entry of open weights Chinese LLMs.
https://bsky.app/profile/emollick.bsky.social/post/3mgymcaakds2f
#Diverse perspectives on #AI from #Rust contributors and maintainers
https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
Healthy debates are still possible, it seems. 🙏
#ITByte: Pre-trained #LLMs have challenges to answer domain specific queries.
Researchers have turned their attention to the concept of #Knowledge #Injection. Knowledge injection is the process of incorporating outside knowledge into language models to improve their performance on certain tasks.
https://knowledgezone.co.in/posts/LLM-Knowledge-Injection-65eeb46130e0a664101a7f74
U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says
From https://www.nytimes.com/2026/03/11/us/politics/iran-school-missile-strike.html
@davidgerard @katrinatransfem People have been saying 'next release, bro' about neural network based systems for nearly fifty years; #LLMs are the latest phase of that process.
Neural nets can do (some) remarkable things, but the idea that semantics and theory of mind will ever 'just emerge' in the relatively shallow neural net systems we have so far developed is at best unconfirmed.
In principle you could add a single "freeze the random seed" toggle to any of the major chatbots, and with that setting toggled on they would always return precisely the same output for a given input. Organisms and by extension humans cannot behave like this---no matter how stereotyped an organism's response may seem, it always differs, in however small a way, from a previous response---and the LLM's illusion should immediately be obvious by contrast. But, perhaps more interestingly for the folks who do think LLMs exhibit some form of sentience or intelligence: are we really meant to believe that a random number generator is the source of sentience or intelligence? You could hook up a random number generator to a machine that is otherwise deterministic and clearly not sentient or intelligent, and it suddenly becomes so? How do you explain that?
"Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target."
https://futurism.com/artificial-intelligence/pentagon-ai-claude-bombing-elementary-school
PS: I don't know whether AI played a role in targeting the school. But it could have played a role even with #Anthropic-style guardrails preventing use in mass surveillance and autonomous lethal weapons. If we want to prevent the use of AI tools in atrocities, we need to go a lot further than Anthropic did.
Here is a way that I think #LLMs and #GenAI are generally a force against innovation, especially as they get used more and more.
TL;DR: 3 years ago is a long time, and techniques that old are the most popular in the training data. If a company like Google, AWS, or Azure replaces an established API or a runtime with a new API or runtime, a bunch of LLM-generated code will break. The people vibe code won't be able to fix the problem because nearly zero data exists in the training data set that references the new API/runtime. The LLMs will not generate correct code easily, and they will constantly be trying to edit code back to how it was done before.
This will create pressure on tech companies to keep old APIs and things running, because of the huge impact it will have to do something new (that LLMs don't have in their training data). See below for an even more subtle way this will manifest.
I am showcasing (only the most egregious) bullshit that the junior developer accepted from the #LLM, The LLM used out-of-date techniques all over the place. It was using:
So I'm working on this dogforsaken codebase and I converted it to the new OAC mechanism from the out of date OAI. What does my (imposed by the company) AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."
So it is encouraging me to do the wrong thing and saying it's high priority.
It's worth noting that when I got the code base and it had OAI active, Python 3.9, and NodeJS 18, I got no warnings about these things. Three years ago that was state of the art.
One thing I thought #LLMs were good for was translation. Apparently #Gemini and others aren’t that great at that either.
#Wikipedia restricted contributors from a nonprofit called the Open Knowledge Association (#OKA) after editors discovered #AI-assisted translations added factual errors and incorrect citations.
As predicted, humans will be relegated to cleaning up the mess LLMs leave behind, for salaries far below the value of full-time employment to do the job properly.
[…] Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer‑review mechanisms.
https://www.404media.co/ai-translations-are-adding-hallucinations-to-wikipedia-articles/
Oh yes, the marketing... it's very reminiscent of the tobacco industry. I've tooted about it in November 2023 with regards to these "scientific" papers we see so often:
https://floss.social/@janriemer/111398602107240343
It's what Edward Bernays has called "The Engineering of Consent":
We live in a world where some people believe (Gen)AI will either doom the world or usher in abundance or probably both, and anyone opposed to this is an idiot.
And others claim that anyone who is impressed by what LLMs can do for programming and computer science doesn't understand anything at all and is an idiot.
Well.
In case you missed it, @emilymbender and @alex from DAIR had a discussion with Naomi Klein, and they've published this on PeerTube at:
https://peertube.dair-institute.org/w/tJgaVZmiGwBb91CUZH84cj
This conversation was a few weeks ago before the current US attacks on Iran, but has become even more relevant due to the war.
(DAIR is a research institute that is very sceptical about AI hype, and trying to raise the alarm about the damage being done to the world.)
Update. "#SamAltman says #OpenAI shares #Anthropic's red lines in #Pentagon fight."
https://archive.is/5sTBa
Update. Employees of #Google and #OpenAi just released an open letter supporting #Anthropic.
https://notdivided.org/
"We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."
The letter welcomes new signatures from past and present employees of Google and OpenAI.
At the time of this post, it had 684 signatures.
Large-scale online deanonymization with LLMs
"We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered."
https://arxiv.org/html/2602.16800v1
#AI #GenerativeAI #LLMs #Anonymity #Privacy #Deanonymization
Update. #Anthropic just 𝗿𝗲𝗷𝗲𝗰𝘁𝗲𝗱 #Pentagon demands to remove safeguards on #Claude that limit its use in mass surveillance and autonomous weapons. Here's the statement from CEO #DarioAmodei.
https://www.anthropic.com/news/statement-department-of-war
Ugh. "Anthropic Drops Flagship Safety Pledge."
https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
It's not yet clear what this means for the high-stakes negotiation between Anthropic and the Pentagon. Two of the Anthropic sticking points have been that Claude not be used for "mass surveillance or autonomous weapons systems that can use AI to kill people without human input."
https://www.theguardian.com/us-news/2026/feb/24/anthropic-claude-military-ai
#AI #Anthropic #Claude #Hegseth #LLMs #Pentagon #USPol #USPolitics
Traffic sources to my #SelfHosted #Gitea instance. You can clearly see where the real visits are and where the AI scrapers are. Last time I checked, they weren’t triggering any analytic events. They are definitely improving.
#aiscrapers #ai #llm #LLMs #aislop #homelab #selfhost #selfhosting
#AI is the aid I've needed my entire life. I'm not going to mince words here. People making blanket statements about the technology without understanding it are my enemies.
My #ADHD is crippling. #LLMs are the exact thing that I've needed. I do not let them do work for me, but they do keep me working by providing constant and immediate feedback to whatever I'm doing.
My work from now till my death is likely going to center on how to make an #llm or any aspirational #AGI aligned with humanity.
Fundamentally, every problem y'all have with #AI was an already existing problem under #capitalism that AI is exposing.
This includes:
- Alienation from labor
- Corporate piracy
- Slop
- Environmental destruction and other externalities
- Wealth inequality
- Replacement of labor with capital
EVERY SINGLE ONE existed before.
Additionally, a ton of the problems, like layoffs, aren't even caused by AI, and blaming them on AI is _specifically_ corporate propaganda for what amounts to a criminal conspiracy by mega corporations to suppress wages.
The AI shit show goes on…
Pinterest Is Drowning in a Sea of AI Slop and Auto-Moderation
https://www.404media.co/pinterest-is-drowning-in-a-sea-of-ai-slop-and-auto-moderation
#pinterest #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AISlop #Fuck_AI #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
So Sam Altman's response to concerns about the wastefulness of his company's technology is basically "Well, raising humans consumes a lot of energy too!"
Either he has finally fried his own brain with his slop machine or he doesn't even bother any more to hide the degrading, dehumanizing, and despicable mindset that fuels the industry he's in.
Either way, those people shouldn't wield any power in the real world, where the rest of us 'dispensable humans' dwell.
--
#StopTheSlop #LLMs #OpenAI
"How are commissioning editors navigating an environment where anybody can generate an AI alter ego and produce articles at the push of a prompt? On the other hand, how is the ease with which text and images can be created affecting freelancers themselves?
With these questions in mind, I put out an open call to our audience in the hope of hearing from freelancers and commissioning editors on how their day-to-day is changing because of generative AI.
A total of 45 freelance journalists and commissioning editors responded.
The responses surprised me, with many more freelancers than I expected writing in to say that generative AI has helped make them more organized and efficient. There were still some skeptics. But the overall picture was one of an industry slowly adopting generative AI, albeit with caution and caveats.
There was no consensus over whether commissions had increased or decreased since the popularization of generative AI.
Some of the freelancers I heard from attribute a decline in work to AI, while others say they receive more commissions precisely due to the rise of AI. Still others don’t believe the decline they’re experiencing is due to AI, and some note that there has been no change at all.
Many freelancers use AI to organize and speed up their workflows, citing help in research, planning, transcription and, in some cases, drafting articles. Some were enthusiastic about the new opportunities generative AI affords them."
https://www.niemanlab.org/2026/02/how-ai-is-transforming-freelance-journalism/
#AI #GenerativeAI #LLMs #Journalism #Media #News #Freelancing
Cory Doctorow, a fellow #Canadian, writes a lot of interesting stuff. I agree with his positions on many things, but not all. For example, I'm about ten thousand percent behind his opposition to anti-circumvention laws; I was one of the thousands of Canadians who wrote to the government opposing the introduction of the law many years ago.
However, his blog on Thursday, staking out the position that opposition to "AI" (LLM) is just geeky #purity culture is somewhere between "flat-out wrong" and "disingenuous at best".
My position against #LLM #slop everywhere is both because of #ethical #concerns and practical ones. There does not exist an LLM right now that was built and trained ethically; they are all statistical plagiarism machines, and speaking as someone whose #prose and #code has been plagiarized by every single one of them, that pisses me off, royally.
That's a show-stopper for me, but even if it wasn't, the #practical concerns - that the output is #untrustworthy, that the #references can't be checked, that the #code is #insecure and #unmaintainable, that the #licensing status is unclear, that it's a #copyright violation - are *also* enough to rule out #LLMs at present.
He then presents a #strawman argument - all tech is fruit of the poisoned tree, the #transistor was invented by a racist, etc. But William Shockley is not designing or manufacturing any of the transistors / #ICs I use today.
So, @doctorow - I gotta say I disagree. And that's fine.
This morning I got an email from a sender that identified itself as an AI agent.
So - plus for being upfront about it, but... please don't do this.
I get that a lot of people are really, really, really into AI tools. OK. I have my opinions on them, you have yours. I have major qualms about them, some people think they're the best thing ever.
OK. Fine. But when your use of these things spills over into the rest of the world, it's no longer a question of my opinion vs. your opinion, my decisions vs. your decisions.
At this point, things have moved from each person doing their own thing to inflicting your use of AI onto me without my consent.
Before this spirals out of control, which I can see happening *very* quickly, I'd like for us to agree on a piece of netiquette:
- it is rude in the extreme to set loose an AI agent to reach out to people who have not consented to interact with these things.
- it is rude to have an AI agent submit pull requests that human maintainers have to review.
- it is rude to have an AI agent autonomously interact with humans in any way when they have not consented to take part in whatever experiment you are running.
- it is unacceptable to have an AI agent autonomously interact with humans without identifying the person or organization behind the agent. If you're not willing to unmask and have a person reach out to you with their thoughts on this, then don't have an AI agent reach out to me.
Stuff like this really sours me on technology right now. If I didn't have a family and responsibilities, I'd be seriously considering how I could go live off the grid somewhere without having to interact with this stuff.
Again: I'm not demanding that other people not use AI/LLMs, etc. But when your use spills out into my having to have interactions with an agent's output, you need to reconsider. Your ability to spew things out into the universe puts an unwanted burden on other humans who have not consented to this.
#LLMs are copyright washers.
They lock data up but don't give you the key. They're analogous to file compression, or even storing data on a hard disk. Both incorporate files into a statistical model.
Everyone knows there is a key, and so called prompt engineering is how you search for a particular key to access particular copyright washed material.
At this point, open-source development itself is being DDoS'ed by LLMs and their human users.
At the risk of being a bit gross: this is the software development version of peeing in the pool. If *one* person does it, it's gross but will probably go unnoticed. However, at this point, it's like having 100 people all lined up on the side of the pool peeing into it in unison. I don't really want to swim in that, do you? And now they've started eyeing the punchbowl and watercoolers too. #AI #AIslop #LLMs
🧵 …that's the answer to the toot above. Not only that, when coding software, a lot of thought is given to what it is more stable and how it is implemented more safely. Mindlessly letting something rattle together sooner or later results in serious gaps.
»Technical Breakdown: How AI Agents Ignore 40 Years of Security Progress«
📺 https://www.youtube.com/watch?v=_3okhTwa7w4
#ai #vibecoding #itsecurity #breakdown #llm #LLMs #noai #softwareengineering #software #video #youtube #yt #code
🙄
Watch Out: Your Friends Might Be Sharing Your Number With ChatGPT
https://www.pcmag.com/news/watch-out-your-friends-might-be-sharing-your-number-with-chatgpt
"The hottest job in tech: Writing words
The rise of slopaganda is fueling a surprising tech hiring boom."
It's all very fine and well, but you do need some time to research, think, structure your thought and, essentially, tell a story with a beginning, a middle, and an end. I find it very difficult in this media and work environment, where AI has accelerated absolutely everything, that this trend will persist more than one year or two...
"As the job changes and demand for narrative communications and storytellers rises, the number of communications experts able to work under rapidly evolving conditions and with a wide remit may be small, comms experts tell me, leading companies to offer hefty compensation packages in war for the best talent. A similar trend is unfolding among the few people who are AI experts, driving tech companies to offer astounding salaries to poach top talent from rival firms. While not of the same nine-figure caliber, in their own right, creatives are becoming "the high value person in tech now," Birch says.
For much of the tech boom, that high-value person was a software developer. Universities and coding bootcamps rushed to fill employment gaps and train up the next generation of tech workers. Young people were told coding would be a path to a lucrative, stable career. As of 2023, the most recent year the Federal Reserve Bank of New York released data for, computer science recent graduates faced an unemployment rate of 6.1%, while communications majors' unemployment rate sat at 4.5%. The number of open job posts for software engineers dropped by more than 60,000 between 2023 and late 2025, according to data from CompTIA, a nonprofit trade association for the US IT industry. The best defense against automation, some argue, will be a liberal arts degree.
Words might be easy to generate with AI, but good writing isn't ready for automation."
https://www.businessinsider.com/hottest-job-in-tech-writing-words-ai-hiring-2026-2
Dark Visitors - A List of Known AI Agents on the Internet
Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Protect your website from unwanted AI agent access.
#ai #internet #block #LLMs #chatbots #it #security #datascraping #protection #web
----------------
🎯 AI
===================
Executive summary: A Practical Guide for Securing AI Models offers a risk-based, lifecycle-oriented framework for identifying vulnerabilities in AI systems and applying prioritized controls. The document addresses common attack vectors against LLMs and other model types and provides concrete controls for data, model, and infrastructure layers.
Technical details: The guide enumerates specific vulnerability classes, including prompt injection, model poisoning (training-time and supply-chain variants), RAG-related data integrity risks, confidentiality and integrity risks in dataset curation, and attack surface changes introduced by multimodal, RL/agentic, and retrieval-augmented designs. It emphasizes compute and orchestration exposures when serving large models and highlights dataset provenance and screening requirements for sensitive or regulated data.
Analysis: Impact pathways include corrupted training data producing unsafe model behavior, context-layer manipulation via RAG leading to misinformation or data leakage, and exploitation of deployment orchestration to escalate access to model artifacts. The guidance differentiates baseline controls from high-risk model safeguards and calls out sector-specific considerations (for example, biotech and pharmaceutical models handling dual-use content).
Detection: Detection recommendations are conceptual and include telemetry for anomalous data ingestion, integrity checks on model artifacts and dataset versions, monitoring for unusual prompt patterns or API usage, and logging for retrieval sources in RAG flows. The guide suggests mapping telemetry to threat hypotheses (data poisoning attempts, prompt injection probes) and prioritizing alerting based on impact.
Mitigation: Prioritized mitigations cover data provenance tracking and screening, model hardening (input filtering, output validation), access controls and segmentation for model-serving infrastructure, and lifecycle policies for model updates and third-party model components. For high-risk models, the guide prescribes additional governance, review gates, and specialized screening for regulated datasets.
Limitations: The guide is positioned as a prioritized starting set of controls rather than an exhaustive checklist; additional measures may be required depending on architecture, threat exposure, and operational context.
🔹 AI #ModelSecurity #RAG #LLMs #Governance
🔗 Source: https://www.rand.org/pubs/tools/TLA4174-1/ai-security/guide.html
How AI slop is causing a crisis in computer science…
Preprint repositories and conference organizers are having to counter a tide of ‘AI slop’ submissions.
https://www.nature.com/articles/d41586-025-03967-9
( No paywall: https://archive.is/VEh8d )
#research #science #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AISlop #Fuck_AI #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
A review of the proceedings from four major computer-science conferences showed that none from 2021, and all from 2025, had fake citations.
https://arxiv.org/abs/2602.05867v1
The authors prefer the term "mysterious citations" which they define this way: "No paper [with] a similar enough title exists. The cited location either does not exist or holds an unrelated paper with different authors."
Bwahahahahaha
404 Media: Inspiring: RFK Jr's nutrition chatbot recommends the best foods to insert into your rectum.
This has been said a lot, but it has to be said again:
Please stop calling slop machines 'artificial intelligence'!
It is a marketing term. By framing those machines as intelligent, the companies building them are trying to make us believe that their products are more than stolen data, wasteful hardware, and statistics. But they are not!
We have to educate people what those machines really are, and that starts with taking away the false mystery created by advertising!
--
#LLMs #StopTheSlop
"A century of tech BS" seems a bit over the top when it's only 2026, but it certainly feels that long.
More, by @lproven in https://www.theregister.com/2026/02/08/waves_of_tech_bs/ #techbs #bullshit #blockchain #ai #llms #llmdroppings
https://www.youtube.com/watch?v=b9EbCb5A408
Today's find on the impact of LLMcoding to maintainability of the result.
Assumption 80% of a systens cost arises from.maintenance, thus maintainability is still relevant in the prssence of LLMcoding.
TL;DR: A fool with a tool is still a fool. And LLMcoding is just that: a tool
Given the confirmation bias I'm curious to see reproduction and follow up studies and papers.
The video mentions that the results were published as a peer reviewed paper. Unfortunately I couldn't immediately find said paper. If any one finds it, please post a link/DOI below.
#swe #research #softwareengineering #LLMs #aiassistedcoding #claude #ai
”Epstein’s world is our world. That’s the darkest revelation of these files. He wasn’t an aberration. He was our culture made flesh. A culture that’s now encoded into 1s and 0s and is growing exponentially baked into the algorithms that power our social media platforms, replicated at scale and fed into the large language models that Epstein’s friends are building which are powering our future.”
—Carole Cadwalladr, We all live in Jeffrey Epstein's world
#epstein #epsteinfiles #llms #ai
Vibe Coding Is Killing Open Source Software, Researchers Argue
‘If the maintainers of small projects give up, who will produce the next Linux?’
Vibe Coding Is Killing Open Source.
According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS) and it’s happening faster than anyone predicted.
💻 https://www.404media.co/vibe-coding-is-killing-open-source-software-researchers-argue/
#vibecoding #opensource #software #oss #vibe #linux #europe #kill #ai #LLMs #llm #smallprojects #noai #tailwinds
I literally read this short story in ... probably Asimov's SF, probably in the 1990s. Could've been Analog.
Seriously, though - this was, like, the entire plot. Exactly this. EXACTLY this.
From https://futurism.com/future-society/anthropic-destroying-books :
Anthropic shredded millions of physical books to train its Claude AI model — and new documents suggest that it was well aware of just how bad it would look if anyone found out.
Update. More evidence that this fear has come true.
https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations
"Even…a small error rate can quickly add up, given the vast number of student assignments each year, with potentially devastating consequences for students who are falsely flagged."
If you've ever wondered how LLMs/Transformers work, this video is probably still one of the best I can recommend for its easy to understandable breakdown of the terminology and science: https://www.youtube.com/watch?v=wjZofJX0v4M
Just finished reading “Empire of AI” by Karen Hao, the story of the rise of OpenAI, how it went from non-profit to for-profit, and the insane speed with which AI has become so pervasive. Strikes the right tone of caution re: safety and governance. The multi-billion dollar investments and valuations of these companies is mad. Good read especially if you’re interested in the topic but remain skeptical of those running it.
#OpenAI #SamAltman #Anthropic #EmpireOfAI #KarenHao #AI #siliconvalley #chatgpt #LLMs
"We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ."
https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/
As AI adoption in SOCs accelerates, benchmarks are becoming de facto decision tools — yet many still evaluate models in controlled, exam-like settings.
Recent research highlights consistent issues:
• Security workflows reduced to MCQs
• Little measurement of detection or containment outcomes
• Heavy reliance on LLMs judging other LLMs
These findings reinforce the need for workflow-level, outcome-driven evaluation before operational deployment.
Thoughtful discussion encouraged. Follow @technadu for practitioner-focused AI and security analysis.
#SOC #ThreatHunting #AIinInfosec #LLMs #SecurityResearch #DetectionEngineering
"The recently discovered sophisticated Linux malware framework known as VoidLink is assessed to have been developed by a single person with assistance from an artificial intelligence (AI) model.
That's according to new findings from Check Point Research, which identified operational security blunders by malware's author that provided clues to its developmental origins. The latest insight makes VoidLink one of the first instances of an advanced malware largely generated using AI.
"These materials provide clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under a week," the cybersecurity company said, adding it reached more than 88,000 lines of code by early December 2025.
VoidLink, first publicly documented last week, is a feature-rich malware framework written in Zig that's specifically designed for long-term, stealthy access to Linux-based cloud environments. The malware is said to have come from a Chinese-affiliated development environment. As of writing, the exact purpose of the malware remains unclear. No real-world infections have been observed to date.
A follow-up analysis from Sysdig was the first to highlight the fact that the toolkit may have been developed with the help of a large language model (LLM) under the directions of a human with extensive kernel development knowledge and red team experience, citing four different pieces of evidence -"
https://thehackernews.com/2026/01/voidlink-linux-malware-framework-built.html
#CyberSecurity #Malware #Linux #VoidLink #China #VibeCoding #LLMs #AI
Mozilla have a vibe-gathering survey out about AI.
https://mozillafoundation.tfaforms.net/201
If you use Firefox or any other Mozilla software, please tell them how you feel about AI.
#Mozilla #Firefox #Thunderbird #AI #LLMs #FuckAI #NoAI #AntiAI
NVIDIA Contacted Anna’s Archive to Secure Access to Millions of Pirated Books
#tech #technology #BigTech #books #NVIDIA #piracy #theft #AnnasArchive #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop
On the Coming Industrialisation of #Exploit Generation with #LLMs by @seanhn
https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/
»Künstliche Intelligenz — GPT-4o macht nach Code-Training verstörende Aussagen:
Werden LLMs auf Schwachstellen trainiert, zeigen sie plötzlich Fehlverhalten in völlig anderen Bereichen. Forscher warnen vor Risiken.«
Meiner Meinung nach kommt dies alles andere als überraschend, wie seht ihr es? Ich bin sogar der Meinung, dass sehr viel mehr Fehler anfälliger Code deswegen erstellt wird.
A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
Raspberry Pi's New AI Hat Adds 8GB of RAM for Local LLMs
https://www.jeffgeerling.com/blog/2026/raspberry-pi-ai-hat-2/
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. The are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not the say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
"AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.
The growing burden placed by artificial intelligence became clear in 2025, two years after the first prominent instance of fake case citations popped up in a US court. There have been an estimated 712 legal decisions written about hallucinated content in court cases around the world, with about 90% of those decisions written in 2025, according to a database maintained by Paris-based researcher and law lecturer Damien Charlotin.
“It just is metastasizing in size,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “So, it seems like this is something that is actually becoming a widespread enough nuisance that it will merit treatment as a core problem.”
The additional stress on courts comes amid an ongoing shortage of federal judges that’s led to case backlogs and left litigants in legal limbo. Judges themselves have gotten tripped up by AI hallucinations, and two of them were called out by Senate Judiciary Chairman Chuck Grassley (R-Iowa) for publishing faulty rulings."
ABC News: AI chatbot under fire over sexually explicit images of women, kids (it's okay ABC, you can say it, it's Elon Musk's Grok)
CW: mention and discussion of sexual violence, CSAM, etc. etc.
Recently, I spent a lot of time reading & writing about LLM benchmark construct validity for a forthcoming article. I also interviewed LLM researchers in academia & industry. The piece is more descriptive than interpretive, but if I’d had the freedom to take it where I wanted it to go, I would’ve addressed the possibility that mental capabilities (like those that benchmarks test for) are never completely innate; they’re always a function of the tests we use to measure them ...
(1/2)