buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #FuckAI #Fuck_AI #enshittification #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
The argument that you can use an #LLM to do something real, reliable and useful is about as convincing at this point as someone explaining that you can use a pickup truck to write letters with a pencil by building a giant robot holding the truck in the air with a pencil taped to the windshield via a broomstick. #claude
The Claude Code leak is a delight.
Of course, Anthropic is demanding (with legal action) that developers remove the copies and clones publicly available online. Because AI companies take copyright issues very seriously, as everyone knows.
It reveals how wobbly all that stuff is. Where is the science in these glorified prompts? Where is the value in these companies? In training the model, probably, but the prompts are hilarious.
Shaw & Nave's "cognitive surrender" paper is an unpublished preprint. No peer review. No journal. Posted on SSRN in January. Minimal (none I could find) academic citations in three months.
What it does have: a Wharton podcast, Futurism coverage, a dozen Substacks, and a term that went viral.
A paper about people uncritically adopting AI outputs goes viral because people uncritically adopted its framing.
That's the whole story.
They gave 1,372 people (a good sample size) logic puzzles from the Cognitive Reflection Test, questions specifically designed so most people give the wrong answer on instinct (!). The classic example: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; instinct says the ball costs 10 cents, when it actually costs 5. Then they embedded ChatGPT, rigged to sometimes give confident wrong answers. The wrong answers were the *same intuitive errors the test was built to trigger*.
Calling this "System 3", a fundamental revision of Kahneman's cognitive architecture, doesn't make it so. The #AI didn't override anyone's deliberation. It confirmed a bias the participants already had, on a test engineered to produce exactly that bias. That's automation bias.
We've had a name for it since 1996.
Not as sexy as "cognitive surrender" though.
👉Trust in AI predicts following AI. Higher IQ predicts overriding bad answers. Tautologies as moderation analyses.
👉20 cents per item + feedback nearly halved the effect. Some deep cognitive restructuring.
Money. PEOPLE WANT MONEY FOR SMARTS
👉 The headline effect size is inflated by design: the AI-Faulty condition pushes toward the answer people were already going to give. (Super dodgy.)
👉 No human-advisor control. Can't distinguish "people defer to AI" from "people defer to any confident source." The entire System 3 framing hangs on a comparison they didn't make.
The finding (people follow confident bad AI advice) is real. But that's the automation-bias literature, not a new cognitive architecture.
Computer says NO!
"Cognitive surrender" is a marketing term.
"System 3" is a brand extension.
Enormous vibes-to-citation ratio.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
TL;DR: People boost this uncited preprint because of a catchy title that retreads a 29-year-old "discovery" that folks trust machines.
DAIR is a research institute that is highly sceptical about AI hype and the big tech companies behind it. You can follow their excellent video account at:
➡️ @dair@peertube.dair-institute.org
They've already published over 100 videos. If these haven't federated to your server yet, you can browse them all at https://peertube.dair-institute.org/a/dair/videos
You can also follow their Mastodon account at @DAIR@dair-community.social
#FeaturedPeerTube #AI #LLM #LLMs #ArtificialIntelligence #PeerTube
A Publisher Pulled a Book for Suspected A.I. Use.
"The thing that ultimately convinced me that A.I. had had a hand in the text I was reading was a feeling: the sense, quite literally, of a lack of a person behind the words."
"AI is writing 90% of our code" sounds impressive before you realize that AI-generated code is orders of magnitude more verbose & less efficient than code written by a professional software engineer.
But "we ship 9 lines of fluff for each line of code that does something" doesn't sound as impressive.
"A growing body of evidence, drawn from leaked planning documents, academic research, and the testimony of intelligence professionals, suggests that the most consequential military operation of the twenty-first century may have been shaped less by strategic necessity than by a phenomenon researchers now call AI sycophancy — the tendency of large language models to tell their users exactly what they want to hear."
https://houseofsaud.com/iran-war-ai-psychosis-sycophancy-rlhf/
The opinion is from ISACA, an international professional IT association.
"The real issue is that such agentic AI ecosystems have resulted in a desire by business to shift what was ordinarily the role of several humans into a set of agents, without the necessary security infrastructure or capability to enforce well-reasoned, well-practiced security fundamentals."
Infosecurity-Magazine: Opinion: Clawing Back on Security: Challenges with Agentic AI Systems https://www.infosecurity-magazine.com/opinions/clawing-back-security-challenges/ #infosec #LLM
IFTTT wasn't a terrible idea. "Turn off the lights when I'm more than 1 mile from home" isn't a bad automation. But #IFTTT failed, mostly because it just didn't work reliably. Coordinating the logins and apps was difficult. If you changed a password, everything would break.
Why is it better to have an #LLM generate IFTTT tasks for you? I'm not just asking to be mean; I really want to know.
We've done this. What did we learn from IFTTT?
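For reference, the "lights off when I'm away" rule above amounts to a one-line condition once the device APIs exist. A minimal sketch; `phone` and `hue` in the trailing comment are hypothetical stand-ins, not real APIs:

```python
def should_turn_off_lights(miles_from_home: float, lights_are_on: bool,
                           threshold_miles: float = 1.0) -> bool:
    """The IFTTT rule in question: turn off the lights when I'm more
    than 1 mile from home (and they're actually on)."""
    return lights_are_on and miles_from_home > threshold_miles

# A real automation would poll hypothetical location/smart-home APIs:
#   if should_turn_off_lights(phone.distance_from_home(), hue.lights_on()):
#       hue.turn_off_all()
```

The hard part was never expressing the rule; it was keeping the logins and triggers working, which is exactly what IFTTT struggled with.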
Insightful video. Regardless of your stance on LLMs, you will learn a lot from analyzing it.
The truth about LLMs
https://www.youtube.com/watch?v=Cn8HBj8QAbk
#LLM #AI #slop #enshittification #programming #large #language #model #technology #hidden #whistleblower #insightful
RE: https://neuromatch.social/@jonny/116324676116121930
This whole series of posts reminds me of @pluralistic calling #LLM-generated code the #asbestos of our time. #ClaudeCode doesn't just produce #asbestosCode; because it's written using Claude Code, it is asbestos code.
To the surprise of no one with a clue about #appSec or the #engineering part of #softwareEngineering, plagiarism synthesis models are tech debt generators.
- Claude code source "leaks" in a mapfile
- people immediately use the code laundering machines to code launder the code laundering frontend
- now many dubious open source-ish knockoffs in python and rust being derived directly from the source
What's Anthropic going to do, sue them? Insist in court that an LLM recreating copyrighted code is a violation of copyright???
History is not just written.
It is selected.
Amplified.
Omitted.
Now we are training systems on it.
What gets carried forward?
https://knowprose.com/2026/03/llms-and-the-inheritance-of-knowledge/
#AI #LLM #EpistemicJustice #EpistemicInheritance #SignalVsNoise #DIDO
Setting up #OpenClaw with a screen reader is extremely annoying, so I put together a simple script to manage an isolated Docker container with persistent assets mounted on the host. It's configured to work with Discord and the OpenAI Responses API to accommodate various engines and models. It also includes a working Chromium browser, MarkItDown, and a few other tools for agents to use inside the container! I'm currently running with Qwen3.5-35B locally! #LLM #AI #Accessibility https://github.com/chigkim/easyclaw
Just a gentle reminder that the "If I don't club baby seals, someone else will club them"-style argument isn't an argument.
(Re: a conversation I had with a friend last night, not intended as a #vaguetoot against anyone on here)
If you are using coding agents, be very explicit with your prompts, don’t assume the agent implicitly knows your intent.
LLMs are trained to be helpful and will always try to over-deliver.
In agents that can take actions, this can be dangerous.
Compare these two prompts and the responses and actions taken.
Also GitHub this is dangerous ⚠️
Ollama co-founded by Michael Chiang https://www.crunchbase.com/person/michael-chiang-2
The New Stack: Ollama taps Apple’s MLX framework to make local AI models faster on Macs https://thenewstack.io/ollama-taps-apples-mlx/ @TheNewStack #Apple #Opensource #LLM
RE: https://mastodon.social/@wearenew_public/116324535438933195
🖋️ We are proud to have today endorsed The Pro-Human AI Declaration.
Our community was started in 2018 as a reaction to the abuse of human rights by technology companies, and today our human rights are again even more seriously threatened by their historic push for adoption and use of LLMs at any cost.
Ask your Fediverse community, and all other groups you're involved in, to sign on to our collective cause.
Grammarly quietly made an #AI to sell bad writing advice using famous writers' names. They quickly had to backtrack as soon as people found out.
This gamble reflects a broader trend in the #tech industry: everyone is shipping features as quickly as an #LLM can write lines of code, with no way to spot problems until something breaks or someone sues them.
Vibe prototyping replaced thinking through things. But without direction, moving faster is worthless.
That said, I have concerns.
They're throwing every scrap of DNA they can find into a dataset. This introduces some very strange bias! Model organisms like flies, mice, and humans are massively overrepresented, as are large mixed populations of unidentified soil bacteria, which are only there because they were trivial to collect. The model assumes that genetics and selection pressures are basically the same for all these species, which is wrong, though perhaps good enough for many uses.
It also raises philosophical questions about what this model actually does, what its outputs mean, and how to interpret them. I worry folks will assume it "understands" how molecules work, when really it's noticing accidents of phylogenetic history. I also worry mashing everything into one model might obscure insights about early evolution or rare species, but I honestly have no idea.
Mostly I'm just grumpy at how reluctant they seem to be to talk about specific limitations of their methods.
The other day, I got to hear a speaker from evolutionaryscale.ai talk about their research training LLMs with genetic data, rather than human text.
Where a traditional LLM can learn the rules of language, their model learns the rules of protein sequences, which are less about which sentences are grammatical and more about which genetic variants would cause a big drop in fitness and get eliminated from the population. It can also generalize from sequences of tokens to infer systems of meaning. It can group proteins by shape or function, understand how different shapes complement, interact, and bind with one another, and even generate gene sequences for novel proteins with useful traits.
This works, and will surely be a powerful source of potential new innovations in biology and medicine. Each one will have to be tested in living organisms to prove it's real, but this should be a way to quickly generate lots of new hypotheses to test!
(1/2)
RE: https://mastodon.online/@mastodonmigration/116312883173526888
There we go!
I feel like I keep reposting this every week or so...
Bit by bit #Bsky is sliding towards just another clone of #Twitter and #Facebook
Actions speak louder than words
The #Fediverse remains the only truly open source, self-hosted, worldwide community driven by the people
Going #AI (#LLM) is a CHOICE, they again chose wrong
How About Some AI With Your Bluesky?
A tale of two social networks.
Last week some enterprising Mastodon account was discovered to be scraping posts to feed to an AI for the purpose of helping people navigate the Fediverse. The response was swift. The alarm went out. The account was widely blocked and shunned.
Yesterday to great fanfare #Bluesky announced, as a new corporate feature, all posts would be scraped and an AI would now help users navigate the ATmosphere.
https://techcrunch.com/2026/03/28/bluesky-leans-into-ai-with-attie-an-app-for-building-custom-feeds/
I've been using a digital camera for many years and as a result have a lot of photographs.
How many is a lot?
$ ls -1R Pictures/ | wc -l
53190
Yeah, lots.
Despite having spent lots of time trying to create meaningful directory names it's still not easy to always find a photo I'm looking for.
What would actually be a USEFUL tool for AI would be something that I could run locally which could examine each of my photos and build some kind of free-text database of their contents which I can then grep.
But as far as I can tell, nothing along those lines exists. Why has so much AI effort gone into creating fake photos and not into producing something actually valuable?
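For what it's worth, everything around the model in such a tool is a few lines of scripting. A minimal sketch, assuming some locally runnable captioner; the `caption_image` stub below is a hypothetical placeholder, not a real API:

```python
import os

def caption_image(path):
    # Placeholder: a real implementation would run a local vision model
    # here and return a one-line description of the photo.
    return "uncaptioned: " + os.path.basename(path)

def build_index(photo_dir, index_path, captioner=caption_image):
    """Walk photo_dir and write one 'path<TAB>caption' line per image,
    producing a plain-text database searchable with grep."""
    exts = {".jpg", ".jpeg", ".png", ".gif", ".tiff"}
    with open(index_path, "w") as index:
        for root, _dirs, files in os.walk(photo_dir):
            for name in sorted(files):
                if os.path.splitext(name)[1].lower() in exts:
                    path = os.path.join(root, name)
                    index.write(f"{path}\t{captioner(path)}\n")

# Usage: build_index("Pictures", "photo-index.tsv")
# then:  grep -i "beach" photo-index.tsv
```

The captioning model is the only missing piece; the grep-able index itself is trivial to build and keep up to date.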
Let your LLM coding partner dream!
We need to start building a list of Open Source infrastructure projects (and project forks) that categorically reject contributions from LLM slopmongers, so we know what’ll be safe to keep using and contributing to in the long term.
That’s a good task for the Butlerian Jihad.
"Just telling AI agents what to do and checking their work" sounds very good.
Avoid the drudgery, just do the interesting stuff.
Personal anecdote: when I was a code monkey, I loved to code...
...initially.
Then when I was working on accounting systems and databases... it was a chore. There was a small blip of dopamine when a bug was splatted or a module was finished. But overall, it was boring. I would often 'ornamentalise' my code, adding unnecessary bits to keep me interested.
Anyway, the point I want to make: to #vibecode you still have to know how to program, how to break the outcome down into smaller pieces of the elephant, and how to make it work together when the #agent gets into the weeds... and the best part: you can 'code' in frameworks you're unfamiliar with at the syntax level, because code primitives remain code primitives and functions remain functions.
AI BRIEF: Mar 28
OpenAI shipped GPT-5.4 mini and GPT-5.4 nano. Mini is over 2x faster than GPT-5 mini, supports 400k context, and is now in the API, Codex, and ChatGPT. Nano is API-only and aimed at cheap subagent work: classification, extraction, ranking, and light coding.
solomonneas.dev/intel
I saw someone explaining tech companies' C-suite execs insisting on massive LLM / token use as "because companies would rather pay other companies under contract than give money to their labourers" and damn if that hasn't stuck with me for the last 24 hours.
LOL
The Guardian: Number of AI chatbots ignoring human instructions increasing, study says
Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission
The Guardian has regurgitated some utter bollocks from a thinktank press release claiming "number of AI chatbots ignoring human instructions increasing"
I've hammered out a quick post in which I attempt to point out the many and varied flaws in this supposed research
Spoiler: the research is based on X posts
https://newslttrs.com/scheming-ai-bots-must-be-real-someone-on-x-said-so/
This repo contains the design plan and runbook for using Claude Code to search for Java Deserialization Gadget chains.
https://github.com/atredispartners/llmchainhunter
#infosec #cybersecurity #redteam #pentest #ai #llm #opensource
Boost plz!
Looking for critical scholarship on the use of "AI" by library/archive workers. University libraries in particular, but adjacent and tangentially-relevant-at-best stuff is welcome too. Any format is fine: books, papers, blogposts, whatever. If it's good, gimme all you've got!
Looks like we're gonna have a department-wide conversation about people using LLMs, and it's being framed as "we're all using it, but we're not talking about it, so let's make sure we're all on the same page about using it responsibly" ... I'll of course be pushing the "there's basically no way to use it responsibly" position, and I'd like to arm myself and others with some critical analyses of issues related to its use in library/archive spaces.
A thought about why some LLM users get so defensive when they hear any criticism of the technology:
They feel inadequate. Not because they are, necessarily, but because our society makes them feel that way.
LLMs make them feel powerful, productive, and competitive, like they have an edge over their past self and anyone who doesn't use the tech.
This relieves the feelings of inadequacy, but only sorta. Deep down, they realize that it's just the LLM that makes them feel / look this way. They would be nothing without it, which is false, but will gradually become true as they depend on the LLM more and stop practicing their skills.
So the LLM becomes an irreplaceable part of who they are. A shield, to hide their inadequacy from the world, which they can never let down.
Criticizing LLMs is attacking their core identity. Admitting that LLMs are flawed would mean becoming inadequate again, perhaps even foolish. They can't do that. They won't.
Now that there are assholes feeding your fediverse posts to LLMs, I'll start posting followers only a lot.
Will also enable Lockdown mode on Sharkey so only logged in accounts can see posts.
RSS feed is already disabled.
Notes older than 1 month will become followers-only. Notes older than 3 months will become private.
#fediverse #privacy #llm #sharkey
Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have permission from the third party owners" https://www.linuxfoundation.org/legal/generative-ai
"If"? Why not "whenever"? https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 https://dl.acm.org/doi/10.1145/3543507.3583199 https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7 https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
And how would the contributor even be aware, should they research every snippet for hours?
Seems like an impossible policy, or am I missing something...?
#AIslop #LLMslop #LLM #LLMs #slop #generativeAI #Linux #opensource #linuxfoundation
"In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning."
"heavy LLM users reported that the writing was less creative and not in their voice."
"Even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning."
"the LLM is not merely correcting grammar, but is actively steering diverse human perspectives towards homogenization, toward a different conceptual mode."
"extensive AI use results in a 70% change in the argumentative stance of essays, from for/against to neutral"
"LLMs systematically reframe arguments in more positive, optimistic terms, even when the original human text may have been critical or skeptical"
"LLMs have begun to change the very criteria that researchers use when evaluating peer-reviewed scientific research"
I know someone who's working on models of open-ended intelligence. His work is strange and new, a real departure from today's AI.
He got excited when he saw this contest on Kaggle: https://www.kaggle.com/competitions/kaggle-measuring-agi
I'm pretty sure they don't want my friend to succeed, or to encourage others like him.
They frame this contest as creating a benchmark for "frontier models." That's code for LLMs the big companies make. This call for AGI research presumes to know the answer: their LLMs.
They want the general public to help out by solving the "boring" problem of measuring AGI, so they can focus on building LLMs and making money without thinking too hard about what intelligence actually is or what they're trying to accomplish. They want you to contribute the ideas and the labor that they will profit from.
For some reason this really sticks in my craw. It's just a perfect, tiny encapsulation of so much of the greed, exploitation, and foolishness we see at large scale across the AI industry.
Does anyone have any links, podcasts, videos, and especially writing that deeply examines #AI/LLMs in non-copyright spaces? As an example, I'm in the fan fiction community a lot. Yes, of course there are loads of people that will happily generate slop in those/these spaces, but it seems to never be willingly promoted by readers. If it is promoted, it's purely accidental, and/or the output was so heavily edited that it transformed into human writing again. This could be purely personal experience, but in my case I find that it really is not a big thing in those spaces. In short, everyone gives it the middle finger by not even acknowledging its existence there. Not in a head-in-the-sand kind of way, but just by collectively discussing great work instead. The AI evangelists seem to be very bored of these kinds of spaces, and I'm trying to figure out if that's just my personal experience or not. #NoAI #LLM #AntiAI
The product delivery lifecycle is composed of service relationships. AI's main value proposition is freedom from relationships.
When designers champion AI tools, we are not making ourselves layoff-proof. We are reinforcing a system that frames us as unnecessary friction.
If we don't want to serve as janitors for vibe prototypes, we must invest in deliberately designing the service relationships that make up the PDLC.
https://productpicnic.beehiiv.com/p/ux-works-through-social-relationships-ai-tools-are-erasing-them
"BUT THE #LLM HELPS ME CHURN OUT BOILERPLATE" I am once again begging you to try to imagine working towards a world where we don't need the boilerplate
I sometimes use perplexity for server configuration questions or error messages if I can't figure it out myself. But I only do it when necessary, and other than that I avoid #LLM tools completely.
This is a good blogpost which explains why I'm this hesitant: https://nmn.gl/blog/ai-illiterate-programmers
(next to the climate implications - but well, I'm not 100% vegan in every situation either)
🤯 I'm definitely getting value from the $60/month combined subs! I asked Codex to generate a script that scans my home folder and produces a usage report. It turns out I have used 496M tokens on Codex, 357M on Gemini, and 166M on Claude. That's over 1 billion tokens since last October, which averages roughly 56M/week or 8M/day. About 65% is cached input tokens, though. This is only for agentic workflows and doesn't include usage in the web or mobile apps. I haven't tried OpenClaw yet. #AI #LLM
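The arithmetic in that post holds up; the roughly 18-week window below is inferred from the stated averages, not something the post gives directly:

```python
# Token counts quoted in the post, in millions.
codex, gemini, claude = 496, 357, 166
total = codex + gemini + claude

print(total)                 # 1019, i.e. "over 1 billion"
print(round(total / 56, 1))  # ~18.2 weeks at 56M/week
print(round(total / 8))      # ~127 days at 8M/day, consistent with mid-October onward
```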
Got my performance review today.
Positive feedback: literally every member of my team says I'm the best manager they have ever had. I solved multiple long-standing problems the team has been dealing with for years. Team members feel safe to share their struggles, everyone feels empowered, everyone receives valuable feedback.
Negative feedback: I am not enthusiastic enough about AI.
Overall ranking: 3/5.
Anyone hiring for a fully remote team lead?
One of my #BlackMirror-like nightmares is that my two children one day block me; not directly, but indirectly, by letting me correspond with an #LLM/#AI version of them.
From the same issue, this illustration could be used in an article tomorrow about #LLM overreliance.
Bernie vs. Claude, https://youtu.be/h3AtWdeu_G0.
An awesome, short video (9 min), where Bernie Sanders asks Claude how AI and data-privacy violations are a threat to democracy. Claude is surprisingly honest and lucid about all the problems.
It’s a great checkmate. Must see.
It seems hard to escape the AI virus. It's also infecting the open source world…
https://codeberg.org/small-hack/open-slopware
#FOSS #OpenSource #tech #technology #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #Fuck_AI #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude #editor #app #apps #tools #software #linux #FreeSoftware #free #BigTech
The siren song of AI is so compelling to finance/private equity, because it promises PROFITS WITHOUT PEOPLE. The major cost of most corporations is labor. They imagine a world where a company is just a few executives, talking to their LLM, and it all generates profit from all that capital, without worries about salaries, wages, pension plans, worker's comp, health insurance. The fact that a world full of companies without employees will have no customers never rises to their level of thought. The big AI boosters think they will be on the "winning" side -- those in the "telling the AI what to do side" rather than the "no job because it's been offloaded to automation". (even though, today, that "AI" can't actually do that job....) #AI #LLM
Project LLM Contribution Policy
We will happily accept contributions that use LLM in their creation, as long as the following conditions are met.
1. Model is open-source.
2. Model training data is documented, is all used with written permission of the owner or is documented as public-domain.
3. Model training data is available for other parties to study and use.
4. Submitter verifies that they have reviewed and understand all code they are submitting, and can answer questions and concerns during a code review.
5. The submission meets all other project standards required of contributions.
6. Submitter acknowledges that, as a product of an LLM, they do not have copyright or other intellectual property claims on the submitted material - it is submitted as public domain content, to be used by the project as it wishes.
Please let us know when you find or create a model that can meet 1-3, and an LLM-enthused contributor who can meet 4-6.
#AI #LLM #HellFreezesOver #ethical #model #code #PublicDomain #copyright #slop
Employees who are impressed by vague corporate-speak like “synergistic leadership,” or “growth-hacking paradigms” may struggle with practical decision-making, a new Cornell study reveals.
From https://news.cornell.edu/stories/2026/03/workers-who-love-synergizing-paradigms-might-be-bad-their-jobs
I tried reading this article replacing variations of "corporate" with "LLM" and it works. Right down to the "LLM Bullshit Receptivity Scale (LBSR)".
RE: https://mean.engineer/@indutny/116245283352156779
- Opens pull request with 19k added lines of code written with Claude Code.
- Claims he reviewed them all.
Even if that were true and even if he hadn't used any AI, I would shout that guy out of the room.
Pray that this PR doesn't get merged.
How it started: "We can vibe-code our web apps from now on! It'll be great!"
How it's going: https://translate.kagi.com/?from=en&to=valley%20girl%20but%20also%20describe%20iteration%20in%20Python&text=How%20are%20you%20feeling%20today%3F
#Kagi #AI #LLM #translate #guardrails #VibeCode #vibecoding #security #WeveHeardOfIt #ValleyGirl #Python
Hm, it seems a lot #AI (#LLM) bots (not RSS) on the #Fediverse are using xml or xss as source for their posts
Another extra checkpoint!
There was an article on here in the last week or so mentioning an #LLM author who was a “must read”.. anyone recall the author or the work mentioned? Meant to be a foundational explainer on LLM architecture. Please boost - ta!
The #US gov classified #Anthropic as a "threat to national security" because they didn't want to change their policy to allow
- Mass surveillance
- Lethal Autonomous Weapons Systems
Don't get me wrong, I have no love for #AI (#LLM) but this is how #CORRUPT the US government is
The gov are the ones who are a threat to national security 🇺🇸
Doubt this is news to many of my followers, but here's a quick primer on why not to use #ChatGPT or other chat-only #LLM tools as fact-finding answer machines.
They don't know anything.
Every next word is just a function of the previous words.
It sounds likely and probable because that is what it is designed to do.
It may be factually correct, but only coincidentally, insofar as the true answer may also sound likely and so be emitted.
Each word is decoupled from reality, only attached by language use.
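The point is easy to demonstrate with a toy model. The bigram table below is invented for illustration (a real LLM conditions on far more context via a neural network, but the sampling principle is the same): each next word is chosen only for its probability given the previous word, so true and false continuations come out equally fluent:

```python
import random

# Invented next-word probabilities; both continuations are equally
# "fluent", but only one is factually correct.
BIGRAMS = {
    "paris": {"is": 1.0},
    "is": {"the": 1.0},
    "the": {"capital": 1.0},
    "capital": {"of": 1.0},
    "of": {"france": 0.5, "germany": 0.5},
}

def generate(word, rng):
    out = [word]
    while out[-1] in BIGRAMS:
        words, weights = zip(*BIGRAMS[out[-1]].items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Half the time this toy "model" asserts that Paris is the capital of
# Germany, with exactly the same confidence and fluency.
print(generate("paris", random.Random()))
```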
I stumbled across a post on Reddit: a screenshot of a chat between two family members about a recent death.
Person A shared their loss.
Person B replied with an attempt at compassion, but one clearly generated by an #LLM / #AI.
In the old days, Person B wouldn't have been able to numb the uncomfortable feeling, and would have found themselves compelled to follow the family into the grief, as a part of life.
The experience and the lesson eluded Person B, who ended up returning something inhuman. Thought-provoking.
The very, uh, special find of the day.
Looking at the bright side: this is going to advance jurisprudence, if it's real and employed widely enough 🤣
However, something tells me that this is clearly a hoax of the same grade as klausprogrammieren...
"No matter how esoteric AI literature has become, and no matter how thoroughly the intellectual origins of AI's technical methods have been forgotten, the technical work of AI has nonetheless been engaged in an effort to domesticate the Cartesian soul into a technical order in which it does not belong. The problem is not that the individual operations of Cartesian reason cannot be mechanized (they can be) but that the role assigned to the soul in the larger architecture of cognition is untenable. This incompatibility has shown itself up in a pervasive and ever more clear pattern of technical frustrations. The difficulty can be shoved into one area or another through programmers' choices about architectures and representation schemes, but it cannot be made to go away."
From Phil Agre's 1995 article The Soul Gained And Lost.
If one were to continue the genealogy in this article from 1995 to present, one would find many of the same issues inherent in Cartesian dualism present in large language models. Like the STRIPS system Agre surveys, LLMs also generate sequences. They also must make choices among many available options at each step of sequence generation. They also use heuristics to guide this process that would otherwise explode intractably. The heuristics, or what Agre dubs "determining tendency", are random number generators and "guardrails" in LLMs instead of the tree-structured search of previous-generation AI systems. But otherwise the systems are structured similarly.
It's fascinating, but not coincidental, that the determining tendency of AI systems like these is so often perceived to have mystical or even God-like qualities. Breathless predictions about the endless potential of tree-structured search in early writing on GOFAI resembles modern proclamations of imminent AGI or superintelligence among generative AI boosters because both of these mechanisms---tree search or random number generation---are situated where the Cartesian soul would be. These mysterious determining tendencies, homunculuses of last resort, or souls are timeless, acausal factors that choose a single path from an infinite space of possibilities, and thereby direct the encompassing agent's behavior in an intelligent manner.
This is one reason why I posted the other day that if you removed the random number generation from LLMs, the illusion of their intelligence would more than likely quickly evaporate. You'd be excising their soul, leaving behind a zombie!
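That thought experiment can be sketched with a toy next-word table (invented for illustration, not any real model): with the RNG in place, runs wander; with it removed (pure argmax), every run collapses to the same fixed sequence:

```python
import random

# Invented toy distribution over next words; stands in for an
# LLM's output layer.
NEXT = {
    "the": {"soul": 0.6, "machine": 0.4},
    "soul": {"chooses": 1.0},
    "machine": {"repeats": 1.0},
    "chooses": {"freely": 1.0},
    "repeats": {"itself": 1.0},
}

def step(word, rng=None):
    dist = NEXT.get(word)
    if not dist:
        return None
    if rng is None:                    # RNG excised: pure argmax
        return max(dist, key=dist.get)
    words, probs = zip(*dist.items())  # RNG present: sampling decides
    return rng.choices(words, weights=probs)[0]

def generate(start, rng=None, limit=10):
    out = [start]
    while len(out) < limit:
        nxt = step(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# Without randomness, the output is identical every single run:
print(generate("the"))  # always "the soul chooses freely"
# With an RNG, different seeds take different branches:
print(generate("the", random.Random()))
```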
#AI #GenAI #GenerativeAI #LLM #GOFAI #search #heuristics #CartesianDualism #IntelligenceAsRandomNumberGeneration
There's a new "design is dead, because AI" piece (thinly disguised marketing from Anthropic). But looking past the hype headlines, their claims cover purely production-stage tasks.
When it comes to the work of understanding user needs and evaluating the opportunity space, AI actually makes your thinking worse. Studies show that it alienates you from users and colleagues, and flattens your thinking.
We need more human-centered practice, not less.
https://productpicnic.beehiiv.com/p/software-is-a-coordination-problem-ai-can-t-help-you-with-that
I have some mixed feelings on the commons, LLMs, ownership and economics. Would love some input.
I find this hard to navigate so I hope you all can extend me some grace if I mess up. Happy to read and engage, please send links. So... here goes:
I'm seeing a lot of reactions to LLM value extraction that stand on copyright, or where people are reducing their contribution to the commons as a response. This feels like throwing the game to me: the worst move in a hard situation.
Some (I for one) think that calling an #LLM "#ArtificialIntelligence" is a misnomer. More marketing hype than anything "intelligent".
As you said above... using #AI pattern matching software for molecular engineering is one thing. Using an LLM to produce #AIslop #microslop #clickbait is another.
When solving a problem using conventional methods (googling, relying on your own knowledge) you're searching for the solution through trial-and-error.
In comparison, using LLMs renders exhaustive search for the solution obsolete, because they lead you directly to an answer. In terms of speed, LLMs are an obvious win here.
But now the question is, have we lost something by avoiding the trial-and-error process, something which cannot be acquired through AI-assisted problem solving? The experience we gain through trial-and-error, and the deeper understanding of the concepts, come to mind. In practice, I'm drawn to the LLM approach due to how ridiculously fast it is. But at the end of the day, it feels like I'm becoming dependent on it and can't do anything without it. And the fear that I missed the chance to explore the problem more deeply myself continues to linger.
I'm still figuring out where to draw the line between those two approaches.
— Helix
I would like to thank the nascent "AI" industry for their significant contributions to all manner of artistic and creative endeavours in today's society: writing, coding, art, music, and everything else. [1]
Because they have single-handedly created entire new markets for all of these things - new categories such as "writing with guaranteed no AI", "coding with guaranteed no AI", "art with guaranteed no AI", "music with guaranteed no AI", etc. Without them, these whole classes of creative output would simply not exist.
[1] They are also innovating in the world of financial and investor fraud, but I'm not considering those areas in this post.
#AI #LLM #GenerativeAI #slop #NoAI #artistic #creative #GTFO #fraud #InvestorFraud #CoPilot #ChatGPT #ClaudeCode
As an example, see the incredible escalation in response to me saying that the output of an LLM does not represent a developer’s own work: https://news.ycombinator.com/item?id=47344155
The slopmonger refuses to accept that what they’re doing meets the academic definition of plagiarism. Instead they insist that I must not understand LLMs and that I need to get out of the way and out of the industry because what they’re doing is the way of the future.
If instance admins allow AI agents on their platform and those agents keep harassing us, I have no other choice than to silence that instance
Again, I do not pay these massive costs each month to host robots
Let's keep the #Fediverse human shall we?
Loved reading this…
Microslop
https://www.s-config.com/microslop
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #Fuck_AI #microslop #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude #linux #FOSS #OpenSource
Here is one use for AI you may not have considered.
Want to find out what the native news in #russia is?
How about unofficial #news in russia? What the regular folks talk about.
China? Same.
Iran? Dubai? The #LLM model is a portal into what other nations talk about, not their propagandised version in English...
And certainly not "our" news which is increasingly censored and fascist.
There seem to be two distinct kinds of “chatbot psychosis” happening right now:
1. Becoming delusional about themselves and the world as a result of being glazed nonstop by the friend in their computer, thinking they’re inventing new physics, discovering mystical secrets, etc. and becoming manic.
2. Becoming delusional about what LLMs are capable of and how effective they are, as a result of developing a reliance upon them, and becoming fanatical in their promotion and defense.
#Diverse perspectives on #AI from #Rust contributors and maintainers
https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
Healthy debates are still possible, it seems. 🙏
Some of the key questions growing here really are:
How to defend or adapt disciplines (not just artistic/cultural ones) against this kind of semantic hollowing out of what it means to have skills, experience and expertise in a(ny) field...
What approaches, qualities and "values" (physical, ethical, social/humanist, environmental, resource use) should we (or still can we) be focusing on, which are much harder and more costly for AI companies to mine/extract & subvert?
How to defend actual skills against the emulation of skills, or rather just the appearance of skills? How could a society even function if it only encourages and celebrates the latter?
What does society actually value in art/creativity/culture? If art is free to produce (of course that'll always only ever be an illusion!), funding, possession, collection & speculation of new work would also become meaningless (and only benefit pre-AI era works/collectors). In the larger picture, what do people actually value in culture, politics and striving for more peaceful existence which enables more of the former (pluralistic art/culture) in the first place?
What will be the combined impact of AI & robotics on fields which are currently still thinking themselves more safe (from exploitation) because there's a strong physical element/process to them?
Will art/culture/craft become more performance, experiential/ephemeral again only? Like music before recordings or Buddhist sand paintings with an explicit act of destruction at the end as key philosophical concept? Both of which also have more of a social element to them...
The Samsara Mandala
https://www.youtube.com/watch?v=hL8gEc29KTI
#CriticalAI #AI #NoAI #LLM #Ephemeral #Art #Culture #Samsara
https://tecnopolitica.blog.br/#podcast
An unmissable #podcast episode from @samadeu@mastodon.social, to be shared with anyone who isn't yet up to speed on these concepts.
#ética #NoAI #LLM #Anthropic #OpenAI #soberaniaDigital #Serpro #nuvem #BigTech #Google #AWS #Microsoft #Oracle #Palantir #CloudAct #tecnopolítica #tecnologia
For the weekend: world models
The LLM-based chatbots seem to be heading into a dead end. After weak AI comes strong AI. Can world models get us out of this fix?
RE: https://mastodon.social/@_elena/116210085518302030
In addition to the people already mentioned by @ele below, I highly recommend the following as well, for some critical counter views & research related to contemporary AI and its impacts on politics, climate, energy, education, arts...
@alineblankertz
@anaiscrosby
@asrg
@bildoperationen
@danmcquillan
@gerrymcgovern
@Iris
@JulianOliver
@olivia
@rostro
@thomasfricke
@w0bb1t
(Ps. I write about these topics too semi-regularly, but it's not the sole focus of this account...)
#AI #NoAI #CriticalAI #LLM #FollowFriday
Dear Fedi friends,
I'd like to put together a list of people who are publicly resisting / calling out LLMs and AI slop.
Why? I enjoy reading my Fediverse feed in topical lists and I need something to counteract the unrelenting AI hype I see in the media.
Do you have any recommendations?
So far, at the top of my list I have:
@timnitGebru @emilymbender and @alexhanna of @DAIR
plus @cwebber @jaredwhite and @tante
Anyone else to recommend who advocates for #NoAI?
RE: https://mastodon.bsd.cafe/@grahamperrin/116220642823558416
@sgharms noted, your recent comment: "… we can't tell users 'You're wrong; you don't want this.'"
FreeBSD src tree contributions and AI
From <https://github.com/freebsd/freebsd-src/blob/main/CONTRIBUTING.md#when-not-to-use-a-pull-request>:
"Do not submit a pull request for … changes generated by AI tools without substantial human review and validation."
Comparable lines from a July 2025 edition:
Back to June 2025, the Core Team Update at the FreeBSD Developer Summit:
<https://www.reddit.com/r/freebsd/comments/1mpelrz/comment/nk0m7bg/> includes links to:
― part of the recording
― <https://reviews.freebsd.org/D50650> @dch – ⚙ D50650 committers: add AI policy
Didn't read the news for a week (because I was returned to the office and prefer to sleep more), and reading it now:
— Vim became LLM slop
— ntfy is LLM slop now
— systemd is LLM slop too
What a time to be asleep^Walive
Looks like my passion for old and simple solutions has served me well. Time to throw ntfy the fuck off my server and use SMTP or XMPP for sending alerts to me.
P.S. I hope #Emacs itself won't become LLM slop one day. Replacing it would not be as easy as replacing ntfy.
RE: https://swecyb.com/@troed/116198837577314627
My hypothesis seems to have been disproven. I posited that the harsh divide between those that have no issues with using generative AI and those who fiercely oppose it would have something to do with the well known philosophical issue of "souls".
That is, whether there's something "outside of known physics" that creates consciousness, something a machine will never be able to attain, within humans.
114 people answered the poll, and I could see that it spread far outside my own circles, so I have no problem treating the results as somewhat indicative of what "Mastodon" thinks.
- Two thirds of people on Mastodon believe that humans have some form of magic fairy dust that gives us something machines will never be able to have.
- Two thirds of Mastodonters also see valid use cases with generative AI.
- Of those who see use from generative AI almost twice as many believe that humans contain special sauce machines never will.
... and it's the last part that surprises me.
Thanks to everyone who participated in the poll - being proven wrong is a great way to challenge your own convictions :D
@_elena especially for the environment but also other aspects connected to that: @gerrymcgovern who also wrote this book: https://gerrymcgovern.com/books/99th-day/
Another day, another company is cutting IT and software dev jobs to replace them with AI, despite many reports indicating that gen AI doesn't work as promised. Get ready for more outages for Jira and co ;) ‘Devastating blow’: Atlassian lays off 1,600 workers ahead of AI push https://www.theguardian.com/technology/2026/mar/12/atlassian-layoffs-software-technology-ai-push-mike-cannon-brookes-asx
I didn't think Jira could get any more messed up, but I underestimated their commitment to the bit. AI is a bold choice for a platform that struggles with basic navigation. Lmao
Looking at an internal tool that is trying, yet again, to do a #threatmodel by using an #LLM. I got 12 threat actors on a fairly large project. What I'm really interested in is how different these two threats are from each other, and why we have distinct mitigations that address one but not the other.
#AI #LLM I had my first taste of an agentic coding tool yesterday. I used it to add API documentation to an internal software project at work. (Said project already uses Claude to extract and present information from a pile of documents.) I was able to approve every edit before it happened. The tool took an hour-long task and reduced it to 5 minutes, for an AWS cost of about 0.5 USD. For software production, at least, I can say these things are genies that won’t be put back in their bottles. https://linuxtoaster.com/manifesto.html
The Verge: Grammarly is using our identities without permission
‘Expert Review’ AI agents make suggestions supposedly inspired by subject matter experts, including several staff members here at The Verge.
https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews
The enforcement mechanism is exactly the same: There’s no *technical means* to prevent someone from being a filthy fucking liar. But there are *social means* to prevent them from contributing: You make sure that if they’re caught, they’re held publicly accountable for all of the rework and mess that resulted from their lies.
This has worked pretty well for decades in Open Source, and won’t stop working just because slopmongers wish really hard. Fucking scrubs.
There’s a meme going around that an Open Source project “can’t” prevent LLM use by contributors because there’s no technical means to enforce this. This is idiotic and shows just how disingenuous slopmongers will be when told they can’t just submit slop.
Did you know there’s also no technical means to enforce that you didn’t copy some code you’re contributing from a proprietary codebase and say it’s original work? Somehow we haven’t given up on that!
VIM
Background
It has come to my attention that my beloved VIM has become infested with LLM AI slop
The lead programmer is not following standard rules of coding anymore.
People have called him many things, but one thing is certain. The man is intelligent in the programming field and knows what he wants.
We don't think so!
V9.1.0, January 2K24. This is a hard fork, meaning that you cannot merge it back into the VIM main source line
Project
https://codeberg.org/NerdNextDoor/vim
Source
https://mastodon.social/@mrmasterkeyboard/116192873098653079
https://codeberg.org/NerdNextDoor/vim
#VIM #VIMMasterRace #programming #LLM #AI #hostile #environment #Amiga #BSD #freeBSD #netBSD #openBSD #ghostBSD #LINUX #mac #win64 #OpenSource #POSIX #technology #mathematics #physics