snac.daltux.net is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
Perhaps a sovereign and popular alternative in AI runs through open, self-hosted, decentralized solutions based on heterogeneous computing (independent of specific GPUs and CPUs).
And there are several interesting projects underway.
Exo: p2p AI
https://github.com/exo-explore/exo
Multicortex: an OS for AI
https://github.com/cabelo/multicortex-exo
GPUStack
https://github.com/gpustack/gpustack
VLLM
https://github.com/vllm-project/vllm
#IA #llm #selfhosted #descentralização #CodigoAberto #SoberaniaDigital
I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #FuckAI #Fuck_AI #enshittification #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
The argument that you can use an #LLM to do something real, reliable and useful is about as convincing at this point as someone explaining that you can use a pickup truck to write letters with a pencil by building a giant robot holding the truck in the air with a pencil taped to the windshield via a broomstick. #claude
The Claude Code leak is a delight.
Of course, Anthropic is demanding (via legal action) that developers take down the copies and clones publicly available online. Because AI companies take copyright issues very seriously, as everyone knows.
It reveals how wobbly all this stuff is. Where is the science in these glorified prompts? Where is the value in these companies? In training the models, probably, but the prompts are hilarious.
DAIR is a research institute that is highly sceptical about AI hype and the big tech companies behind it. You can follow their excellent video account at:
➡️ @dair@peertube.dair-institute.org
They've already published over 100 videos. If these haven't federated to your server yet, you can browse them all at https://peertube.dair-institute.org/a/dair/videos
You can also follow their Mastodon account at @DAIR@dair-community.social
#FeaturedPeerTube #AI #LLM #LLMs #ArtificialIntelligence #PeerTube
A Publisher Pulled a Book for Suspected A.I. Use.
"The thing that ultimately convinced me that A.I. had had a hand in the text I was reading was a feeling: the sense, quite literally, of a lack of a person behind the words."
"AI is writing 90% of our code" sounds impressive before you realize that AI-generated code is orders of magnitude more verbose & less efficient than code written by a professional software engineer.
But "we ship 9 lines of fluff for each line of code that does something" doesn't sound as impressive.
"A growing body of evidence, drawn from leaked planning documents, academic research, and the testimony of intelligence professionals, suggests that the most consequential military operation of the twenty-first century may have been shaped less by strategic necessity than by a phenomenon researchers now call AI sycophancy — the tendency of large language models to tell their users exactly what they want to hear."
https://houseofsaud.com/iran-war-ai-psychosis-sycophancy-rlhf/
IFTTT wasn't a terrible idea. "turn off the lights when I'm more than 1 mile from home" isn't a bad automation. But #IFTTT failed, mostly because it just didn't work reliably. Coordinating the logins and apps was difficult. If you changed a password everything would break.
Why is it better to have an #LLM generate IFTTT tasks for you? I'm not just asking to be mean; I really want to know.
We've done this. What did we learn from IFTTT?
Insightful video. Regardless of your stance on LLMs, you will learn a lot from analyzing this vid.
The truth about LLMs
https://www.youtube.com/watch?v=Cn8HBj8QAbk
#LLM #AI #slop #enshittification #programming #large #language #model #technology #hidden #whistleblower #insightful
History is not just written.
It is selected.
Amplified.
Omitted.
Now we are training systems on it.
What gets carried forward?
https://knowprose.com/2026/03/llms-and-the-inheritance-of-knowledge/
#AI #LLM #EpistemicJustice #EpistemicInheritance #SignalVsNoise #DIDO
Setting up #OpenClaw with a screen reader is extremely annoying, so I put together a simple script to manage an isolated Docker container with persistent assets mounted on the host. It's configured to work with Discord and the OpenAI Responses API to accommodate various engines and models. It also includes a working Chromium browser, MarkItDown, and a few other tools for agents to use inside the container! I'm currently running Qwen3.5-35B locally! #LLM #AI #Accessibility https://github.com/chigkim/easyclaw
Just a gentle reminder that the "If I don't club baby seals, someone else will club them"-style argument isn't an argument.
(Re: a conversation I had with a friend last night, not intended as a #vaguetoot against anyone on here)
RE: https://mastodon.social/@danluu/116317069604398190
Yet another example of generative AI becoming a fantastic tool not only for increasing speed and efficiency but also for finding new ways to enshittify our world.
#copilot #enshittification #LLM
Interesting to see Copilot injecting ads into PR descriptions. Although there are a handful of older instances of this, if GitHub search is working properly, it looks like this started happening at scale around 10 days ago with more than 1k injections of this particular ad per day since then
What will they think of next?
“AI needs your body. AI can't touch grass. You can. Get paid when AI agents need someone in the real world”
-slogan of the RentAHuman website
https://outraspalavras.net/terraeantropoceno/data-centers-gulosos-e-beberroes/
Grammarly quietly made an #AI to sell bad writing advice using famous writers' names. They quickly had to backtrack as soon as people found out.
This gamble reflects a broader trend in the #tech industry: everyone is shipping features as quickly as an #LLM can write lines of code, with no way to spot problems until something breaks or someone sues them.
Vibe prototyping replaced thinking through things. But without direction, moving faster is worthless.
I really think it is time for us to treat LLM usage as another form of metadata, such as licensing.
As a user, I want to know if my software contains LLM-generated code.
As a developer, I want to know if a project accepts LLM usage.
As a web-surfer I want to know if this content has been made by a human.
I don't think we can trust people (especially companies) to disclose their usage, so it's essentially a web-of-trust/web-of-shame.
Is any RFC already up? I wanna talk about this.
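I'm not aware of any RFC either. As a purely hypothetical strawman (every file name and key below is invented for illustration; no such standard exists), per-project disclosure metadata might look like:

```toml
# Hypothetical "LLM-USAGE.toml" shipped at a project root.
# All keys are invented for this sketch, not an existing standard.
[disclosure]
code = "assisted"      # none | assisted | generated
docs = "none"          # same scale, for documentation
media = "none"         # images, audio, etc.
policy = "https://example.org/llm-policy"  # the project's stated rules
attested-by = "maintainer"                 # who vouches for the claim
```

Since self-reporting can't be trusted, a web-of-trust layer could sign or dispute such declarations rather than take them at face value.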
Uncomfortable questions..
- To what extent is #FOSS complicit in the rise of #BigTech?
- To what extent is FOSS complicit in the disruptive #AI craze we face today?
- To what extent would vibe-coding #LLMs even be possible without FOSS?
"BUT.. BUT.. The License!"
- To what extent does slapping on a license free us from responsibility, knowing that it hardly offers protection from abuse?
- To what extent did FOSS, too, just introduce the tech and damn the externalities?
- To what extent is FOSS complicit in the current state of the world?
- To what extent is it enough to consider FOSS "imbued with good morals and values" if we can't defend those?
- We are clear. Because our intentions are good: 5
- We are clear. We just code. Bad actors abuse it: 7
- We must find better ways to protect our work: 40
- Other (please comment): 6
Closed
RE: https://mastodon.online/@mastodonmigration/116312883173526888
There we go!
I feel like I keep reposting this every week or so..
Bit by bit #Bsky is sliding towards just another clone of #Twitter and #Facebook
Actions speak louder than words
The #Fediverse remains the only truly open-source, self-hosted, worldwide community driven by the people
Going #AI (#LLM) is a CHOICE, and they again chose wrong
How About Some AI With Your Bluesky?
A tale of two social networks.
Last week some enterprising Mastodon account was discovered to be scraping posts to feed to an AI for the purpose of helping people navigate the Fediverse. The response was swift. The alarm went out. The account was widely blocked and shunned.
Yesterday, to great fanfare, #Bluesky announced as a new corporate feature that all posts would be scraped and an AI would now help users navigate the ATmosphere.
https://techcrunch.com/2026/03/28/bluesky-leans-into-ai-with-attie-an-app-for-building-custom-feeds/
I've been using a digital camera for many years and as a result have a lot of photographs.
How many is a lot?
$ ls -1R Pictures/ | wc -l
53190
Yeah, lots.
Despite having spent lots of time trying to create meaningful directory names, it's still not always easy to find the photo I'm looking for.
What would actually be a USEFUL tool for AI would be something that I could run locally which could examine each of my photos and build some kind of free-text database of their contents which I can then grep.
But as far as I can tell nothing along those lines exists. Why have AI tools spent so much time trying to create faked photos and not producing something actually valuable?
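That index-then-grep idea can be sketched in a few lines. A minimal skeleton follows; the `caption()` function here is only a placeholder for whatever local image-captioning model you would plug in (no real tool or model API is named or implied):

```python
import os

def caption(path):
    """Placeholder: a real version would run a local captioning model
    on the image and return a free-text description."""
    return "caption for " + os.path.basename(path)

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def build_index(root, index_path):
    """Walk `root`, caption every image, and write one
    'path<TAB>caption' line per photo, so the result is greppable."""
    with open(index_path, "w", encoding="utf-8") as out:
        for dirpath, _dirs, files in os.walk(root):
            for name in sorted(files):
                if os.path.splitext(name)[1].lower() in IMAGE_EXTENSIONS:
                    full = os.path.join(dirpath, name)
                    out.write(f"{full}\t{caption(full)}\n")

# Usage sketch: build_index("Pictures", "photo-index.tsv")
```

After indexing, something like `grep -i beach photo-index.tsv` would surface candidate paths, which is exactly the free-text-database-you-can-grep workflow described above.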
Boost plz!
Looking for critical scholarship on the use of "AI" by library/archive workers. University libraries in particular, but adjacent and tangentially-relevant-at-best stuff is welcome too. Any format is fine: books, papers, blogposts, whatever. If it's good, gimme all you've got!
Looks like we're gonna have a department-wide conversation about people using LLMs, and it's being framed as "we're all using it, but we're not talking about it, so let's make sure we're all on the same page about using it responsibly" ... I'll of course be pushing the "there's basically no way to use it responsibly" position, and I'd like to arm myself and others with some critical analyses of issues related to its use in library/archive spaces.
Skill up with this LLM and Agentic AI-focused bundle of eBooks! 🤖🆙
Got my performance review today.
Positive feedback: literally every member of my team says I'm the best manager they have ever had. I solved multiple long-standing problems the team has been dealing with for years. Team members feel safe to share their struggles, everyone feels empowered, everyone receives valuable feedback.
Negative feedback: I am not enthusiastic enough about AI.
Overall ranking: 3/5.
Anyone hiring for a fully remote team lead?
@dvl Ah, then I misunderstood your toot — English is not my first language.
#Slopware is a term for software made with the use of LLM(s). And I'm trying to avoid the likely decline in quality of such software: it may be free of critical bugs for the next few releases, but I'm afraid of stupidly stupid bugs that will break my habits and routines down the road. I've already seen this kind of quality decline with Windows 11 updates, or with the Cloudflare outages caused by "AI" "programming".
But the main problem for me isn't future bugs; people can write dangerous bugs too. What matters much more to me is the intent of the programmer who made the changes. If a programmer makes something entirely with their own mind, then I presume they had some fun making it, and I fully understand that intent: "have fun and make something useful along the way".
But if a programmer used an #LLM while making commits to an #opensource program, I become extremely suspicious. Why was it done that way? Does the programmer not care about the software they modified? Are they a "passer-by committer" who doesn't want to study the codebase at the necessary depth? Or is the "move fast and break things" mentality deeply rooted in their mind? All of these are precursors of badly made software with bad UI and UX, I think.
Obviously I'm a big fan of the "move slow and make things" mentality and dislike programs made with the "move fast and break things" mentality, which, as I see it, is the main reason LLMs are used in programming (e.g. in the corporate world).
Bernie vs. Claude, https://youtu.be/h3AtWdeu_G0.
An awesome, short video (9 min) in which Bernie Sanders asks Claude how AI and data-privacy violations threaten democracy. Claude is surprisingly honest and lucid about all the problems.
It’s a great checkmate. Must see.
It seems hard to escape the AI virus. It's also infecting the open source world…
https://codeberg.org/small-hack/open-slopware
#FOSS #OpenSource #tech #technology #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #Fuck_AI #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude #editor #app #apps #tools #software #linux #FreeSoftware #free #BigTech
Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game
This looks so terrible, and Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look uncanny and sloppy.
They're advertising this as the next-gen DLSS. This is their image of what the future of gaming should be: a massive fuck-you to every artist in the industry.
https://www.nvidia.com/en-us/geforce/news/dlss5-breakthrough-in-visual-fidelity-for-games/
A company builds #ia #llm to monitor waiters' smiles. In other words, if you don't smile enough = #desemprego (unemployment).
Now restaurant servers have to walk around with the same face as Jack Nicholson's #coringa (Joker).
https://odysee.com/@rossmanngroup:a/this-company-built-ai-to-enforce:0
If instance admins allow AI agents on their platform and those agents keep harassing us, I have no choice other than to silence that instance
Again, I do not pay these massive costs each month to host robots
Let's keep the #Fediverse human shall we?
https://podcastindex.org/podcast/6951694?episode=51753787815
An essential #podcast episode from @samadeu@mastodon.social, to be shared above all with anyone not yet aware of these concepts.
Synopsis:
Gabriel de Moraes and Sérgio Amadeu da Silveira analyze the new agreements between #OpenAI and #Anthropic and the United States Department of Defense, discussing the advance of artificial intelligence into the military sector. In the debate, they also comment on the statement by the president of Brazil's Federal Data Processing Service (#Serpro) about digital sovereignty and technological infrastructure in #Brasil. What are the risks of militarizing AI? What does this mean for technological sovereignty? [...]🌐 https://tecnopolitica.blog.br/#podcast
To subscribe to the podcast in any aggregator (such as #AntennaPod): https://anchor.fm/s/f8204060/podcast/rss
#ética #NoAI #LLM #soberaniaDigital #Serpro #nuvem #BigTech #Google #AWS #Microsoft #Oracle #Palantir #CloudAct #governo #tecnopolítica #softwarePrivativo #tecnologia #computação #administraçãoPública #UFABC
Crazy: some firms plan to spend 200 thousand dollars per year, per employee, on #LLM credits! Imagine how wild it would be if, instead of shoveling that money into the yacht† of a single #IA billionaire, they raised their workers' salaries, or hired more people to be more efficient?
How many ~software engineers~ earn 200k USD a year?
† you *know* the word I wanted to put there wasn't "yacht", right?
RE: https://mastodon.social/@_elena/116210085518302030
In addition to the people already mentioned by @ele below, I highly recommend the following as well, for some critical counter views & research related to contemporary AI and its impacts on politics, climate, energy, education, arts...
@alineblankertz
@anaiscrosby
@asrg
@bildoperationen
@danmcquillan
@gerrymcgovern
@Iris
@JulianOliver
@olivia
@rostro
@thomasfricke
@w0bb1t
(Ps. I write about these topics too, semi-regularly, but it's not the sole focus of this account...)
#AI #NoAI #CriticalAI #LLM #FollowFriday
Dear Fedi friends,
I'd like to put together a list of people who are publicly resisting / calling out LLMs and AI slop.
Why? I enjoy reading my Fediverse feed in topical lists and I need something to counteract the unrelenting AI hype I see in the media.
Do you have any recommendations?
So far, at the top of my list I have:
@timnitGebru @emilymbender and @alexhanna of @DAIR
plus @cwebber @jaredwhite and @tante
Anyone else to recommend who advocates for #NoAI?
Didn't read the news for a week (bc I was returned to the office and prefer to sleep more), and reading it now:
— Vim became LLM slop
— ntfy is LLM slop now
— systemd is LLM slop too
What a time to be asleep^Walive
Looks like my fondness for old and simple solutions has done me a good turn. Time to throw the fuck away ntfy from my server and use SMTP or XMPP to send alerts to myself.
P.S. I hope #Emacs itself won't become LLM slop one day. Replacing it would not be as easy as replacing ntfy.
@_elena especially for the environment but also other aspects connected to that: @gerrymcgovern who also wrote this book: https://gerrymcgovern.com/books/99th-day/
Another day, another company reducing IT and software-dev jobs to replace them with AI, despite many reports indicating that Gen AI doesn't work as promised. Get ready for more outages for Jira and co ;) ‘Devastating blow’: Atlassian lays off 1,600 workers ahead of AI push https://www.theguardian.com/technology/2026/mar/12/atlassian-layoffs-software-technology-ai-push-mike-cannon-brookes-asx
I didn't think Jira could get any more messed up, but I underestimated their commitment to the bit. AI is a bold choice for a platform that struggles with basic navigation. Lmao
I usually get the same feeling in two situations:
- students who say LLM detectors flag their text as machine-generated
- professors who say they can always tell when a text was LLM-generated.
Not that I can't spot generated text, but I never feel that confident in my detection, and I end up giving the benefit of the doubt.
Especially while we don't have an institutional solution for this.
A couple of scumb... I mean, entrepreneurs, launched a company to replace customer support with #IA.