buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc., all around the world.
This server runs the snac software and there is no automatic sign-up process.
Also, "generational gap"? Are you going to tell us next that millennials eat too much avocado toast?
A little tonic for the #aihype
I'm Sorry to Burst Your Bubble: You Are Being Fooled About AI, and You Will Soon Feel Really Stupid
https://substack.com/home/post/p-187213882
"Open" LLM models are almost never Open Source.
They are "Open Weights." This means the company allows you to run the model, but you have no right to see how it was made or what data it was trained on.
We need to stop letting companies redefine "Open Source" to mean "available for download." Words have meanings.
If a company claims they are open source but plans to "someday" release the code, or hides the training data, they are a poser. This is "Open Washing." They are co-opting the term to earn the community's goodwill for marketing without actually respecting the four freedoms.
#NoAI #FOSS #OpenSource #GNULinux #GNU #Linux #Privacy #OpenWashing #OpenWeights #NotOpenSource #TechEthics #FreeSoftware #Fediverse #AIHype #Enshittification #DigitalSovereignty
[…] at least 70% of the AI hype that’s bottom-up — eg, excited users — is people who never learned how to actually use their computer suddenly realizing the computer can do things for them
https://bsky.app/profile/amyhoy.bsky.social/post/3la6iaxohs22x
A chatbot entirely powered by humans, not artificial intelligence? This Chilean community shows why
A Requiem for My Dignity, Sacrificed at the Altar of AI, Sightless Scribbles https://sightlessscribbles.com/posts/20250902/ #AI #AiHype
Oh dear. I accidentally commented on a Hacker News story in a way that was somewhat critical of vibe coding. https://news.ycombinator.com/item?id=46421599
AI is not artificial intelligence. What we encounter is a very very very very very good AUTOCOMPLETE.
@firefoxwebdevs It's not only about trust. It's also a question of how Firefox wants to be part of destroying our climate and water resources in a time of #climateEmergency and growing #desertification, thanks to the #datacenters needed by the #AIHype!
Software that contributes to this destruction, even though it could work without it, is not an option for me. If it forces me to use such features, I even consider that criminal. Firefox/the CEO wants AI.
One of my favorite audiobook publishers, Podium Audio, hinted over and over and over again that they would love to eventually get into the AI narration space, and I don't have any words of anger anymore. I just hate this whole timeline. Just fucking stop it. You look pathetic even mentioning AI narrators in an interview about your company. Just stop it. Can we start bullying companies? Because these CEOs are just pathetic. https://www.youtube.com/watch?si=9uAB4mRceo0vK8pO&v=pfZNAIsPZR4&feature=youtu.be #AI #AIHype
RE: https://mstdn.social/@hkrn/115731381418227758
I had to stop and double-check that this wasn't an @theonion article.
So the path to Tech Bro runs through Westminster now, but only one Brit can be one at any given time.
LLM critics should start talking about the psychological harms LLMs are doing to creatives. I have a writer friend. She's a really fantastic writer. I don't know what happened, but she started becoming very insecure about her own writing and started asking LLMs like Claude to revamp it and help her brush up on the stuff she was lacking. Her writing isn't amateurish at all. It really isn't. Her descriptions are clear. She does a lot of telling when it comes to actions, but honestly, I get tired of writers showing everything all the time. I truly think telling can work better in some cases.
But then she started spiraling. She's losing confidence in her own writing. When she'd attend writing group with me, she'd say she didn't have anything that week because she had writer's block. She sends me forwarded emails of *other* people saying the LLM writing is better because it prints stuff like "the moon hung in the sky like a polished fingernail," where she would write something such as "the moon glistened in the pitch darkness." Give or take; that was just off the top of my head.
But she's losing her own confidence. She truly believes LLMs are better writers than she is because they produce, well, sentences trained on billions and billions of words, with fine-tuned metaphors and so on. She feeds everything into an LLM. She hates her own writing now, when she legit wasn't a bad writer in the first place.
I suggested she write something just for herself for a while and not publish it. I also suggested writing some fanfiction for fun. This is helping some, but it isn't fully curing her of her newfound insecurity about her own writing, especially when she reads those emails saying things like "oh, the LLM did a better job at X, Y, and Z."
It honestly is heartbreaking to watch. I don't know how we can combat this, but for one, I wish readers would honestly quit saying something is badly written. Your bad writing is someone else’s masterpiece.
Another thing I wish readers would do is maybe understand that writing changes with our mood. Someone's voice may change after a terrible event. Someone may want to try a new voice. Someone may revert to an old voice. None of these is worse than the others.
And honestly, the longer I edit people's works, the more I come to realize that bad writing simply doesn't exist. There's just different writing styles and styles you hate. That's it.
#AI #AIHype
"The #AI hype machine relentlessly promotes the idea that we’re on the verge of creating something as intelligent as humans, or even “superintelligence” that will dwarf our own cognitive capacities.
But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build."
Best critique of AI on the internet without buying into the hype https://www.youtube.com/watch?v=TtVJ4JDM7eM #AI #AIHype
Stop overhyping AI, scientists tell von der Leyen @euractiv
「 These are marketing statements driven by profit-motive and ideology rather than empirical evidence and formal proof 」
「 The scientific development of any potentially useful AI is not served by amplifying the unscientific marketing claims of US tech firms 」
https://www.euractiv.com/news/stop-overhyping-ai-scientists-tell-von-der-leyen/
New, by me: Fission in Troubled Waters: The Familiar Story of LLM Hype
The hype around LLMs has been discussed at length around the Fediverse. I started thinking about how much history rhymes, and realized this is no exception.
The most obvious parallel from an economic perspective is the dotcom era, but thinking from a more societal perspective I realized that it's also an echo of an earlier tech boom: nuclear fission. That's the topic of my latest post on The Security Economist.
https://www.securityeconomist.com/fission-in-troubled-waters/
Is there a database of wrong answers of ChatGPT that could be used to give my students the experience that AI sometimes makes stuff up?
Bonus points for wrong facts related to the environment, natural sciences, or biology.
When they get a wrong answer, the resulting discussion could be a very productive opportunity to learn critical thinking in modern times.
The situation is dire, and we really need to counter the #AIHype before it is too late. My colleagues are split into a small group of AI-brainwormed ones (unfortunately, we are almost forced to attend seminars with titles like "How AI can improve your teaching") and many totally desperate professors who do not know how to deal with the situation.
@VaryIngweion
@capita_picat
#TeachingInTheAgeOfAI #Education #HigherEd #SciComm #AcademicChatter
"In 2024 alone, private U.S. investment in artificial intelligence reached roughly $109 billion, according to Stanford’s AI Index Report 2025, and major firms are now committing hundreds of billions more to AI infrastructure and data centers in 2025. For comparison, researchers estimate that ending homelessness nationwide would cost on the order of $9–30 billion a year; clearing the public-transit repair backlog would require about $140 billion; and cancelling student debt for millions of Americans could cost anywhere from $300 billion (for a $10,000 per-borrower plan) to over $870 billion. These are not impossible sums; they are simply directed elsewhere. The problem is not that we invest in technology — it’s that we do so to avoid investing in one another. We could rebuild public transit, deliver universal healthcare, cancel student debt, end homelessness — all projects of collective possibility — but instead we feed the circus. We reward those who promise salvation through algorithms while punishing those who simply demand dignity through policy. And if AI spend is today’s growth engine, it is also today’s concentration risk — a stimulus routed through a handful of firms, data centers, and supply chains.
The defenders of this frenzy will tell us to look forward — to think of the productivity gains that will surely justify today’s excess. It is, they insist, short-sighted to critique the future before it arrives. But history is not a ledger that balances itself. Productivity gains are not moral gains; they are distributed, like everything else, according to power. The Industrial Revolution doubled output but also deepened inequality. The internet democratized information yet concentrated ownership. Efficiency rose; bargaining power fell."
I'm all for challenging received wisdom and assumptions, and I think magical thinking can be quite good if practiced well. But this is something else. To me this feels like the celebration of a dangerous kind of induced ignorance.
#AI #GenAI #GenerativeAI #AIHype #delusions #MagicalThinking
The reason for this shouldn't be hard to see but apparently is. Simplistically, science is about hypothesis-driven investigation of research questions. You formulate the question first, you derive hypotheses from it, and then you make observations designed to tell you something about the hypotheses. (1)(2) If you stuff an LLM in what should be the observations part, you are not performing observations relevant to your hypothesis, you are filtering what might have been observations through a black box. If you knew how to de-convolve the LLM's response function from the signal that matters to your question, maybe you'd be OK, but nobody knows how to do that. (3)
If you stick an LLM in the question-generating part, or the hypothesis-generating part, then forget it, at that point you're playing a scientistic video game. The possibility of a scientific discovery coming out of it is the same as the possibility of getting physically wet while watching a computer simulation of rain. (4)
If you stick an LLM in the communication part, then you're putting yourself on the Retraction Watch list, not communicating.
#science #LLM #AI #GenAI #GenerativeAI #AIHype #hype
(1) I know this is a cartoonishly simple view of science, but I do firmly believe that something along these lines is the backbone of it, however real-world messy it becomes in practice.
(2) A large number of computer scientists are very sloppy about this process (and I have been in the past too), but that does not mean it should be condoned.
(3) Things are so dire that very few even seem to have the thought that this is something you should try to do.
(4) Yes, you might discover something while watching the LLM glop, but that's you, the human being, making the discovery, not the AI, in a chance manner despite the process, not in a systematic manner enhanced by the process. You could likewise accidentally spill a glass of water on yourself while watching RainSim.
🎬🎥🍿 Video of my keynote at MathPsych2024 now available online https://www.youtube.com/watch?v=WrwNPVTjJpo
#CogSci #CriticalAI #AIhype #AGI #PsychSci #PhilSci 🧪 https://youtu.be/WrwNPVTjJpo?feature=shared
The redefinist fallacy occurs when, instead of working through a question to find a real answer, we simply define one possibility to be the answer. Something to think about whenever someone tells you that ChatGPT is better than human beings at some task or another.
Yes, That Viral LinkedIn Post You Read Was Probably AI-Generated
A new analysis estimates that over half of longer English-language posts on LinkedIn are AI-generated, indicating the platform’s embrace of AI tools has been a success.
I'm not going to link it. It should be easy enough to find if you must.
"indicating the platform's embrace of AI tools has largely polluted the information ecosystem there" is how it ought to read. "Success" is a wholly inappropriate word to use.