buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
«AI Breaks Things ("KI macht kaputt") — a dozen examples of how the AI hype is smothering everything. Buoyed by AI?
We are currently experiencing the greatest technological disruption of all time – and if not the greatest, then certainly the fastest. Enormous productivity gains, scientific breakthroughs. You know the narrative.»
Are there any positive articles about AI that are technically substantive and not just advertising?
🤔 https://www.computerwoche.de/article/4137800/ki-macht-kaputt.html
#kikritik #kihype #ki #kritik #hype #aihype #aicriticism #aislop #tech
One thing I thought #LLMs were good for was translation. Apparently #Gemini and others aren’t that great at that either.
#Wikipedia restricted contributors from a nonprofit called the Open Knowledge Association (#OKA) after editors discovered #AI-assisted translations added factual errors and incorrect citations.
As predicted, humans will be relegated to cleaning up the mess LLMs leave behind, for salaries far below the value of full-time employment to do the job properly.
[…] Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer‑review mechanisms.
https://www.404media.co/ai-translations-are-adding-hallucinations-to-wikipedia-articles/
“Don't let companies dictate the future.”
This is now my motto. Thanks for putting it so succinctly.
#enshittification #aihype #corporatism #BankerLivesMatter #SinglePayerOrGuillotine #AI
Karen Hao: "There’s a really dark history around attempts to quantify human intelligence. There’s basically never been any endeavor to quantify or rank human intelligence without some kind of insidious motivation behind it. So in general, yeah, this entire idea of recreating human intelligence is actually quite fraught. And also, one of the challenges that we’re facing now is, the AI industry has become so resource-rich that most of the AI researchers in the world now are bankrolled by the companies that are ultimately trying to just sell us their technologies.
And there has become this distortion in the fundamental science that is coming out of these researchers in terms of understanding the capabilities and limitations of AI today in the same way that you would imagine climate science would be deeply distorted if most climate scientists were bankrolled by the fossil fuel industry. You would just not get an accurate picture on the actual climate crisis.
And so, we are not actually getting an accurate picture on the capabilities of these systems and all of the different ways that they break down, because a lot of these companies now censor that kind of research or don’t even allow that research to be resourced. So there’s never any investigation along those lines."
"AI doomerism" and AGI horror stories distract from the legitimate and urgently needed criticism of the current AI hype. Paradoxically, AI startups even use these horror stories as advertising. So you have to be extremely careful which criticism you align yourself with.
@emilymbender and @alex untangle this very well in their extremely important book The AI Con, which unfortunately has not yet been translated into German.
In German, I recommend the excellent slim Reclam volume by @RainerMuehlhoff: Künstliche Intelligenz und der neue Faschismus.
If you prefer listening, there is a great conversation that @kattascha had with Prof. Mühlhoff for her podcast: https://www.denkangebot.org/allgemein/rainer-muehlhoff-ueber-ki-und-autoritaere-sehnsuechte-im-silicon-valley/ – audio download: https://www.denkangebot.org/podlove/file/52/s/feed/c/mp3/DA046-1.mp3
LLMs Don’t.
Model Collapse Ends AI Hype. George D. Montañez, PhD.
LLMs Don’t Think: They process tokens via statistical patterns, lacking internal states or understanding
LLMs Don’t Reason: They exploit superficial cues and rationalize answers post-hoc, failing at adaptive problem-solving
LLMs Don’t Create: They recycle and degrade existing information, unable to escape the "syntax trap" (manipulating symbols without semantic grounding)
https://yewtu.be/ShusuVq32hc or on the #nerdreic’s attention farm https://youtu.be/ShusuVq32hc
Also, "generational gap"? Are you going to tell us next that millennials eat too much avocado toast?
[…] at least 70% of the AI hype that’s bottom-up — eg, excited users — is people who never learned how to actually use their computer suddenly realizing the computer can do things for them
https://bsky.app/profile/amyhoy.bsky.social/post/3la6iaxohs22x
A Requiem for My Dignity, Sacrificed at the Altar of AI, Sightless Scribbles https://sightlessscribbles.com/posts/20250902/ #AI #AiHype
Oh dear. I accidentally commented on a Hacker News story in a way that was somewhat critical of vibe coding. https://news.ycombinator.com/item?id=46421599
@firefoxwebdevs It's not only about trust. It's also a question of how far Firefox wants to be part of destroying our climate and water resources in a time of #climateEmergency and growing #desertification, thanks to the #datacenters needed by the #AIHype!
Software that contributes to this destruction, even though it could work without it, is not an option for me. If it forces me to use such functions, I consider that even criminal. Firefox/the CEO wants AI.
One of my favorite audiobook publishers, Podium Audio, hinted over and over and over again that they would love to eventually get into the AI narration space, and I don't have any words of anger anymore. I just hate this whole timeline. Just fucking stop it. You look pathetic even mentioning AI narrators in an interview about your company. Just stop it. Can we start bullying companies? Because these CEOs are just pathetic. https://www.youtube.com/watch?si=9uAB4mRceo0vK8pO&v=pfZNAIsPZR4&feature=youtu.be #AI #AIHype
I'm all for challenging received wisdom and assumptions, and I think magical thinking can be quite good if practiced well. But this is something else. To me this feels like the celebration of a dangerous kind of induced ignorance.
#AI #GenAI #GenerativeAI #AIHype #delusions #MagicalThinking
The reason for this shouldn't be hard to see but apparently is. Simplistically, science is about hypothesis-driven investigation of research questions. You formulate the question first, you derive hypotheses from it, and then you make observations designed to tell you something about the hypotheses. (1)(2) If you stuff an LLM in what should be the observations part, you are not performing observations relevant to your hypothesis, you are filtering what might have been observations through a black box. If you knew how to de-convolve the LLM's response function from the signal that matters to your question, maybe you'd be OK, but nobody knows how to do that. (3)
If you stick an LLM in the question-generating part, or the hypothesis-generating part, then forget it, at that point you're playing a scientistic video game. The possibility of a scientific discovery coming out of it is the same as the possibility of getting physically wet while watching a computer simulation of rain. (4)
If you stick an LLM in the communication part, then you're putting yourself on the Retraction Watch list, not communicating.
#science #LLM #AI #GenAI #GenerativeAI #AIHype #hype
(1) I know this is a cartoonishly simple view of science, but I do firmly believe that something along these lines is the backbone of it, however real-world messy it becomes in practice.
(2) A large number of computer scientists are very sloppy about this process--and I have been in the past too--but that does not mean it should be condoned.
(3) Things are so dire that very few even seem to have the thought that this is something you should try to do.
(4) Yes, you might discover something while watching the LLM glop, but that's you, the human being, making the discovery, not the AI, in a chance manner despite the process, not in a systematic manner enhanced by the process. You could likewise accidentally spill a glass of water on yourself while watching RainSim.
🎬🎥🍿 Video of my keynote at MathPsych2024 now available online https://www.youtube.com/watch?v=WrwNPVTjJpo
#CogSci #CriticalAI #AIhype #AGI #PsychSci #PhilSci 🧪 https://youtu.be/WrwNPVTjJpo?feature=shared
The redefinist fallacy occurs when, instead of working through a question to find a real answer, we simply define one possibility to be the answer.

Something to think about whenever someone tells you that ChatGPT is better than human beings at some task or another.
Yes, That Viral LinkedIn Post You Read Was Probably AI-Generated
A new analysis estimates that over half of longer English-language posts on LinkedIn are AI-generated, indicating the platform’s embrace of AI tools has been a success.
I'm not going to link it. It should be easy enough to find if you must.
"indicating the platform's embrace of AI tools has largely polluted the information ecosystem there" is how it ought to read. "Success" is a wholly inappropriate word to use.