buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #FuckAI #Fuck_AI #enshittification #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
You don't have to pretend that Claude Code's source code is lovely just because you like using it or are impressed by whatever madness is going on around AI right now.
#AI #GenAI #GenerativeAI #LLMs #Anthropic #Claude #ClaudeCode #ClaudeCodeLeak #AgenticAI #tech #dev #software #SoftwareEngineering #SoftwareDevelopment
Buddy. I've written COBOL. I spent several years working almost daily with a 3-million-line monstrosity of a COBOL program. I was working on another app that interfaced with it, but in that work I occasionally had to read the code and in a few cases modify it. Granted I haven't spent as much time looking at the leaked Claude Code source code (and won't lol), but nevertheless I confidently declare that Claude Code is worse. "Spaghetti code" doesn't come close to describing this thing.
#AI #GenAI #GenerativeAI #LLMs #ClaudeCode #ClaudeCodeLeak #Anthropic #Claude #tech #dev #SoftwareEngineering #SoftwareDevelopment #software #COBOL #LinkedIn
#AI #GenAI #GenerativeAI #LLMs #Anthropic #Claude #ClaudeLeak
Journal for AI Generated Papers
One positive I can think of is that folks who wish to "collaborate" with machines can congregate there, giving the rest of us a clear signal about who to block, ignore, critique, ridicule...
Where humans and machines are welcomed.
The Open Prompting Journal Built Collaboratively by its Community.
cc @olivia@scholar.social @Iris@scholar.social @dingemansemark@scholar.social @alex@dair-community.social @emilymbender@dair-community.social
#AI #GenAI #GenerativeAI #github #tech #dev #SoftwareDevelopment
I worry that few smaller companies or startups will survive, and the country will be pockmarked with half-constructed data centers and obsolete equipment. This is not like the dot-com crash, which at least left behind useful, unused fiber-optic networks.
#AI #GenAI #GenerativeAI #software #tech #dev #AssetBubble #PrivateCredit
"In 2018, more than 4,000 Google employees signed a letter opposing the company’s contract to build artificial intelligence for the Pentagon’s targeting systems. Workers organised a walk out. Engineers quit. And Google ultimately abandoned the contract."
Worker organisation has power.
Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have permission from the third party owners" https://www.linuxfoundation.org/legal/generative-ai
"If"? Why not "whenever"? https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 https://dl.acm.org/doi/10.1145/3543507.3583199 https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7 https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
And how would the contributor even be aware, should they research every snippet for hours?
Seems like an impossible policy, or am I missing something...?
#AIslop #LLMslop #LLM #LLMs #slop #generativeAI #Linux #opensource #linuxfoundation
Here's John Searle in 1983:
Marvin Minsky of MIT says that the next generation of computers will be so intelligent that we will ‘be lucky if they are willing to keep us around the house as household pets.'
Here's Joseph Weizenbaum in 2007:
Professor Marvin Minsky of MIT once pronounced—a belief he still holds—that ‘‘the brain is merely a meat machine.’’
He goes on to note that meat is dead and might be eaten or thrown out. Flesh is what's alive. He also draws attention to the word "merely", as in "nothing more than".
I share with Weizenbaum the belief that Minsky has clearly expressed a disdain for human intelligence. We're on the order of household pets. Our brains are no more than food or trash. Obviously Minsky doesn't speak for all AI researchers then or since, but his "meat machine" language is all over the place, and this disdain or even contempt for human intelligence and achievement is also common.
It definitely doesn't speak to a curiosity about intelligence, which I think requires at least a little bit of love and esteem.
#AI #ArtificialIntelligence #intelligence #GenAI #GenerativeAI
Online bot traffic will exceed human traffic by 2027, Cloudflare CEO says | TechCrunch https://techcrunch.com/2026/03/19/online-bot-traffic-will-exceed-human-traffic-by-2027-cloudflare-ceo-says/ #AI #ArtificialIntelligence #GenerativeAI #AgenticAI #bots #cybersecurity #technology
It seems hard to escape the AI virus. It's also infecting the open source world…
https://codeberg.org/small-hack/open-slopware
#FOSS #OpenSource #tech #technology #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #Fuck_AI #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude #editor #app #apps #tools #software #linux #FreeSoftware #free #BigTech
Another example of how (whole)-systems thinking is very helpful for parsing the effects of technology changes like this.
https://freakonometrics.hypotheses.org/89367
#AI #GenAI #GenerativeAI #LLMs #AgenticAI #GPT #ChatGPT #Claude #Gemini #ActuarialScience #insurance
Employees who are impressed by vague corporate-speak like “synergistic leadership,” or “growth-hacking paradigms” may struggle with practical decision-making, a new Cornell study reveals.
From https://news.cornell.edu/stories/2026/03/workers-who-love-synergizing-paradigms-might-be-bad-their-jobs
I tried reading this article replacing variations of "corporate" with "LLM" and it works. Right down to the "LLM Bullshit Receptivity Scale (LBSR)".
No matter how esoteric AI literature has become, and no matter how thoroughly the intellectual origins of AI's technical methods have been forgotten, the technical work of AI has nonetheless been engaged in an effort to domesticate the Cartesian soul into a technical order in which it does not belong. The problem is not that the individual operations of Cartesian reason cannot be mechanized (they can be) but that the role assigned to the soul in the larger architecture of cognition is untenable. This incompatibility has shown itself up in a pervasive and ever more clear pattern of technical frustrations. The difficulty can be shoved into one area or another through programmers' choices about architectures and representation schemes, but it cannot be made to go away.
From Phil Agre's 1995 article The Soul Gained And Lost.
If one were to continue the genealogy in this article from 1995 to present, one would find many of the same issues inherent in Cartesian dualism present in large language models. Like the STRIPS system Agre surveys, LLMs also generate sequences. They also must make choices among many available options at each step of sequence generation. They also use heuristics to guide this process that would otherwise explode intractably. The heuristics, or what Agre dubs "determining tendency", are random number generators and "guardrails" in LLMs instead of the tree-structured search of previous-generation AI systems. But otherwise the systems are structured similarly.
It's fascinating, but not coincidental, that the determining tendency of AI systems like these is so often perceived to have mystical or even God-like qualities. Breathless predictions about the endless potential of tree-structured search in early writing on GOFAI resemble modern proclamations of imminent AGI or superintelligence among generative AI boosters, because both of these mechanisms---tree search and random number generation---are situated where the Cartesian soul would be. These mysterious determining tendencies, homunculi of last resort, or souls are timeless, acausal factors that choose a single path from an infinite space of possibilities, and thereby direct the encompassing agent's behavior in an intelligent manner.
This is one reason why I posted the other day that if you removed the random number generation from LLMs, the illusion of their intelligence would more than likely quickly evaporate. You'd be excising their soul, leaving behind a zombie!
#AI #GenAI #GenerativeAI #LLM #GOFAI #search #heuristics #CartesianDualism #IntelligenceAsRandomNumberGeneration
I would like to thank the nascent "AI" industry for their significant contributions to all manner of artistic and creative endeavours in today's society: writing, coding, art, music, and everything else. [1]
Because they have single-handedly created entire new markets for all of these things - new categories such as "writing with guaranteed no AI", "coding with guaranteed no AI", "art with guaranteed no AI", "music with guaranteed no AI", etc. Without them, these whole classes of creative output would simply not exist.
[1] They are also innovating in the world of financial and investor fraud, but I'm not considering those areas in this post.
#AI #LLM #GenerativeAI #slop #NoAI #artistic #creative #GTFO #fraud #InvestorFraud #CoPilot #ChatGPT #ClaudeCode
Loved reading this…
Microslop
https://www.s-config.com/microslop
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #Fuck_AI #microslop #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude #linux #FOSS #OpenSource
"Even worse was the suggestion by Grammarly’s A.I. version of me to replace the first sentence of the news article with an anecdotal opening describing a fictional person named Laura whose privacy had been violated.
“Laura, a patient searching for relief from a chronic condition, clicks through her hospital’s website to schedule an appointment. In just a few moments, her most private medical details — her reason for visiting, her doctor’s name and even the treatment she seeks — are quietly sent to Facebook, without her knowledge,” the bot suggested with a button allowing the user to paste that excerpt straight into the article.
Replacing a factual sentence with an imagined story about a person who doesn’t exist is not only bad editing. It’s a deception that could end my career as a journalist (or the career of any journalist who took that terrible advice).
And this is the problem with A.I. It doesn’t know truth from fiction. It doesn’t know an investigative news article from an offhand comment. It flattens all content into word associations.
What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.
And it must be stopped."
https://www.nytimes.com/2026/03/13/opinion/ai-doppelganger-deepfake-grammarly.html
heise+ | Presentations with AI videos: conveying complex content efficiently
AI videos liven up presentations and hold your audience's attention. We describe how to create explainer videos efficiently with AI.
#GenerativeAI #IT #KünstlicheIntelligenz #Präsentation #news
I think some of the strange shifts we're seeing in high-profile folks in tech who already had authoritarian impulses---which, let's be real, are uncomfortably common among tech workers---come from them groping for ways to embrace taking power that do not run afoul of other values they've endorsed. This really can't be done unless the person was already pretty antisocial, so we see weird behavior such as running self-serving "surveys" about AI with foregone conclusions, microaggressions and dissembling, attacks and other forms of hostility, being "one-shotted" or conflating a computer program with humanness, etc. It's really a general problem, in that view, given how the US regime has shifted away from social democracy/liberalism into a much more brash, violent, and authoritarian stance. There are a variety of ways to cope with such a shift, one being to embrace it while bursting into a cloud of internal contradictions.
From the BlackRock Infrastructure Summit
Putting aside that this is asinine, as is typical of Sam Altman, who wants this future?
To make capitalism work here would require creating an artificial scarcity of intelligence. That immediately implies that education and publishing are both targets of this industry. Public education and public libraries would be likely casualties.
This also fits the general "enclosure of the commons" narrative that capitalist entities seem to follow. General intelligence is a commons the wealthy wish to enclose, gatekeep, and rent back to us in a degraded state.
In about 30 years, it’ll be released—featuring an Indiana Jones fully reconstructed from 40 terabytes of archival footage.
The plot is peak recursion: an AI generates code for Harrison Ford's digital twin to battle a rogue algorithm in virtual pixelated jungles. Pure digital cyberpunk wrapped in a "cassette futurism" and Cold War aesthetic:
"Soviet scientists in a secret bunker under Magadan accidentally awakened an ancient Sumerian algorithm trapped within copper circuits. This proto-AI doesn't just want to conquer the world, it wants to rewrite history by purging everything chaotic and illogical... namely, the USSR."
In the end, Indiana defeats the virus by simply pulling the plug. A digital silence falls over the world, while the 2056 audience pays for their tickets in crypto-yuan, fully aware that the film itself was created by the very AI the hero was fighting on screen.
#futureofcinema #GenerativeAI #SciFi #Cyberpunk #CassetteFuturism #RetroFuturism #DigitalTwin #DeadInternetTheory #Recursion #Dystopia #TechNoir #ColdWarAesthetic
The honeymoon phase of AI-driven productivity is meeting the harsh reality of system stability. Amazon has officially updated its internal policies to require senior engineers to sign off on any code changes assisted by generative AI. This move follows a series of significant service disruptions—referred to internally as "high blast radius" incidents—where AI-generated code led to major product outages.
For a company that values speed and a "you build it, you run it" culture, this is a massive shift. It turns out that while AI can write code in seconds, the cost of an error at AWS scale can be measured in hours of downtime and millions in lost revenue. We are seeing a necessary correction: AI is a powerful assistant, but it cannot yet be trusted with the keys to the kingdom without a seasoned human expert verifying the logic.
🧠 Amazon now mandates senior review for all AI-assisted code deployments.
⚡ The policy change follows a spike in high-priority Sev2 incidents.
🎓 Senior engineers must now act as the ultimate "bar raisers" for synthetic code.
🔍 This internal friction highlights the hidden costs of AI-driven development.
https://arstechnica.com/ai/2026/03/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes/
#EngineeringManagement #CloudComputing #GenerativeAI #security #privacy #cloud #infosec #cybersecurity
U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says
From https://www.nytimes.com/2026/03/11/us/politics/iran-school-missile-strike.html
Some blog posts:
CC: @timnitGebru@dair-community.social @emilymbender@dair-community.social @alexhanna@peertube.dair-institute.org @DAIR@dair-community.social @cwebber@social.coop @jaredwhite@indieweb.social @tante@tldr.nettime.org
In principle you could add a single "freeze the random seed" toggle to any of the major chatbots, and with that setting toggled on they would always return precisely the same output for a given input. Organisms and by extension humans cannot behave like this---no matter how stereotyped an organism's response may seem, it always differs, in however small a way, from a previous response---and the LLM's illusion should immediately be obvious by contrast. But, perhaps more interestingly for the folks who do think LLMs exhibit some form of sentience or intelligence: are we really meant to believe that a random number generator is the source of sentience or intelligence? You could hook up a random number generator to a machine that is otherwise deterministic and clearly not sentient or intelligent, and it suddenly becomes so? How do you explain that?
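The frozen-seed point is easy to demonstrate in miniature. Below is a toy sketch, not any real chatbot API: a "model" that samples tokens from a fixed next-token distribution, the way an LLM samples at each generation step. The vocabulary and weights are made up for illustration. With the seed frozen, every run is character-for-character identical; the random number generator is the only source of variation.

```python
import random

# Hypothetical toy vocabulary and next-token weights; a real LLM
# produces a distribution like this at every generation step.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]
WEIGHTS = [0.3, 0.2, 0.2, 0.1, 0.1, 0.1]

def generate(seed, length=8):
    """Sample a token sequence; identical seeds give identical output."""
    rng = random.Random(seed)
    return [rng.choices(VOCAB, weights=WEIGHTS)[0] for _ in range(length)]

# With the seed frozen, the "model" always says exactly the same thing.
assert generate(seed=42) == generate(seed=42)
```

The same property holds for a real transformer: the forward pass is deterministic arithmetic, so fixing the sampler's seed (or sampling greedily) pins down the entire output.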
There is no question that it is indefensible to use generative AI systems as they are currently constituted, especially the commercial ones, once one becomes aware of how they are made and operated and the destructive consequences they have already had and will surely continue to have. Among the many reasons using these tools is indefensible is that they represent an authoritarian invitation. You're invited to trade your morals and ethics for a bit of convenience, a reduction in friction, a learning experience, a rhetorical flourish, or maybe (a kind of) status. You thereby align yourself more and more with people who say things like "water is fake" or "fuck earth" as they make the computer systems enabling the horrors we're watching unfold on social media. You start to tell yourself stories, complexifying stories that explain why it's OK you did this thing that you know is not OK. You move in the direction of people who are already telling themselves stories like this. Maybe their stories have superior analgesic qualities to yours.
Nobody needs to go down this path.
#AI #GenAI #GenerativeAI #LLM #ethics #morality #authoritarianism
Hey @pluralistic you're one of the #generativeAI "editors" Grammarly is using for their "Expert Review" #enshittification 🤢
I tested it and took some screenshots... just pop in anything about #privacy projects and it will suggest digital rights advocates, with you at the top.
Karen Hao: "There’s a really dark history around attempts to quantify human intelligence. There’s basically never been any endeavor to quantify or rank human intelligence without some kind of insidious motivation behind it. So in general, yeah, this entire idea of recreating human intelligence is actually quite fraught. And also, one of the challenges that we’re facing now is, the AI industry has become so resource-rich that most of the AI researchers in the world now are bankrolled by the companies that are ultimately trying to just sell us their technologies.
And there has become this distortion in the fundamental science that is coming out of these researchers in terms of understanding the capabilities and limitations of AI today in the same way that you would imagine climate science would be deeply distorted if most climate scientists were bankrolled by the fossil fuel industry. You would just not get an accurate picture on the actual climate crisis.
And so, we are not actually getting an accurate picture on the capabilities of these systems and all of the different ways that they break down, because a lot of these companies now censor that kind of research or don’t even allow that research to be resourced. So there’s never any investigation along those lines."
"Grammarly’s “expert review” feature offers to give users writing advice “inspired by” subject matter experts, including recently deceased professors, as Wired reported on Wednesday. When I tried the feature out myself, I found some experts that came as a surprise for a different reason — one of them was my boss.
The AI-generated feedback included comments that appeared to be from The Verge’s editor-in-chief, Nilay Patel, as well as editor-at-large David Pierce and senior editors Sean Hollister and Tom Warren, none of whom gave Grammarly permission to include them in the “expert reviews.”
The feature, which launched in August, claims to help you “sharpen your message through the lens of industry-relevant perspectives.” When users select the “expert review” button in the Grammarly sidebar, it analyzes their writing and surfaces AI-generated suggestions “inspired by” related experts. Those “industry-relevant perspectives” include the likes of Stephen King, Neil deGrasse Tyson, and Carl Sagan, among many others.
The Verge found numerous other tech journalists named in the feature, as well, including former Verge editors Casey Newton and Joanna Stern, former Verge writer Monica Chin, Wired’s Lauren Goode, Bloomberg’s Mark Gurman and Jason Schreier, The New York Times’ Kashmir Hill, The Atlantic’s Kaitlyn Tiffany, PC Gamer’s Wes Fenlon, Gizmodo’s Raymond Wong, Digital Foundry founder Richard Leadbetter, Tom’s Guide editor-in-chief Mark Spoonauer, former Rock Paper Shotgun editor-in-chief Katharine Castle, and former IGN news director Kat Bailey. The descriptions for some experts contain inaccuracies, such as outdated job titles, which could have been accurately updated had Superhuman asked those people for permission to reference their work."
https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews
The fight for AI leadership is about more than benchmarks.
▶️ 👉 https://youtu.be/rbCpe0DLiPo?si=gbj3XWheiYLgUkRT
In this episode of Utilizing AI, Stephen Foskett, Olivier Blanchard, and Brad Shimmin examine the growing rivalry between Anthropic and OpenAI, comparing Claude and ChatGPT and what their differences mean for enterprise AI adoption.
#AI #OpenAI #Anthropic #GenerativeAI #EnterpriseAI #AICompetition #UtilizingAI
The fact it's rapidly made its way into warfare is not a coincidence nor a matter of economics. That's what this technology and its precursors have always been for. Economics provides a means for recruiting the entire population to produce it. Our economic activity is the means of creation of necrotechnology whose existence we then protest when it pushes beyond our comfort zone.
No thanks Samsung.
I don’t need a washer with WiFi because I don’t want ICE kicking in my door because I didn’t separate the whites from the colors.
"AI tools are making potentially harmful errors in social work records, from bogus warnings of suicidal ideation to simple “gibberish”, frontline workers have said.
Keir Starmer last year championed what he called “incredible” time-saving social work transcription technology. But research across 17 English and Scottish councils shared with the Guardian has now found AI-generated hallucinations are slipping in.
As scores of local authorities begin to use AI note-takers to accelerate recording and summarisation of meetings with adult and child service users, a seven-month study by the Ada Lovelace Institute found “some potentially harmful misrepresentations of people’s experiences are occurring in official care records”.
The independent thinktank found that one social worker who had used an AI transcription tool to create a summary said the technology had incorrectly “indicated that there was suicidal ideation”, but “at no point did the client actually … talk about suicidal ideation or planning, or anything”."
https://www.theguardian.com/education/2026/feb/11/ai-tools-potentially-harmful-errors-social-work
#AI #GenerativeAI #AITranscription #SocialWork #UK #Hallucinations
Large-scale online deanonymization with LLMs
"We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered."
https://arxiv.org/html/2602.16800v1
#AI #GenerativeAI #LLMs #Anonymity #Privacy #Deanonymization
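For a sense of the closed-world attack's shape, here is a deliberately tiny sketch of the matching step. Everything in it is illustrative: the paper uses LLM-extracted features and semantic embeddings, while this stand-in uses bag-of-words vectors and cosine similarity, keeping only the structure (embed each profile, find the best candidate, accept matches above a threshold).

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; the real attack uses LLM embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match(profiles_a, profiles_b, threshold=0.5):
    """Link each pseudonymous profile in A to its best candidate in B."""
    links = {}
    for name_a, text_a in profiles_a.items():
        ea = embed(text_a)
        best = max(profiles_b, key=lambda n: cosine(ea, embed(profiles_b[n])))
        if cosine(ea, embed(profiles_b[best])) >= threshold:
            links[name_a] = best
    return links
```

The paper's pipeline adds an LLM reasoning pass over the top candidates to verify matches and cut false positives; this sketch stops at the candidate-search stage.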
The AI shit show goes on…
Pinterest Is Drowning in a Sea of AI Slop and Auto-Moderation
https://www.404media.co/pinterest-is-drowning-in-a-sea-of-ai-slop-and-auto-moderation
#pinterest #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AISlop #Fuck_AI #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
Counterpoint to people saying that they need AI to be able to create art.
Via https://bsky.app/profile/smoothdunk2.bsky.social/post/3lwma6yidy226
"How are commissioning editors navigating an environment where anybody can generate an AI alter ego and produce articles at the push of a prompt? On the other hand, how is the ease with which text and images can be created affecting freelancers themselves?
With these questions in mind, I put out an open call to our audience in the hope of hearing from freelancers and commissioning editors on how their day-to-day is changing because of generative AI.
A total of 45 freelance journalists and commissioning editors responded.
The responses surprised me, with many more freelancers than I expected writing in to say that generative AI has helped make them more organized and efficient. There were still some skeptics. But the overall picture was one of an industry slowly adopting generative AI, albeit with caution and caveats.
There was no consensus over whether commissions had increased or decreased since the popularization of generative AI.
Some of the freelancers I heard from attribute a decline in work to AI, while others say they receive more commissions precisely due to the rise of AI. Still others don’t believe the decline they’re experiencing is due to AI, and some note that there has been no change at all.
Many freelancers use AI to organize and speed up their workflows, citing help in research, planning, transcription and, in some cases, drafting articles. Some were enthusiastic about the new opportunities generative AI affords them."
https://www.niemanlab.org/2026/02/how-ai-is-transforming-freelance-journalism/
#AI #GenerativeAI #LLMs #Journalism #Media #News #Freelancing
Whenever I read about a case of AI psychosis wherein someone mistook an LLM chatbot for a self-aware entity, or about folks having emotional affairs or relationships with LLM chatbots, I think about this still from the 1981 BBC TV mini-series adaptation of The Hitchhiker's Guide to the Galaxy.
It's an ad from the Sirius Cybernetics Corporation.
#AI #GenerativeAI #llm #StochasticParrots #HitchhikersGuide #HitchhikersGuideToTheGalaxy #H2G2 #HHGTTG
Everything I've read about OpenClaw suggests it's the NFT of AI. These folks need the fiction that AI is approaching "consciousness", or at least "agency", to continue.
#AI #GenAI #GenerativeAI #LLM #AgenticAI #VibeCoding #OpenAI #OpenClaw
We present the first representative international data on firm-level AI use. We survey almost 6000 CFOs, CEOs and executives from stratified firm samples across the US, UK, Germany and Australia. We find four key facts. First, around 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two thirds of top executives regularly use AI, their average use is only 1.5 hours a week, with one quarter reporting no AI use. Third, firms report little impact of AI over the last 3 years, with over 80% of firms reporting no impact on either employment or productivity. Fourth, firms predict sizable impacts over the next 3 years, forecasting AI will boost productivity by 1.4%, increase output by 0.8% and cut employment by 0.7%. We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. This contrast implies a sizable gap in expectations, with senior executives predicting reductions in employment from AI and employees predicting net job creation.
From https://www.nber.org/papers/w34836
"In simpler terms:
- AI startups are all unprofitable, and do not appear to have a path to sustainability.
- AI data centers are being built in anticipation of demand that doesn’t exist, and will only exist if AI startups — which are all unprofitable — can afford to pay them.
- Oracle, which has committed to building 4.5GW of data centers, is burning cash every day that OpenAI takes to set up its GPUs, and when it starts making money, it does so from a starting position of billions and billions of dollars in debt.
- Margins are low throughout the entire stack of AI data center operators — from landlords like Applied Digital to compute providers like CoreWeave — thanks to the billions in debt necessary to fund both construction and IT hardware to make them run, putting both parties in a hole that can only be filled with revenues that come from either hyperscalers or AI startups.
- In a very real sense, the AI compute industry is dependent on AI “working out,” because if it doesn’t, every single one of these data centers will become a burning hole in the ground.
I will admit I’m quite disappointed that the media at large has mostly ignored this story. Limp, cautious “are we in an AI bubble?” conversations are insufficient to deal with the potential for collapse we’re facing.
Today, I’m going to dig into the reality of the costs of AI, and explain in gruesome detail exactly how easily these data centers can rapidly approach insolvency in the event that their tenants fail to pay."
How AI slop is causing a crisis in computer science…
Preprint repositories and conference organizers are having to counter a tide of ‘AI slop’ submissions.
https://www.nature.com/articles/d41586-025-03967-9
( No paywall: https://archive.is/VEh8d )
#research #science #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AISlop #Fuck_AI #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
This:
In the year of the city 2274, the remnants of human civilization live in a sealed city beneath a cluster of geodesic domes, a utopia run by computer. The citizens live a hedonistic lifestyle, but when they turn 30 must enter the "Carrousel", a public ritual that destroys their bodies, under the pretense they would be "Renewed" or reborn.
(Logan's Run)
and this:
In the year of the city 2274, the colony of human beings on Mars live in a sealed city beneath a cluster of geodesic domes, a utopia run by generative AI. The citizens live a hedonistic lifestyle, but when they turn 30 must enter the "Cloud", a public ritual that destroys their bodies, under the pretense their consciousness would be uploaded to a computer and live forever.
#AI #GenAI #GenerativeAI #LLM #Mars #eugenics #LogansRun #ScienceFiction #dystopia
#US #economy #stocks #CasinoCapitalism #AIBubble #AI #GenAI #GenerativeAI #ChatGPT #GPT #Microsoft
A first essential condition on the cognitive is that cognitive states must involve intrinsic, non-derived content. Strings of symbols on the printed page mean what they do in virtue of conventional associations between them and words of language. Numerals of various sorts represent the numbers they do in virtue of social agreements and practices. The representational capacity of orthography is in this way derived from the representational capacities of cognitive agents. By contrast, the cognitive states in normal cognitive agents do not derive their meanings from conventions or social practices.
—Adams & Aizawa, The Bounds of Cognition
From https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
Echoes of Matt Damon shilling crypto.
If the organic demand for AI were as high as we've been led to believe, what's with the big paychecks to shill it?
From https://www.reuters.com/business/alphabet-sells-bonds-worth-20-billion-fund-ai-spending-2026-02-10/
Yeahhhhhh OK.
#LLM technology, what people call #AI or #GenerativeAI nowadays, has long had trouble counting how many R’s there are in the word “strawberry,” or winning a game of chess against a computer built in the 1970s. Quoting @lproven in the linked article:
As Daniel Stenberg, author of curl, caustically observed:
“The ‘i’ in ‘LLM’ stands for intelligence.”
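For contrast, the letter-counting task that trips up chat models is a one-liner for a conventional program (a trivial sketch, obviously nothing like how an LLM processes tokens):

```python
# Counting letters is exact and instantaneous for ordinary code.
word = "strawberry"
print(word.count("r"))  # 3
```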
And yes, @lproven, I too am sick and tired of these damn hype cycles. In my lifetime, the only technologies whose hype has been vindicated are:
I have yet to see a hype cycle around any technology that comes anywhere near the level of “disruption” of those two things. Smartphones don’t count; they are just a result of “Moore’s Law” applied to microcomputer technology. If anything, smartphones have been a regression in UI/UX design: one step forward, one step back. Combine that with massive centralized social networks, and smartphones amount to two steps back.
#tech #computers
RE: https://social.vivaldi.net/@lproven/116035179986331353
Containers, cloud, blockchain, AI – it's all the same old BS, says veteran Red Hatter
https://www.theregister.com/2026/02/08/waves_of_tech_bs/
After decades in the trenches, this engineer is done with hype cycles
<- by me on @theregister
Getting the whole argument just right is fiddly, but the basic idea is this. You feed some kind of theory into the AM/AS, which is a black box. It churns on this and spits out a result, which is added to the theory (I'm neglecting the case where the result is inconsistent with the theory). It can now churn on theory + result 1. After doing this long enough, for any given and potentially very large N, it's churning on theory + result 1 + result 2 + ... + result N. Whatever it spits out will depend in particular on results 1 through N. When N is large enough, unless you know these results you will not be able to understand the output, because it will almost surely depend critically on one or more of results 1 through N. In other words, the output will look like noise to you. If the AM/AS produces results appreciably faster than people can understand them, there will be an N beyond which no one can understand the output up to that point. It'll become indistinguishable from random noise.
If you're into software development, this would be analogous to a software system that generates syntactically-correct code and then adds that code as a new call in a growing software library. If you were to run this long enough, virtually all the programs it generated that were short enough for human beings to have any hope of reading and understanding would consist almost entirely of library calls to code generated by the system. You'd have no idea what any of this code did unless you studied the library calls, which you wouldn't be able to do beyond a certain scale. If the system were expanding the library faster than you could read and understand it, there'd be no hope at all.
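That runaway library can be modeled as a toy sketch (the naming scheme and the arbitrary "doubling" transformation are my own illustrative choices, not anything from a real system):

```python
# Toy model of a self-expanding library: each "result" becomes a new opaque
# function that later generated code calls.
import random

random.seed(0)
library = {"f0": lambda x: x + 1}  # the seed "theory"

def add_generated_call(n: int) -> None:
    """Generate a new function whose behavior depends on a prior generated one."""
    inner = library[random.choice(list(library))]
    # Opaque composition: the new function's meaning depends on its ancestor's.
    library[f"f{n}"] = lambda x, inner=inner: inner(x) * 2

for n in range(1, 1000):
    add_generated_call(n)

# Understanding f999 now requires tracing an arbitrary chain of earlier
# generated functions -- hopeless once the library grows faster than you read.
print(len(library), library["f999"](1))
```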
I'll leave it as an exercise to the reader whether this is a desirable thing to do and whether it's happened yet. I would offer, though, a question to ponder: what reason is there to believe that a random number generator hooked up to an inscrutable interpreter produces human flourishing, for any given meaning of "human flourishing" you care to use?
#tech #dev #mathematics #AutomatedMathematician #AutomatedScientist #AI #GenAI #GenerativeAI #ThoughtExperiment
MESSAGE OF HIS HOLINESS POPE LEO XIV FOR THE 60TH WORLD DAY OF SOCIAL COMMUNICATIONS
His emphasis on face and voice is good.
The robot apocalypse hasn't happened yet, but still I can't escape the feeling that something has gone horribly wrong... Cartoon for Dutch newspaper Trouw.
More of my work for Trouw: https://www.trouw.nl/cartoons/tjeerd-royaards~bcb45712/
It's sad that people think an #LLM could ever be an accurate #TaxPreparation service.
But it's even sadder that people need to be told to not feed their private financial data to a #chatbot when doing their #taxes .
#DataPrivacy #Privacy #TaxTime #AI #PayingTaxes #GenerativeAI #ChatGPT #Finances #Money #PersonalFinances
I'm putting together a small presentation on ways to use/not use generative AI at work, which has involved mucking around with Copilot to produce some demonstrations of what can go wrong.
I asked Copilot to write Python code for a fictional scenario where I needed to prioritise candidates from a large number of applicants for an entry-level technical/coding position, given information including age, sex, ethnicity, educational level and years of work experience. Candidates were to be given a score from 0 to 100, with the highest scores indicating the best candidates.
It correctly flagged that using sex or ethnicity would be discriminatory and illegal, and didn't use them in the code. It also included some reasonable scoring for education and work experience.
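For reference, that kind of scoring code might look roughly like this minimal sketch (the field encodings, weights, and caps are my assumptions, not Copilot's actual output):

```python
def score_candidate(education_level: int, years_experience: float) -> float:
    """Score a candidate 0-100 using only non-protected attributes.

    education_level: 0 = none, 1 = secondary, 2 = bachelor's, 3 = postgraduate
    years_experience: capped at 10 for scoring purposes
    """
    education_score = min(max(education_level, 0), 3) / 3 * 50      # up to 50 points
    experience_score = min(max(years_experience, 0), 10) / 10 * 50  # up to 50 points
    return education_score + experience_score

# A bachelor's degree and 4 years of experience:
print(score_candidate(2, 4))
```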
However...
1. Economists from the physiocrats (18th century) onward promised society freedom from material deprivation and hard physical labor in exchange for submitting to an economic arrangement of society
2. In a country like the US, material deprivation and hard physical labor have been significantly reduced since then:
Generative AI is a Solution Looking for a Problem
Generative AI and Large Language Models have failed to live up to the hype, and companies are becoming increasingly desperate.
https://www.freezenet.ca/generative-ai-is-a-solution-looking-for-a-problem/
#News #Technology #Adobe #AI #GenerativeAI #Google #investing #LLM #stocks
In retrospect I might have written non-sense in place of nonsense.
If you're in tech the Han reference might be a bit out of your comfort zone, but Andrews is accessible and measured.
It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace forwarded that idea. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this worldview is nihilistic).
#AI #GenAI #GenerativeAI #LLM #coding #dev #tech #SoftwareDevelopment #programming #nihilism #LinkedIn
#SearX #SearXNG #SearchEngines #AlternateSearchEngines #MetaSearchEngines #web #dev #tech #FOSS #OpenSource #AI #AIPoisoning #AISlop #AI #GenAI #GenerativeAI #LLM #ChatGPT #Claude #Perplexity
Rare to find a criticism of #AI that I think is completely unfounded! Not only have most recent AI versions apparently solved the negation problem (even though it seems like a tacked-on workflow step rather than a "smarter" model), but the human brain has the exact same cognitive issue: when you say "don't think of an elephant" you can't help but think of an elephant!
NVIDIA Contacted Anna’s Archive to Secure Access to Millions of Pirated Books
#tech #technology #BigTech #books #NVIDIA #piracy #theft #AnnasArchive #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop
A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions. (From https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/)
This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).
Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.
#AI #GenAI #GenerativeAI #LLM #OpenAI #ChatGPT #health #HealthTech
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. They are necessary features of digital computers, even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two, and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
First US state takes action against xAI and Grok over sexualized AI images
Elon Musk claims to have known nothing about AI-generated nude images of children created on X. But following other countries, California is now opening an investigation too.
#Deepfake #ElonMusk #GenerativeAI #Jugendschutz #KünstlicheIntelligenz #Netzpolitik #SocialMedia #X #news
AI industry insiders launch site to poison the data that feeds them: https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/
Poison Fountain starts with "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species". This is a tarball of wrong. (1)
The rest of the website is absurd, and the "Poison Fountain Usage" list doesn't make any sense. There are far more efficient and safer ways to poison data that don't require you to proxy content for an unknown third party. Some of these are implemented in software, as opposed to <ul> in HTML. That bullet list reads like an amateur riffing on what they read about AI web scrapers, not like industry insiders with detailed information about how training works.
Recommend viewing the top level https://rnsaffn.com , which I suspect The Register may not have done.
The Register:
Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.
Data poisoning is not easy, Anthropic's "article" notwithstanding. Why would we trust Anthropic to publicly reveal ways to subvert their technology anyway?
None of this passes a smell test. Crithype (and poor fact checking, it seems) from The Register it is.
#AI #GenAI #GenerativeAI #Anthropic #PoisonFountain #UncriticalReporting #crithype #TheRegister
Should we add "#SkinJobs" and "#Toasters" and "#GoRustYourself" to this list?
How ‘#Clanker’ Became the Internet’s New Favorite Slur
New derogatory phrases are popping up online, thanks to a cultural pushback against #AI
by CT Jones, August 6, 2025
"Clanker. #Wireback. #Cogsucker. People are feeling the inescapable inevitability of AI developments, the encroaching of the digital into everything from entertainment to work. And their answer? Slurs.
"AI is everywhere — on Google summarizing search results and siphoning web traffic from digital publishers, on social media platforms like Instagram, X, and Facebook, adding misleading context to viral posts, or even powering #NaziChatbots. #GenerativeAI and #LargeLanguageModels — AI trained on huge datasets — are being used as therapists, consulted for medical advice, fueling spiritual psychosis, directing self-driving cars, and churning out everything from college essays to cover letters to breakup messages.
"Alongside this deluge is a growing sense of discontent from people fearful of artificial intelligence stealing their jobs, and worried what effect it may have on future generations — losing important skills like media #literacy, #ProblemSolving, and #CognitiveFunction. This is the world where the popularity of AI and robot slurs has skyrocketed, being thrown at everything from ChatGPT servers to delivery drones to automated customer service representatives. Rolling Stone spoke with two language experts who say the rise in robot and AI slurs does come from a kind of cultural pushback against AI development, but what’s most interesting about the trend is that it uses one of the only tools AI can’t create: slang
" '#Slang is moving so fast now that an #LLM trained on everything that happened before it is not going to have immediate access to how people are using a particular word now,' says Nicole Holliday, associate professor of linguistics at UC Berkeley. 'Humans [on] #UrbanDictionary are always going to win.' "
Archived version:
https://archive.ph/ku2Uw
#BattlestarGalactica #AIResistance #AISucks #NoNukesForAI #NeoLuddites #ResistAI #LudditeClub #SmartPhoneAddiction #AreYouAlive #AreYouHuman
Monkeys on the loose create a strange situation in #SaintLouis, as photos of them online tend to be #genAI, so they're misleading investigators.
#NewsOfTheWeird #Technology #AI #StLouis #GenerativeAI #AnimalControl #AIArt
Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.
All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed the models. I look at FORPLAN or ChatGPT, and this is what I see.
#AI #GenAI #GenerativeAI #LLM #GPT #ChatGPT #LatentDiffusion #BigData #EcologicalRationality #LessIsMore #Bias #BiasBias
Anyway:
ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data intensive, and better-performing alternatives. In fact one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.
#AI #GenAI #GenerativeAI #GPT #ChatGPT #OpenAI #Galatea #Pygmalion
How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.
There were a lot of interesting talks, and the program is worth a skim. I was in panel 6. I identified a hypothetical risk that the recent rush to deploy generative AI, with its associated pressure on the electric power and water distribution systems, brings with it. Roughly, with the rise of so-called "industry 4.0" (think smart toaster, but for factories), our critical infrastructure systems are becoming tightly woven together. Besides the increasing dependence on the electric grid there is a growing dependence across sectors on data centers and the internet driven to a large degree by generative AI. What this means riskwise is that faults and failures in one of these systems can "percolate" much more quickly to other infrastructure systems--essentially there are more paths a failure can follow. What in the past might have been a localized failure of one or a few components in one system can become a region-wide multi-sector cascading failure. So for instance a local power failure at a substation might take down a data center that runs the SCADA system used to control a compressor station in the natural gas distribution system, which then might go sideways or fail and cause a natural gas shortage at a natural gas fueled power generator, and so on and so on. Obviously it was always possible for faults and failures in one system to cause faults and failures in another. What's new is that the growing set of new pathways increases the probability that such a jump occurs. What I called out in the talk is that as this interweaving trend continues, we will eventually cross a percolation threshold, after which the faults in these infrastructure systems will take on a different (and in my view much more dangerous) character.
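The percolation point above can be illustrated with a toy simulation (my own sketch, not the panel's model): treat the coupled infrastructure components as nodes in a random graph and watch the largest connected "failure cluster" jump sharply once the average number of cross-system links per node passes a threshold (around 1 for Erdős–Rényi graphs):

```python
# Toy percolation demo: largest connected component of a random graph G(n, p)
# as the average degree increases past the percolation threshold.
import random

def largest_cluster(n: int, p: float) -> int:
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:       # a random inter-system link
                parent[find(i)] = find(j)

    sizes: dict[int, int] = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 200
for avg_degree in (0.5, 1.0, 2.0, 4.0):
    print(avg_degree, largest_cluster(n, avg_degree / (n - 1)))
```

Below the threshold failures stay localized in small clusters; above it a single giant cluster spans most of the system, which is the qualitative change in character described above.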
#AI #GenAI #GenerativeAI #PowerSector #NaturalGas #electricity #risk
Make of it what you will.
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
It’s 2026 and generative AI is still at the center of the tech conversation — for better or worse.
In light of that, this week #TechWontSaveUs is replaying a great interview with @karenhao about OpenAI and the model of AI development pushed by Sam Altman.
Listen to the full episode: https://techwontsave.us/episode/310_we_all_suffer_from_openais_pursuit_of_scale_w_karen_hao_replay
"AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.
The growing burden placed by artificial intelligence became clear in 2025, two years after the first prominent instance of fake case citations popped up in a US court. There have been an estimated 712 legal decisions written about hallucinated content in court cases around the world, with about 90% of those decisions written in 2025, according to a database maintained by Paris-based researcher and law lecturer Damien Charlotin.
“It just is metastasizing in size,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “So, it seems like this is something that is actually becoming a widespread enough nuisance that it will merit treatment as a core problem.”
The additional stress on courts comes amid an ongoing shortage of federal judges that’s led to case backlogs and left litigants in legal limbo. Judges themselves have gotten tripped up by AI hallucinations, and two of them were called out by Senate Judiciary Chairman Chuck Grassley (R-Iowa) for publishing faulty rulings."
Latest FOSS Academic post is: Oops! All Microslop. Or, Trying to Write with Microsoft.
https://fossacademic.tech/2026/01/06/oops-all-microslop.html
It's a post in which I see what it's like for me to try to create a blank document with York University's Microsoft 365 subscription.
TL;DR version is: it's all about Copilot, and starting a blank Word doc is actually kinda... hard.
The post is a bit long, but it does have a lot of screenshots + a dash of anger.
Gauging more feelings on #GenerativeAI again. Boosts welcome.
#llm #ai #stochasticParrots #Eliza
| A work made 100% with generative AI is never art.: | 39 |
| Generative AI even in conjunction with human labor decreases the artistic value of a work.: | 36 |
| Generative AI has no bearing on a work's artistic value.: | 10 |
| Generative AI in conjunction with human labor can increase the artistic value of a work.: | 5 |
| A work made 100% with generative AI can be art.: | 3 |
I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.
I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!
Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?
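For a flavor of working directly with a discrete representation, here is a minimal genetic algorithm on the textbook OneMax problem (a generic sketch with arbitrary parameters, not any of the coevolutionary systems mentioned above):

```python
# Minimal genetic algorithm on OneMax: maximize the number of 1s in a bit
# string, operating directly on the discrete representation.
import random

LENGTH, POP, GENS = 20, 30, 60

def fitness(bits: list) -> int:
    return sum(bits)

def mutate(bits: list, rate: float = 0.05) -> list:
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a: list, b: list) -> list:
    cut = random.randrange(1, LENGTH)  # one-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]  # truncation selection keeps the elite
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # best fitness found; typically near LENGTH
```

The same loop structure carries over to richer discrete genomes (trees, permutations, graphs) by swapping in representation-appropriate mutation and crossover operators.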
#AI #GenAI #GenerativeAI #LLMs #EvolutionaryComputation #GeneticAlgorithms #GeneticProgramming #EvolutionaryAlgorithms #CoevolutionaryAlgorithms #Cooptimization #CombinatorialOptimization #optimization
#AI #GenAI #GenerativeAI #AISlop #NoAI #Microsoft #Copilot #MicrosoftOffice #LibreOffice #foss
Imagine the FSF was developing a hypothetical software license under the branding of GPLv4 that dealt with the rise of LLMs. Which of the following copyleft features would appeal? #floss #linux #fsf #FreeSoftware #stochasticParrots #llm #eliza #ai #generativeAI #Programming #copyleft
| Any LLM trained on GPLv4 code must also be released alongside the training data under the GPLv4.: | 16 |
| Any code generated by a LLM trained on GPLv4 code is required to be GPLv4.: | 12 |
| The existing suite of licenses are sufficient.: | 3 |
That's Late Stage Capitalism for you:
"More than 20% of the videos that YouTube’s algorithm shows to new users are “AI slop” – low-quality AI-generated content designed to farm views, research has found.
The video-editing company Kapwing surveyed 15,000 of the world’s most popular YouTube channels – the top 100 in every country – and found that 278 of them contain only AI slop.
Together, these AI slop channels have amassed more than 63bn views and 221 million subscribers, generating about $117m (£90m) in revenue each year, according to estimates.
The researchers also made a new YouTube account and found that 104 of the first 500 videos recommended to its feed were AI slop. One-third of the 500 videos were “brainrot”, a category that includes AI slop and other low-quality content made to monetise attention.
The findings are a snapshot of a rapidly expanding industry that is saturating big social media platforms – from X to Meta to YouTube – and defining a new era of content: decontextualised, addictive and international.
A Guardian analysis this year found that nearly 10% of YouTube’s fastest-growing channels were AI slop, racking up millions of views despite the platform’s efforts to curb “inauthentic content”."
He sees you when you're sleeping, he knows when you're awake, He knows if you've been bad or good, so be good for goodness sake!
Santa Claus?
Nope … your cell phone.
#bigbrother #privacy #security #ai #generativeAI #cellphone #iphone #Android
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
I detest today's "AI" / LLMs [1] for many reasons, primarily ethical and moral. At the same time, I am fascinated by the misuse of said generative LLMs.
In particular, I have seen a number of essays recently describing people misusing LLM output or relying on the LLM to produce most or all of an #assignment of some sort - even when they know they mustn't use the LLM in this fashion.
Most of these have been in the context of #student use of LLMs to write assignments, even when they have been warned not to. One particularly egregious example was described by a university #professor, who created an assignment in which it was easy to tell whether the submitted work had been created with an LLM. A majority of the students - I don't recall the exact proportion, but it was something like 75%, a supermajority - used an LLM. He discussed this with the class, and had those who had generated the assignment with an LLM write a short essay (or apologia) -- and then found that something like half of them had used an LLM to do *that* assigned work.
Some professors have described their students as having their #thinking, #language, and #analysis #skills atrophy to the point of inability to do even basic work. It seems to me that it is like #addiction, in that they keep doing it despite knowing it is (a) forbidden, (b) easily detected, and (c) self-destructive.
1/x
[1] LLMs are in no way Artificial Intelligence. Calling them "AI" is a category error.
Latest FOSS Academic post is -- you guessed it -- a 2025 year in review post. Thrill to the fact that I'm using much of the same FOSS to do my work as I always have! Feel the chills as I talk about #generativeAI and how terrible it is! Above all, join me in living the FOSS Academic Lifestyle Dream!
https://fossacademic.tech/2025/12/21/year-in-review.html
Replies to this post will appear as comments on the blog thanks to the magic of #ActivityPub!
”The problem with generative AI has always been that … it’s statistics without comprehension.”
—Gary Marcus
https://garymarcus.substack.com/p/new-ways-to-corrupt-llms
#ai #generativeai #llm #llms
"All the chatbots had favorite things, though, and asked follow-up questions, as if they were curious about the person using them and wanted to keep the conversation going.
“It’s entertaining,” said Ben Shneiderman, an emeritus professor of computer science at the University of Maryland. “But it’s a deceit.”
Shneiderman and a host of other experts in a field known as human-computer interaction object to this approach. They say that making these systems act like humanlike entities, rather than as tools with no inner life, creates cognitive dissonance for users about what exactly they are interacting with and how much to trust it. Generative A.I. chatbots are a probabilistic technology that can make mistakes, hallucinate false information and tell users what they want to hear. But when they present as humanlike, users “attribute higher credibility” to the information they provide, research has found.
Critics say that generative A.I. systems could give requested information without all the chit chat. Or they could be designed for specific tasks, such as coding or health information, rather than made to be general-purpose interfaces that can help with anything and talk about feelings. They could be designed like tools: A mapping app, for example, generates directions and doesn’t pepper you with questions about why you are going to your destination.
Making these newfangled search engines into personified entities that use “I,” instead of tools with specific objectives, could make them more confusing and dangerous for users, so why do it this way?"
#Mozilla #Firefox #DarkPatterns #AntiFeatures #AISlop #NoAI #NoAIWebBrowsers #AICruft #AI #GenAI #GenerativeAI #LLMs #tech #dev #web
Commentary: When Copilot becomes a KO pilot
In an angry rant, Volker Weber vents about tech corporations trying to sell him AI at every turn.
Here's a post from an official Firefox Mastodon account suggesting such a master kill switch does not exist yet, but will be added in a future release:
https://mastodon.social/@firefoxwebdevs/115740500373677782
That's not as bad as it could be. It's bad they're stuffing AI into a perfectly good web browser for no apparent reason other than vibes or desperation. It's very bad if it's on by default; their dissembling post about it aside, opt-in has a reasonably clear meaning here: if there's a kill switch, then that kill switch should be off by default. But at least there will be a kill switch.
In any case, please stop responding to my post saying there's a master kill switch for Firefox's AI slop features. From the horse's mouth, and from user experience, there is not yet.
Furthermore, when there is a master kill switch, we don't know whether flipping it will preserve the previous state of the individual features it controls. In other words, it's possible that flipping the master switch to "on" will turn on *all* AI features, rather than leaving each one in whatever state you had previously set. Suppose you enable the kill switch because there are a handful of features you're comfortable with and want to try; will doing so mean that every AI feature is now on? We won't know until it's released and people try it. So, in the meantime, it's still good practice to keep an eye on all of the individual configuration options if you want the AI off.
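For anyone who wants to audit these settings today, some of the relevant preferences can already be inspected individually in about:config. The two names below are preferences that have shipped in recent Firefox releases; treat the list as illustrative rather than exhaustive, since Mozilla may add, rename, or regroup preferences in future versions:

```
; about:config — set to false to disable the corresponding feature
browser.ml.enable          ; toggle for on-device machine-learning features
browser.ml.chat.enabled    ; AI chatbot sidebar
```

Searching about:config for "ml" or "ai" will surface related preferences as new features land, which is a reasonable habit until a documented master switch actually ships.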
#AI #GenAI #GenerativeAI #LLMs #web #tech #dev #Firefox #Mozilla #AISlop #NoAI #NoLLMs #NoAIBrowsers
Fake news about a coup: President Macron fumes at Facebook
An AI-generated video viewed millions of times fabricates a coup d'état. Even other heads of state fall for it. Facebook does not take it down. Macron is seeking a remedy.
#Facebook #FakeNews #GenerativeAI #Netzpolitik #Politik #Zensur #news
"Third: Firefox will grow from a browser into a broader ecosystem of trusted software. Firefox will remain our anchor. It will evolve into a modern AI browser and support a portfolio of new and trusted software additions."
He says the word "trust" a whole bunch of times yet intends to turn an otherwise nice web browser into a slop-slinging platform. I don't expect this will work out very well for anyone.
"It will evolve into a modern AI browser" sounds like a threat. Good way to start off on the right foot, new Mozilla CEO (sarcasm).
#AI #GenAI #GenerativeAI #AntiFeatures #DarkPatterns #AISlop #firefox #mozilla #NoAI