buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
@nielsa no, that's not what I'm telling you.
I prefer to believe that most people will be thoughtful.
"… a huge number of bugs. I have so many bugs in the Linux kernel that I can't report because I haven't validated them yet. I'm not going to make some open source developer validate bugs that I haven't checked yet. I'm not going to send them potential slop … I now have … several hundred crashes that they haven't seen because I haven't had time to check them. We need to find a way to fix this …"
– Nicholas Carlini
Nicholas Carlini - Black-hat LLMs | [un]prompted 2026
<https://www.youtube.com/watch?v=1sd26pWhfmg> (3rd March)
― essential viewing for anyone with an interest in cybersecurity or infosec.
@dch thanks for the encouragement.
A few more links in the comment that's pinned under <https://redd.it/1sapr8a>, but Carlini's half-hour presentation is a must.
@maxleibman We can't put it back in the bottle, but we can flush it down the toilet.
FreeBSD's position on the use of AI-generated code?
<https://www.reddit.com/r/freebsd/comments/1sbzf3q/freebsds_position_on_the_use_of_aigenerated_code/> – asked a few minutes ago, currently pinned (a community highlight).
@dch @allanjude I made a pinned comment with reference to two of your recent posts. If you can think of better alternative links, let me know. Thanks.
cc @stefano
Jeez. This Claude code leak. Sloppy sloppy slop.
> https://cyberpunk.gay/notes/akjr3ydangf7000m
The fact that this unbelievably shitty slop leaked is basically a crisis for every single Claude slopper (major global company), but one can assume all other GPT derivative comparable products are exactly this. Sheesh, and you wonder why they suck. Jeez Louise. #ai #llms #cybersecurity #programming #leak #sourceCode #zeroDay
#ElonMusk has made a particularly bold demand of his #WallStreet advisers ahead of the #IPO of #SpaceX.
#Musk is requiring banks, law firms, auditors & other advisers working on the IPO to buy subscriptions to #Grok, his #AI #chatbot, which is part of SpaceX, acc/to 4 people with knowledge….
Some of the banks have agreed to spend tens of millions on the chatbot, & they have already started integrating Grok into their #IT systems….
#tech #business #law #SEC #antitrust
https://www.nytimes.com/2026/04/03/business/spacex-ipo-grok-elon-musk.html?smid=nytcore-ios-share
I'm currently switching to #Linux and have set up my own home server. Now I can finally catalogue my photos in a private cloud with #immich. But one thing I really like, and hadn't anticipated, is the ability to use a local #AI model for image search. It's genuinely fun to type in text and find pictures I had forgotten (or repressed).
And now I'm wondering: wouldn't something like this work for #Email too? I searched for #Thunderbird plugins and found some with #ollama support, but only for writing, translating, and summarizing individual emails. At that point I might as well read them all myself... I want to be able to type "meeting with XY about topic ABC" and get hits that don't use those exact words, all with full control over my digital #Privatsphäre (privacy). That would actually be helpful.
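The search described above is usually built by embedding each email and the query as vectors and ranking by cosine similarity. A real setup would get the vectors from a local model (for example via ollama) so that matches are semantic rather than word-for-word; the toy "embedding" below is just a bag-of-words stand-in to show the ranking mechanics, and all names and data are invented for illustration.

```python
# Toy sketch of embedding-based email search: embed query and emails,
# rank by cosine similarity. A real system would replace embed() with
# calls to a local embedding model; this bag-of-words version only
# matches shared words, unlike true semantic embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

emails = [
    "meeting with the ABC team about the budget",
    "lunch plans for friday",
]
query = "meeting about budget"
q = embed(query)
ranked = sorted(emails, key=lambda e: cosine(q, embed(e)), reverse=True)
print(ranked[0])  # the budget-meeting email ranks first
```

Swapping in real embeddings from a local model keeps the ranking code identical; only `embed()` changes.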
...Those delays, it seems, are due to a key bottleneck: electrical components manufactured abroad. Batteries, electrical transformers, and circuit breakers together make up less than 10 percent of the cost of constructing a data center, but as Andrew Likens, energy and infrastructure lead at Crusoe, told Bloomberg, it's impossible to build new data centers without them...
Poetry in architecture.
https://futurism.com/science-energy/data-centers-construction-supply
I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.
#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #FuckAI #Fuck_AI #enshittification #microslop #microsoft #copilot #meta #google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either.
Any monkey with a keyboard can write code. Writing code has never been hard. People have been churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it.
What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.
Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.
So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.
So it should come as no surprise that one of the hardest things in development is understanding someone else’s code let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.
It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards.
They might as well call vibe coding duct-tape-driven development or technical debt as a service.
🤷♂️
The bozos renamed Office 365 to "Microsoft 365 Copilot" and force-fed the whole thing to all of its users, and now they're touting widespread AI adoption. 😂 🤡
RE: https://mas.to/@brianvastag/116337992229203151
What's it called when you falsely elevate a stock price to personally profit?
What's it called when a group of companies or people join in together to do the same?
🤔
What's it called when a majority of the economy is wrapped up with this plan?
Good news!
Data centers are being cancelled.
https://futurism.com/science-energy/data-centers-construction-supply
The Guardian | Google to tap into gas plant for AI datacenter in sharp turn from climate goals by Dara Kerr
Texas power plant would emit 4.5m tons of carbon dioxide per year, more than that of the entire city of San Francisco
Google has struck a partnership for a natural gas power plant that could provide energy for one of its datacenters in Texas, unearthed by new research and confirmed by the company. The move is part of an ongoing about-face for the tech giant, which once pledged to be carbon neutral by 2030 and has long been seen as a pioneer in clean energy.
The gas power plant is slated to be built in Armstrong county, a sparsely populated area in the Texas panhandle. According to a report by the research organization Cleanview, the project is being led by Crusoe Energy, which partnered with Google to develop the datacenter campus known as “Goodnight”, named after a nearby town.
Continue reading...
Read more: https://www.theguardian.com/technology/2026/apr/02/google-ai-datacenter
#ai #artificialintelligence #climatecrisis #google #datacenter #carbondioxide
Is #AI Good for #Democracy?
"All of us worried about the AI arms race and committed to preserving the interests of our communities and our democracies should think in both these terms: how to use the tech to our own advantage, and how to resist the concentration of power AI is being exploited to create."
https://www.schneier.com/blog/archives/2026/02/is-ai-good-for-democracy.html
My latest column, on the false promise of LLMs and generative AI. I’m not afraid of machine intelligence. I’m afraid of human stupidity. We are taking what could be a useful tool, and abusing it to delude each other and ourselves. https://albertaviews.ca/ai-fever-dream/ #ableg #abpoli #cdnpoli #AI #yeg #Edmonton
OpenClaw has launched an official China mirror for ClawHub, providing a localized access point for users in China with improved stability and access speed. The mirror is built on ClawHub infrastructure with technical support from ByteDance. https://technode.com/2026/04/02/openclaw-launches-official-china-mirror-with-infrastructure-support-from-bytedance/ #China #Tech #AI #OpenClaw
Largest Dutch pension fund cuts ties with controversial tech firm Palantir
ABP, the Netherlands’ largest pension fund, has withdrawn from the controversial AI company Palantir, the Financieele Dagblad reported. Palantir is known to provide services to the American immigration service, ICE, and the Israeli army.
Six months ago, ABP still held €825 million in investments in Palantir. The pension fund for civil servants in the Netherlands has sold those shares, according to FD.
Palantir is globally renowned for its advanced AI data analysis software, which can combine colossal amounts of seemingly unrelated data such as online communications, DNA, fingerprints, financial transactions, travel records, contacts, and surveillance footage. The software is used by hundreds of intelligence and investigative services worldwide to track down all kinds of suspects. Amnesty International has warned multiple times that the use of Palantir's software violates human rights.
🧵
#palantir #ABP #ICE #USA #uspol #AI #surveillance
https://nltimes.nl/2026/04/02/largest-dutch-pension-fund-cuts-ties-controversial-tech-firm-palantir
Shaw & Nave's "cognitive surrender" paper is an unpublished preprint. No peer review. No journal. Posted on SSRN in January. Minimal (none I could find) academic citations in three months.
What it does have: a Wharton podcast, Futurism coverage, a dozen Substacks, and a term that went viral.
A paper about people uncritically adopting AI outputs goes viral because people uncritically adopted its framing.
That's the whole story.
They gave 1,372 (good sample) people logic puzzles from the Cognitive Reflection Test, questions specifically designed so most people give the wrong answer on instinct (!). Then they embedded ChatGPT, rigged to sometimes give confident wrong answers. The wrong answers were the
*same intuitive errors the test was built to trigger*.
Calling this "System 3," a fundamental revision of Kahneman's cognitive architecture, doesn't make it so. The #AI didn't override anyone's deliberation. It confirmed a bias the participants already had, on a test engineered to produce exactly that bias. That's automation bias.
We've had a name for it since 1996.
Not as sexy as "cognitive surrender" though.
👉Trust in AI predicts following AI. Higher IQ predicts overriding bad answers. Tautologies as moderation analyses.
👉20 cents per item + feedback nearly halved the effect. Some deep cognitive restructuring.
Money. PEOPLE WANT MONEY FOR SMARTS.
👉 The headline effect size is inflated by design: the AI-Faulty condition pushes people toward the answer they were already going to give (super dodgy)
👉 No human-advisor control. Can't distinguish "people defer to AI" from "people defer to any confident source." The entire System 3 framing hangs on a comparison they didn't make.
The finding, people follow confident bad AI advice, is real. But that's automation bias lit, not a new cognitive architecture.
Computer says NO!
"Cognitive surrender" is a marketing term.
"System 3" is a brand extension.
Enormous vibes-to-citation ratio.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
TLDR: People boost this uncited preprint because of a catchy title that retreads a 29-year-old "discovery" that folks trust machines.
"Meta’s Rogue #AI Agent Exposed Internal Data. Enterprise AI Security Has a Gap Problem.
An AI agent inside a well-resourced enterprise, operating in a controlled internal environment, caused a serious data exposure incident with no human directing it to do so."
https://api.cyfluencer.com/s/meta-s-rogue-ai-agent-exposed-internal-data-enterprise-ai-security-has-a-gap-problem-26380
The security vendors racing to build proprietary AI models are solving the wrong problem.
Models aren't the bottleneck. The platform is.
Six months from now, the models available to your team will be dramatically better than what exists today. The question is whether your security infrastructure can actually use them.
LimaCharlie doesn't compete in the model race. We built the platform so any model can access 100% of platform capabilities through the same APIs your analysts use.
That parity is what makes the next layer possible: automated agents, and eventually deployable AI SOCs that run as a collection of agents across thousands of tenants at once.
Register for our next AI SecOps Workshop, where we'll use Claude Code with LimaCharlie to deploy agents, build detections, and catch issues before they become incidents: https://limacharlie.wistia.com/live/events/oz9xzr9fk3?utm_campaign=ai+workshop+cloudops+4+2026&utm_source=mastodon&utm_medium=social
We adapted security governance to SaaS adoption and DevOps velocity. Vibe coding by non-developers is the next comparable shift, and those transitions give us a starting approach, even though the timeline is shorter.
Every time I've actually fact-checked the "WAAAH DATACENTRES" numbers, and I've done this three times now, here's what I found:
1. The numbers include estimates of future loads
2. The numbers purposefully conflate #Ai #datacenters with existing datacentres, "the cloud": your #AWS, Facebook, Dropbox, Netflix, iCloud, etc., and yes, your #Mastodon servers. At last check the split was about 50/50, but it's hard to get real data.
3. The water consumption is very poorly presented often conflating water use with water destruction
4. The numbers often pick a comparison that sounds terrible but in the global scale is fuck all.
Like "AI uses as much water as the city of New York!!!!!!!!!!"
Having said that, the trend of #broligarchs just slapping down methane emergency generators on the perimeter and running them 24/7 is criminal.
TLDR; The "WAAAH DATACENTRES" "journalism" really depends on the personal bias of the author
Here's a fun post for pro- and anti-AI infosec people alike - guess who is going to have to "fix" AI? If you're thinking "not me" well, think again.
https://www.markloveless.net/blog/2026/4/2/the-uncomfortable-effects-of-ai
RE: https://infosec.exchange/@malick/116335760238491682
AI Just Hacked One Of The World's Most Secure Operating Systems – Forbes
Also <https://gnu.gl/@wtfismyip/116325256164232617> @wtfismyip
#FreeBSD #security #AI #Claude
#Anthropics #Claude built a root exploit for #FreeBSD completely autonomously. In exactly four hours. We're not talking about a simple "write me a #Python script" prompt here, but about real, iterative #hacking. The model found the #Schwachstelle (vulnerability) in the network login, spun up its own lab, smartly split the #Payload across multiple packets, and relentlessly debugged its own code when the first attempt crashed.
The full report at
In the US-Iran war, data centers are a target. Iran has struck several Amazon facilities, including another one just this week in Bahrain.
On #TechWontSaveUs, I spoke with Sam Biddle of The Intercept to dig into why data centers are a target and how the military uses AI.
Listen to the full episode: https://techwontsave.us/episode/322_why_iran_is_attacking_data_centers_w_sam_biddle
#tech #iranwar #datacenters #amazon #iran #ai #artificialintelligence
I’m honored to be a panelist at CXO Inc.’s #CIOMeet event today. We’re discussing the evolving role of the CIO today and strategies for #AI and #cybersecurity.
#cio #chicago #technology #leadership #educator #mentorship #collaboration #ciomeet #futuristcio
Group Pushing #AgeVerification Requirements For #AI Sneakily Backed By #OpenAI
OpenAI hasn't been shy about spending money #lobbying for favorable laws and regulations. But when it comes to its involvement with #childSafety advocacy groups, the company has apparently decided it's best to stay in the shadows -- even if it means hiding from the people actually pushing for #policy changes.
The problem isn't that AI can clone open-source software at the click of a button. The problem is that we don't have an answer to the copyright, maintenance, and social questions that come with this power.
#AI #opensource
https://www.techspot.com/news/111904-ai-can-clone-open-source-software-minutes-problem.html
Four fuses. One week. All burning.
OpenAI raised $122B and still won't turn a profit until 2030. Agents are deleting inboxes and want root access to your machine. Your RAM is three times what it was because hyperscalers bought the supply chain out from under you. And nobody in the industry can negotiate with an oil shock.
Oh, and the fix for the inbox problem makes the RAM problem worse. Of course it does.
https://blog.ppb1701.com/theyre-racing-to-stay-ahead-of-the-fuse
#ai #openai #bubble #bigtech #userhostile #blog #privacy #github
I made a fun little graphic
CC0 public domain no rights reserved, feel free to print it, modify it, etc. Make it less vulgar, more vulgar, whatever you want. Change the centre text to apply to your own work. Make a pro-AI version if you want, it's a great exercise in creativity and basic graphic design (:
The font is Texturina by Guillermo Torres (https://github.com/Omnibus-Type/Texturina), a free font released under the SIL Open Font License. Centre font is Texturina Bold, Subtitle font is Texturina Medium, Edge font is Texturina Medium and Medium Italic. It's a very simple design that's recreatable in any design tool (or even something like Powerpoint).
SVG: https://cxiao.net/files/ai/rawdogprogramming.svg
PNG: https://cxiao.net/files/ai/rawdogprogramming.png
Not a Tech Bro.
Excellent.
https://amiatechbro.com
#amIaTechBro #tech #democracy #autocracy #idiocracy #humanrights #workersrights #AI
DAIR is a research institute that is highly sceptical about AI hype and the big tech companies behind it. You can follow their excellent video account at:
➡️ @dair@peertube.dair-institute.org
They've already published over 100 videos. If these haven't federated to your server yet, you can browse them all at https://peertube.dair-institute.org/a/dair/videos
You can also follow their Mastodon account at @DAIR@dair-community.social
#FeaturedPeerTube #AI #LLM #LLMs #ArtificialIntelligence #PeerTube
RE: https://mamot.fr/@pluralistic/116334612940112223
"what's objectionable about Anthropic – and the AI sector – isn't copyright. The thing that makes these companies disgusting is their gleeful, fraudulent trumpeting about how their products will destroy the livelihoods of every kind of worker [...] And it's their economic fraud, the inflation of a bubble that will destroy the economy when it bursts [...] It's their enthusiastic deployment of AI tools for mass surveillance and mass killing" - @pluralistic
🚨 Keynote Speaker Alert! 🚨
We’re excited to welcome @HannahFoxwell, Co-founder of BIMP, to Global AppSec Vienna!
Her talk dives into AI-driven developer velocity, what works, what doesn’t, and how to stay secure at speed. Don’t miss it!
Claude Code's Source Leak Was Embarrassing. The Real Story Is What It Revealed
Anthropic's Claude Code source leak exposed far more than implementation details. It exposed roadmap, trust assumptions, and how brittle npm security has become.
You don't have to pretend that Claude Code's source code is lovely just because you like using it or are impressed by whatever madness is going on around AI right now.
#AI #GenAI #GenerativeAI #LLMs #Anthropic #Claude #ClaudeCode #ClaudeCodeLeak #AgenticAI #tech #dev #software #SoftwareEngineering #SoftwareDevelopment
A Publisher Pulled a Book for Suspected A.I. Use.
"The thing that ultimately convinced me that A.I. had had a hand in the text I was reading was a feeling: the sense, quite literally, of a lack of a person behind the words."
It’s International Fact-Checking Day! Refresh your AI identification skills and help to identify AI-generated false news.
OMFG this sales pitch is so INFURIATING
"need to market your product? how about starting a blog! No writing creativity time involved at all! Just say hey #AI make me a BLOG and it will you never even have to listen to it! Your customers will not know the difference! we spend nothing on education they grew up with flawed information nothing can be trusted so no one knows what is true anymore its all crap they're all crap your crap and your crap is crap."
«Anthropic — Claude Code source code accidentally made publicly accessible:
The source code of Claude Code ended up on the net through a misconfiguration, and it shows what Anthropic may be planning next.»
I can only laugh at this: AI is built on stolen data, and when those companies get robbed themselves, oh, how loud the wailing is.
#datenklau #anthropic #CloudCode #ai #ki #quelltext #sourcecode #itsicherheit
🧵 …yes, when AIs snoop on other AIs, this is what happens:
«After the Claude Code leak: Anthropic's source code has already been copied more than 8,000 times:
Anthropic's "Claude Code" code has reportedly surfaced on GitHub more than 8,000 times already.»
#cloudcode #ki #sourcecode #datenklau #ai #quelltext #itsicherheit #datenschutz #github #git
🧵 …so that's how intelligent Anthropic's AI is. Just put the code in the cloud via Git and, once it's widely discovered, declare yourself "hacked". Well, that kind of simple "intelligence" works too.
ENG: «Claude Code's Source Didn't Leak. It Was Already Public for Years.»
😅 https://www.afterpack.dev/blog/claude-code-source-leak
#cloudcode #ki #sourcecode #datenklau #ai #quelltext #itsicherheit #datenschutz #github #git #hack #hacking #copy #web #online #source
"AI is writing 90% of our code" sounds impressive before you realize that AI-generated code is orders of magnitude more verbose & less efficient than code written by a professional software engineer.
But "we ship 9 lines of fluff for each line of code that does something" doesn't sound as impressive.
It's that time of year again, and I'm canceling my last remaining @bitwarden accounts to avoid being part of their plans to integrate agentic AI into their products 🤷♂️
> And remember! These people and companies in AI started destroying academia and #ethical work and oversight well before the release of ChatGPT.
This! People have warned about the harmful effects of #AI algorithms on our #society for _literally_ decades now:
RubyConf 2015 - Keynote: Consequences of an Insightful Algorithm by Carina C. Zona
https://www.youtube.com/watch?v=Vpr-xDmA2G4
Biased bots: Human prejudices sneak into AI systems (April 2017):
https://www.bath.ac.uk/announcements/biased-bots-human-prejudices-sneak-into-ai-systems/
1/2
Reading the Cloudflare blog: «Introducing EmDash — the spiritual successor to WordPress that solves plugin security»
The cost of software development has dropped dramatically. We recently used an AI code agent... our agent has been working on a more ambitious project: rebuilding the WordPress open-source project from scratch.
96% of WordPress site security issues stem from plugins... there is no isolation mechanism... when you install a WordPress plugin, you are effectively granting it nearly every permission.
EmDash solves this. In EmDash, every plugin runs in its own sandbox... whoever adds a plugin knows exactly which permissions they are granting it;
Every EmDash site has built-in x402 support, so you can charge for access to content;
EmDash is different: it is designed to run on serverless platforms;
EmDash is powered by Astro;
Every EmDash instance includes Agent Skills;
The EmDash CLI lets your agents interact programmatically with local or remote EmDash instances;
EmDash uses key-based authentication by default;
🔗: https://blog.cloudflare.com/emdash-wordpress
Deploy: https://deploy.workers.cloudflare.com/?url=https://github.com/emdash-cms/templates/tree/main/blog-cloudflare : https://github.com/emdash-cms/emdash/
Hacker News: https://news.ycombinator.com/item?id=47602832
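The "plugin declares its permissions, the site owner grants them explicitly" model described in the EmDash post can be sketched as a toy capability gate. This is purely illustrative: the class, capability names, and API below are invented, not EmDash's actual implementation.

```python
# Illustrative-only sketch of an explicit-grant plugin permission model.
# Every name here is hypothetical; EmDash's real sandbox is not this code.
class PermissionDenied(Exception):
    pass

class PluginHost:
    # The full set of capabilities the platform could ever expose.
    CAPABILITIES = {"read_posts", "write_posts", "send_email"}

    def __init__(self, granted: set):
        # Reject grants the platform doesn't even know about.
        unknown = granted - self.CAPABILITIES
        if unknown:
            raise ValueError(f"unknown permissions: {unknown}")
        self.granted = granted

    def call(self, capability: str):
        # Every call is checked against the grants the site owner made
        # when installing the plugin; nothing is implicit.
        if capability not in self.granted:
            raise PermissionDenied(f"plugin lacks permission: {capability}")
        return f"{capability} ok"

# A plugin installed with only read access cannot send email.
host = PluginHost({"read_posts"})
print(host.call("read_posts"))   # read_posts ok
try:
    host.call("send_email")
except PermissionDenied as e:
    print(e)                     # plugin lacks permission: send_email
```

The contrast with the WordPress model the post criticizes is that here the default is deny: a capability the owner never granted simply cannot be invoked.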
AI Cronyism is on full display in DC now. If it weren't so dangerous to all of us, it would be outright fascinating how fast the idea that "states are the laboratories of democracy" disappears, once your party is in charge.
#AI #regulation
https://www.npr.org/2026/03/28/nx-s1-5755062/trump-wants-a-deadlocked-congress-to-move-on-ai-frustrated-states-say-they-already-have
Whoa - #Delve's Series A pitch deck lists #micro1 as a customer - the same schmucks posting ghost jobs, which LinkedIn will not remove, to populate their #AI #ATS platform.
https://deepdelver.substack.com/p/delve-fake-compliance-as-a-service-61d
#startup #startups #fraud #fakeCompliance #compliance #GRC #seriesA #Ycombinator #ycombinatorfunded
#ghostJobs
#linkedIn #stinkedIn #substack #substackSupportsFascists
#hiring
I Built a Complex RAG App Using Warp, the Agentic Development Environment 🤖🧠
🚨 EU AI Act goes FULL FORCE on August 2, 2026.
What's banned? What can cost €35M in fines?
✅ 8 prohibited AI practices
✅ High-risk systems: hiring, credit, healthcare, justice
✅ GPAI rules for ChatGPT, Gemini, Claude
✅ Business compliance checklist
Full guide: https://newsgroup.site/eu-ai-act-2026-what-banned-august-full-list/
Most folks are deluding themselves that we're in a race for perfection.
Years of project management have taught me that "good enough is good enough".
And if anyone doubts that, a good look around is testimony that mediocrity surrounds us. Anyone who has worked anywhere for any length of time will confirm it.
TLDR; We don't need #Ai for mediocrity at scale
#Oracle cuts jobs across sales, engineering, security
Just so you know, if you use ChatGPT, it may search in Grokipedia, the disinformation Wikipedia clone made by a nazi.
An unnamed school in the UK used #AI to decide which books to ban from the library.
https://www.standard.co.uk/news/uk/uk-school-library-1984-ai-b1276535.html
#BookBans #Censorship #DefendResearch #Libraries
EDIT. I fixed a broken link. Thanks to @staringatclouds .
Update. Here's some detail on #AI tools to help conservative schools ban books.
https://www.404media.co/blockade-the-right-is-using-ai-content-scanners-to-try-to-supercharge-book-banning/
One is BLOCKADE, which stands for “Blocking Lustful Overzealous Content, Keeping Away Depravity and Extremism."
"The program’s script includes a list of roughly 300 words, each assigned a severity score that contributes to an overall appropriateness score…The script explicitly defines 'educational inappropriateness' as 'content offensive to conservative values,' while also asking the AI 'not to include any additional text or explanation' for its decisions."
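The scheme the report describes (a word list, a per-word severity score, summed into an overall appropriateness score against a cutoff) can be sketched roughly as below. The words, scores, and threshold are invented for illustration; the real BLOCKADE list reportedly has around 300 entries.

```python
# Rough sketch of a word-list severity scorer like the one described in
# the 404 Media report. All entries and the threshold are invented;
# this is not BLOCKADE's actual script.
import re

SEVERITY = {"violence": 3, "rebellion": 2, "desire": 4}  # invented entries
THRESHOLD = 5  # invented cutoff

def appropriateness_score(text: str) -> int:
    # Each occurrence of a flagged word adds its severity score.
    words = re.findall(r"[a-z]+", text.lower())
    return sum(SEVERITY.get(w, 0) for w in words)

def is_flagged(text: str) -> bool:
    return appropriateness_score(text) >= THRESHOLD

sample = "A story of rebellion and desire."
print(appropriateness_score(sample))  # 2 + 4 = 6
print(is_flagged(sample))             # True
```

Even this toy version shows the obvious failure mode: the score depends only on which words appear, not on what the book actually says about them.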
At this point, these things are so expensive it's better to buy (or salvage) an old PC....
The Verge: These Raspberry Pi price hikes are no joke
Prices are going up by over $100 in some cases thanks to those AI fools.
https://www.theverge.com/gadgets/905189/raspberry-pi-price-increases-pi-4-3gb
“AI” exists to disenfranchise labor. That’s what it’s for. Regardless of how good these stochastic systems are or the flaws they have just being able to point at the non-unionized robot whenever the employees ask for raises or anything really is incredibly valuable for business. The existence of “AI” and the supporting narrative mean that you defending your value and therefore price on the market will have a way harder time.
But that’s the impact for people on the top. That’s why the C-Suite wants those tools to exist and to be deployed. I think there is also a strong effect on the way “AI” affects the social relationships between peers and coworkers.
Work is not really about finding friends. It happens but it's not necessary. But there is – ideally – a level of respect for one another. The acknowledgement of each other's expertise, skills and contributions as well as one's weaknesses. All of those allow the forming of communities based on solidarity: if the people touching computers realize that the workers in the warehouse are part of the same struggle, all those groups together can actually develop some power to make their needs heard and acted upon. If, for example, a company manages to separate the different groups and departments and pit them against each other, it weakens everyone but management.
And I think “AI” has this massive effect on feeling connected with each other, in seeing each other as peers.
Let me tell you a story. A few weeks ago a colleague was talking about an issue he had with a specific piece of software that we are – as we usually do – using a bit outside of what it is intended for. He outlined the problem and then went into the things he tried to solve it culminating in an explanation of what probably will work. Another colleague I respect a lot then responded with “did you ask the AI?” and it felt like a scene from a movie where the protagonist is just sitting there, hearing something and suddenly a big dude comes in and slaps him in the face with a fish. I was irritated and it really took me a while to understand why.
It was not the absurdity of the statement (what even is "the AI"?) or the weird dynamic of responding to a colleague who had just presented his solution with the suggestion to use an unreliable search engine instead. It was me feeling myself drift away from that person. Because that statement made it so obvious how, for him, every "here's a problem" is now connected to "let's ask the spicy autocomplete".
In the weeks after that, whenever there was an issue with the stuff coming from that team, when something wasn't up to par or had a weird structure, I realized how I kept attributing it to them leaning heavily on slop machines. Which isn't necessarily true: shit happens, software and hardware are complicated, and sometimes things end up weird or hackish or broken – regardless of what tech you use.
But I realized that this event (and all the little similar events before) ate away from a trust relationship that had been developed over years. I was having sort of the perceived “workslop” experience.
“Workslop” is a term that defines bad, “AI” generated work product that someone produces to fulfill their work duties on a surface level that their coworkers will have to clean up: I generate a bunch of code that kinda works and someone else realizes that it’s a hot mess when trying to run it and has to clean up the mess. In that example I would have produced workslop (but might have been very efficient!).
The experience of workslop (whether it’s real or mostly perceived as I showed in my short story) directly erodes social connection, erodes trust in one another and in the end erodes solidarity. Because why would you stand with a person who does not do their job and offloads their work on you?
Pride in one’s work under capitalism is a bit of a weird thing: You can be proud of what you did but that great work will more often than not not be especially rewarded. But what it does is signal something to your coworkers: Doing good work means that you respect the people working with you, working off of what you did. It means that you try to make their lives as easy as possible because you’re all in the same boat.
“AI” pushes everyone towards slop. “Just do it. It’s easy. You look innovative. It’s fast. The others can also just use the slop machine to fix the mess if it occurs.” But the dissolving effect on the social fabric of the workplace should not be underestimated: we are already working in conditions that make building the foundations of solidarity harder. Teams get pitted against each other based on KPIs, and everyone is working from home, never having to meet their coworkers. As usual, “AI” is just gasoline on the fire. But maybe we should start putting out some fires?
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
→ Adults Lose Skills to AI. Children Never Build Them.
https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them
“The real issue may not be that people are broadly becoming lazier or stupider (though this may still be the case). Instead, I argue that a distinct difference needs to be made clear depending on who is using the tool. What AI does to a 45-year-old is likely categorically different than what it does to a 14-year-old.”
And I'm always yelling at the stupid machine, and then I feel bad, because there's probably a person listening and pushing numbers for me on the backend, because so much AI is actually automated human intervention creating an illusion of machine grace that rests upon underpaid laborers in poorly regulated economies #AI
I had a phone thing demand that I "use natural conversational language to describe the reason for" my call. Seriously, just let me push fucking numbers; that was fucking hell #AI
So Anthropic employees are using Claude Code to contribute AI-generated code to open source repositories and hiding the fact using their own internal “undercover mode”.
Totally trustworthy people.
(Any open source project that at the very least requires disclosure of AI-authored contributions should immediately ban Anthropic employees on principle.)
Buddy. I've written COBOL. I spent several years working almost daily with a 3-million-line monstrosity of a COBOL program. I was working on another app that interfaced with it, but in that work I occasionally had to read the code and in a few cases modify it. Granted I haven't spent as much time looking at the leaked Claude Code source code (and won't lol), but nevertheless I confidently declare that Claude Code is worse. "Spaghetti code" doesn't come close to describing this thing.
#AI #GenAI #GenerativeAI #LLMs #ClaudeCode #ClaudeCodeLeak #Anthropic #Claude #tech #dev #SoftwareEngineering #SoftwareDevelopment #software #COBOL #LinkedIn
"While the Middle East burns and Americans pay more for food and gas, Trump and his billionaire friends, such as Larry Ellison, are profiting off of war, gobbling up our data, and going all-in on AI."
~ Wajahat Ali with Audrey Hanson
#Trump #Iran #war #EconomicElites #billionaires #AI #LarryEllison #profiteering
/1
https://thelefthook.substack.com/p/how-the-illegal-iran-war-is-profiting
❗️ Iran threatens Microsoft, Apple, Google & other top US companies
'For every assassination, one American company will be destroyed'
https://x.com/RT_com/status/2038996923701510488
#DonaldTrump #EpsteinClass #IsraelFirst
#StraitOfHormuz = #Iran #Sanctions
#news #socialmedia #uspol #mastodon #usa @politics @socialmedia #resistance #education #europe #middleeast #war #eu #humanrights #activism #protest #cdnpoli #canada #Palestine #Hezbollah #ai #FreePalestine #Lebanon #economy @humanities #Cuba #tech @technology
🇮🇷 "The permanent passage of Israeli ships through the strait will be banned, for all of history, under our management."
https://x.com/DD_Geopolitics/status/2039309551531671918
#Iran #StraitOfHormuz #news #resistance #freepalestine
#DonaldTrump #EpsteinClass #IsraelFirst
#StraitOfHormuz = #Iran #Sanctions
#news #socialmedia #uspol #mastodon #usa @politics @socialmedia #resistance #education #europe #middleeast #war #eu #activism #protest #cdnpoli #canada #Palestine #Hezbollah #ai #FreePalestine #Lebanon #economy #Cuba
Do you hate how Google's AI takes the work of journalists & summarizes it without linking to their websites anymore?
Now there's a Chrome extension you can get that stops #Google from using its own #ai for answers! It provides good ol' web links, like Sergey Brin, Larry Page & God intended.
It's called Bye-bye Google AI & was made by Avram Piltch last year. Here's the story, which also gives other ways to turn it off in iOS & Android. https://www.tomshardware.com/how-to/block-google-ai-overviews
https://chromewebstore.google.com/detail/imllolhfajlbkpheaapjocclpppchggc
How AI has suddenly become much more useful to open-source developers https://www.zdnet.com/article/maybe-open-source-needs-ai/ via @ZDNet & @sjvn
#opensource developers are finding that, when used properly, #AI can actually help current and long-neglected programs. However, legal and quality issues loom.
"Groups that challenge books have begun using Gemini, ChatGPT, xAI, and other AI tools to try to get books banned." The point is to create a chilling effect. #AI https://werd.io/the-right-is-using-ai-content-scanners-to-try-to-supercharge-book-banning/
🥸 Anthropic hides its authorship from open-source projects that reject AI.
「 Prompt instructions in a file called undercover.ts state, "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." 」
https://www.theregister.com/2026/04/01/claude_code_source_leak_privacy_nightmare/
Neural networks have been one of the most transformative technologies of the past decade. They were originally inspired by human brains, and their promise is to help us understand human intelligence (or replace it, depending on who you ask). With all the excitement and drama around AI these days, you might be forgiven for thinking we're nearly there! In reality, though, we've only understood a tiny fraction of what the brain is doing. I'd like to convince you that what's happening in your skull is vastly more than that, and that there's plenty of opportunity in the field of AI that's far afield of where most of today's research is focused.
https://thinkingwithnate.wordpress.com/2026/04/01/brain-like-computing/
Fascinating and f'ing terrifying.
"RuView: See through walls with WiFi + AI
Perceive the world through signals. No cameras. No wearables. No Internet. Just physics.
WiFi DensePose turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection -- all without a single pixel of video."
DDR5 RAM prices are finally showing signs of relief—dropping by as much as 30% in select cases. However, this is not a true market correction, but rather a temporary fluctuation within a much larger supply crisis.
Recent price dips are largely tied to emerging technologies like Google’s TurboQuant, which could reduce AI memory demand. Yet the broader reality remains unchanged: AI data centers continue to dominate DRAM supply, keeping overall availability tight and prices historically elevated. Even with recent declines, DDR5 kits are still 3–4× higher than pre-2025 levels, and inventory remains constrained.
In short, what we are seeing is stabilization—not recovery. The “memory shortage era” is far from over, and volatility will likely persist through 2026 and beyond.
#DDR5 #RAM #MemoryMarket #AIInfrastructure #Semiconductors #TechNews #Hardware #Datacenter #AI #SupplyChain
Article - “Common Sense Media pitches tech giants to pay $100M each for AI safety effort”
This sounds like a good idea. But are #AI companies really going to pay for a group to criticize their products?
(Not an April Fool’s post)
One of my ex-students, Gareth Bowden (Head of Development/FelxMR) posted this apposite note on LinkedIn:
'With global investment in AI now reaching into the trillions of pounds over recent years, it feels like it might be a good moment, both economically & socially, to step back and conduct an impact assessment (like any well-run business would at this stage)?'
#AI #GenAI #GenerativeAI #LLMs #Anthropic #Claude #ClaudeLeak
Maine Monitor: Maine lawmakers targeted by social media campaign opposing data center ban. “One of Maine’s top law firms is pressing legislators to oppose a full ban on the construction of new data centers in a bid to save its clients’ ventures in the state. Preti Flaherty, an Augusta-based law firm that’s representing companies trying to build data centers in Sanford and Jay, launched […]
https://rbfirehose.com/2026/04/01/maine-monitor-maine-lawmakers-targeted-by-social-media-campaign-opposing-data-center-ban/
Excellent article from Saudi Arabia, asking whether the USA entered the war because it trusted silly little text extrusion machines over human expertise.
#Palantir's "Ender's Foundry" used to develop #simulations for #war outcomes was developed only SIXTY F****NG DAYS before planning commenced!!
So, basically an off-the-shelf #chatbot, with sycophancy still full on, gave these clowns the confidence to go into a full-scale disaster in #iran 🤦♂️
https://houseofsaud.com/iran-war-ai-psychosis-sycophancy-rlhf/
Ubuntu 26.04 LTS erhält systemweiten KI-Assistenten
https://linuxnews.de/ubuntu-26-04-lts-erhaelt-systemweiten-ki-assistenten/ #canonical #ubuntu #ki #ai #linux #linuxnews
🔥 Red Hat pivoting to AI slop
「 Red Hat will try to "influence community development processes such that our processes can converge over time." It is possible this means the company might attempt to get external development communities to adopt similar practices – and that the authors anticipate significant resistance and even pushback from some communities 」
Can we print this part of Microsoft's T&S as a leaflet and distribute at our university?
https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/termsofuse
#Clearview #AI used nearly 1m times by US police
"Facial recognition firm Clearview has run nearly a million searches for US police, its founder has told the BBC.
Clearview's system allows a law enforcement customer to upload a photo of a face and find matches in a database of billions of images it has collected.
It then provides links to where matching images appear online. It is considered one of the most powerful and accurate facial recognition companies in the world."
https://www.bbc.com/news/technology-65057011
#Surveillance #faceID #Überwachung #police #netzpolitik #antireport #reclaimyourface
Insightful video. Regardless of your stance on LLMs, you will learn a lot from analyzing this vid.
The truth about LLMs
https://www.youtube.com/watch?v=Cn8HBj8QAbk
#LLM #AI #slop #enshittification #programming #large #language #model #technology #hidden #whistleblower #insightful
History is not just written.
It is selected.
Amplified.
Omitted.
Now we are training systems on it.
What gets carried forward?
https://knowprose.com/2026/03/llms-and-the-inheritance-of-knowledge/
#AI #LLM #EpistemicJustice #EpistemicInheritance #SignalVsNoise #DIDO
If you're feeling discouraged fighting against this fresh "AI" hell we're dealing with daily now,
I recommend listening to this awesome podcast by @DAIR to lift your spirits.
You're not alone in this battle
Available on PeerTube!
https://peertube.dair-institute.org/c/mystery_ai_hype_theater/videos
#NoAI #AI #MysteryAIHypeTheater3000 #PeerTube #Podcast #Fediverse
#AI #datacentres can warm surrounding areas by up to 9.1°C
Hundreds of millions of people live close enough to data centres used to power AI to feel warmer average temperatures in their local area
They discovered temperatures increased by an average of 2°C (3.6°F) in the months after an AI #datacenter started operating. In extreme cases, the increase was 9.1°C (16.4°F). The effect wasn't limited to the immediate surroundings of the data centers: it extended up to 10 km away
https://www.newscientist.com/article/2521256-ai-data-centres-can-warm-surrounding-areas-by-up-to-9-1c/
Setting up #OpenClaw with a screen reader is extremely annoying, so I put together a simple script to manage an isolated Docker container with persistent assets mounted on the host. It's configured to work with Discord and the OpenAI Responses API to accommodate various engines and models. It also includes a working Chromium browser, MarkItDown, and a few other tools for agents to use inside the container! I'm currently running Qwen3.5-35B locally! #LLM #AI #Accessibility https://github.com/chigkim/easyclaw
Just a gentle reminder that the "If I don't club baby seals, someone else will club them"-style argument isn't an argument.
(Re: a conversation I had with a friend last night, not intended as a #vaguetoot against anyone on here)
A judge ruled a person's use of AI for legal advice does not mean those conversations are protected by attorney-client privilege or are attorney work product.
While their conclusion was correct, the logic used to reach it raises some concerns.
TLDR; don't give sensitive information to AI.
Or just don't give any information to AI.
The npm installation method is now deprecated.
Native installer is faster, requires no dependencies, and auto-updates in the background.
...the main cleanup step is making sure the old npm version is fully gone.
I almost didn't want to post, but since the wood folk have so little joy in their lives they dance around the fire in the woods, every time they think clankers stumble.
Youse are getting excitable over deprecated code, which in #Ai is last Friday's push to prod.
New AI Tool Forecasts Drought 90 Days Ahead Nationwide
The USGS River DroughtCast tool may provide communities extra time to prepare for water shortages that could impact agriculture, municipal supplies, recreation and ecosystems.
If you are using coding agents, be very explicit with your prompts, don’t assume the agent implicitly knows your intent.
LLMs are trained to be helpful and will always try to over-deliver.
In agents that can take actions, this can be dangerous.
Compare these two prompts and the responses and actions taken.
Also GitHub this is dangerous ⚠️
I had to make sure that the evidence was on PeerTube, but Larry Ellison spilled the beans. They want the panopticon, and the whole "gen-AI" con game was a cover for that.
RE: https://mastodon.social/@wearenew_public/116324535438933195
🖋️ We are proud to have today endorsed The Pro-Human AI Declaration.
Our community was started in 2018 as a reaction to the abuse of human rights by technology companies, and today our human rights are again even more seriously threatened by their historic push for adoption and use of LLMs at any cost.
Ask your Fediverse community, and all other groups you're involved in, to sign on to our collective cause.
The @OneRSAC Conference just wrapped & the headline underneath every announcement is the same: enterprises are deploying AI agents faster than #infosec teams can track them. This @AGATSoftware piece details the #AI implementation work that needs to be done. https://api.cyfluencer.com/s/rsac-2026-what-ai-agent-security-looks-like-now-26300 #RSAC
One message came through clearly at RSAC: security teams want infrastructure they can control, extend, and own. Not another black box AI SOC product with no visibility into how decisions are made.
LimaCharlie's open-source AI triage agents are built for that.
Each agent is a self-contained, installable unit with defined scope, permissions, and behavior, running on real SecOps infrastructure and deployable on demand.
On April 8th at 10am PT / 1pm ET, LimaCharlie CEO and founder Maxime Lamothe-Brassard walks through the architecture live and demonstrates what it actually looks like to run full SOC operations on Claude Code.
🔥 Oracle cuts 30,000 jobs to pay for datacenters no one will use.
「 Workers across the U.S., India, and other regions learned their jobs were gone before most people had finished their morning coffee, with no prior warning from HR or their managers 」
https://rollingout.com/2026/03/31/oracle-slashes-30000-jobs-with-a-cold-6/
RE: https://mastodon.social/@nixCraft/116324270189877586
#AISlop-Inception: We must #slop deeper!
Claude Code's source code has been leaked via a map file in their NPM registry https://xcancel.com/Fried_rice/status/2038894956459290963 😂
Guess what? Most of the code is either slop or even good old regexes, like one for detecting negative sentiment in user prompts, which is then logged
These tools are going to replace 80% of all dev jobs and their plugin is gonna maintain all security and banking code? 🤡
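For context, the kind of keyword-regex "sentiment detection" being mocked here looks roughly like this. This is a hypothetical Python sketch, not the leaked source; the pattern, keyword list, and function name are all invented for illustration:

```python
import re

# Hypothetical illustration of a crude keyword-based "negative sentiment"
# check -- NOT the actual leaked Claude Code source.
NEGATIVE_PATTERN = re.compile(
    r"\b(hate|terrible|awful|broken|useless|garbage)\b",
    re.IGNORECASE,
)

def looks_negative(prompt: str) -> bool:
    """Return True if the prompt contains any crude 'negative' keyword."""
    return bool(NEGATIVE_PATTERN.search(prompt))
```

The point of the mockery: a check like this has no understanding of context ("this is not useless" still matches), which is why regex keyword matching is considered a throwback, not "AI".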
https://winbuzzer.com/2026/03/31/google-5-billion-anthropic-data-center-texas-xcxwbn/
Google Nears $5B Deal to Finance Anthropic Texas AI Data Center
#AI #Google #Anthropic #AIInfrastructure #BigTech #DataCenters #GoogleCloud #Alphabet #Claude #Texas
Insilico Medicine has signed a global licensing deal worth up to 2.75 billion USD with Eli Lilly for an AI-discovered oral GLP-1 drug. The Hong Kong-listed AI drug discovery company received 115 million USD upfront and reported 2025 revenue of 56.2 million USD, with its drug discovery segment growing 693.6% year-over-year. https://pandaily.com/insilico-medicine-signs-2-75-b-deal-with-eli-lilly #China #Tech #AI #Insilico
The most dangerous place you visit every week isn't on the dark web. It's aisle seven. At BSides312, Ginji Terrano (ギンジ🐾ターラノー) is diving into how AI tracks your habits, manipulates your choices, and squeezes every dollar out of your cart — at the grocery store.
May 16th. Chicago. 🎟️ https://bsides312.org
#BSides312 #InfoSec #AI #Privacy #CyberSecurity #Chicago
Shenzhen has activated China's first 10,000-card intelligent computing cluster with Huawei Ascend 910C chips, delivering 11,000 petaflops of computing capacity. Nearly 50 organisations have signed framework agreements, with combined booking across both phases reaching 92 percent. https://www.scmp.com/tech/big-tech/article/3348502/shenzhen-activates-chinas-first-10000-card-ai-cluster-domestic-chips #China #Tech #AI #Huawei
Journal for AI Generated Papers
One positive I can think of is that folks who wish to "collaborate" with machines can congregate there, giving the rest of us a clear signal about who to block, ignore, critique, ridicule...
Where humans and machines are welcomed.
The Open Prompting Journal Built Collaboratively by its Community.
cc @olivia@scholar.social @Iris@scholar.social @dingemansemark@scholar.social @alex@dair-community.social @emilymbender@dair-community.social
I can't begin to say just how incredibly naive this "poison data" nonsense is.
1) First, and this should be the only consideration: you are literally defecating into the sea of information. Sure, you think you have good reasons, but this is an act of vandalism on an unimaginable scale...
2) ... Not that it is effective, because the "data" you offer is rejected. The training-data intake is not an unsorted maw: it's got multiple filters, and nonsense does not get through the pre-filters. These days models are more discerning. Gibberish does not get into the training data.
3) Up to 30% of web searches are now via #Ai, so the folks deploying spoilers are the same ones complaining about loss of traffic.
4) Lastly, the wood folk are the first ones complaining about "random word generators", so even assuming your 'strategem' works (it doesn't), you are literally trying to make Ai shittier.
You don't care about the data centre resource burn; you are throwing logs on the fire.
There are simple ways to test if your "brilliant" idea works, I'll leave that to you to test your algorithm...
... You do have a test plan, don't you?
When I try to engage in rational discussion about AI, some people assume that I am a proponent of AI.
It's a false assumption.
I'm a proponent of rational discussion.
<https://stevengharms.com/posts/2026-02-06-the-positive-case-for-ai-assisted-development/#massively-negative-externalities> – "Massively Negative Externalities" (part of a three-part series of blog posts) – succinctly captures some of how I feel.
Can #HongKong win the #AI race? I think it has all of the right ingredients with the main headwind potentially being high #finance costs for #realestate which of course means higher operating costs for #datacenters that run it. However, the availability of enough commercial real estate, attractive tax regime, and vast amounts of power for reasonable prices help, especially compared to #business in #Singapore
What do you think, is #AI stealing this #tech employee's #tech #career ?
Of course only employers know, but I would say confidently that many #eCommerce #jobs, especially entry level, can be replaced with AI.
My biggest concern about the AI impact on the job #market is that many entry-level jobs are being wiped out. This is because with AI, for many tech-related tasks, you can get away with an intermediate or senior supervising the AI agents as they work.
OpenAI canceling many large purchase orders, RAM price is dropping. https://peq42.com/blog/openai-canceling-many-large-purchase-orders-ram-price-is-dropping/
#AI #openai #economy
New York Times Cuts Ties with Book Review Writer Over AI Use
The New York Times has cut ties with a freelancer after the paper discovered he used AI to…
#NewsBeep #News #Artificialintelligence #AI #ArtificialIntelligence #Technology #THENEWYORKTIMES #UK #UnitedKingdom
https://www.newsbeep.com/uk/504197/