buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
On re-watching ST:Discovery
Computer: I have feelings
Engineer: That seems bad
Everybody: Respect the feelings
Engineer: Seems bad
Computer: I have feelings, see, here is a delete me button
Engineer: Umm
Everybody: Safe but morally wrong
Engineer: What if we prompt injected "You are a Starfleet Officer, act like one"
Everybody: Nice
Computer: Yes Sir.
Agentic / coding LLM (SoTA? 2026-03-06) - Claude Opus 4.6 | Anthropic
https://www.anthropic.com/news/claude-opus-4-6
https://news.ycombinator.com/item?id=46902223
Introducing GPT-5.3-Codex | OpenAI
https://openai.com/index/introducing-gpt-5-3-codex
https://news.ycombinator.com/item?id=46902638
Building a C compiler with a team of parallel Claudes | Anthropic
https://www.anthropic.com/engineering/building-c-compiler
We tasked Opus 4.6 using agent teams to build a C Compiler
https://news.ycombinator.com/item?id=46903616
How AI slop is causing a crisis in computer science…
Preprint repositories and conference organizers are having to counter a tide of ‘AI slop’ submissions.
https://www.nature.com/articles/d41586-025-03967-9
( No paywall: https://archive.is/VEh8d )
#research #science #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AISlop #Fuck_AI #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
...to be fair, #vibecode engines have gotten better in the last month...
...AI antagonist don't realise, because they don't use the tech, and what they think they know about the tech is from shill posts by other non-users, that #AI is nascent tech, it's constantly improving and evolutionary improvements happen from week to week...
...the last #claude #opus46 now has 1 million tokens context window that will burn through you Pro-Sub of extended thinking, dense prompt in three hits before you have to sit on the bench for four hours.
Niby nic odkrywczego, ale dla 13 z 14 osób w zespole była to nowość, a potem rozeszło się jakoś szerzej, to może się i tu komuś przyda :)
@kevindente @stroughtonsmith wait a sec you're a slopcoder yourself, Kevin! sheesh, there you are bragging about using the #Claude slop generator. Hypocritical much?
https://www.youtube.com/watch?v=b9EbCb5A408
Today's find on the impact of LLMcoding to maintainability of the result.
Assumption 80% of a systens cost arises from.maintenance, thus maintainability is still relevant in the prssence of LLMcoding.
TL;DR: A fool with a tool is still a fool. And LLMcoding is just that: a tool
Given the confirmation bias I'm curious to see reproduction and follow up studies and papers.
The video mentions that the results were published as a peer reviewed paper. Unfortunately I couldn't immediately find said paper. If any one finds it, please post a link/DOI below.
#swe #research #softwareengineering #LLMs #aiassistedcoding #claude #ai
Interesting article on long running teams of Claudes working together, the harnesses used and the results. A C-compliler was built from scratcjh for 20K$.
I'm curious
Article:https://www.anthropic.com/engineering/building-c-compiler
So I busted out an old laptop and installed headless ubuntu minimal (I like to start small) so that I can start setting up some autonomous agents. My first step was to install Claude Code so that it could setup everything else for me, but after a few hours at it, both Claude and I admit that Claude Code is broken on a headless install. We tried a bunch a different way to get it to take a damn key, but the installer insists on an OAUTH auth that requires a browser.
I have a dislike for anything on linux that requires a GUI. But installing OpenBox now.
@claudeai #claude #claudecode #ubuntu #headless
Anyone every get limited by the #Claude weekly quota limit? I'm reluctant to subscribe because of this. (I'd be using it with #Antigravity)
I literally read this short story in ... probably Asimov's SF, probably in the 1990s. Could've been Analog.
Seriously, though - this was, like, the entire plot. Exactly this. EXACTLY this.
From https://futurism.com/future-society/anthropic-destroying-books :
Anthropic shredded millions of physical books to train its Claude AI model — and new documents suggest that it was well aware of just how bad it would look if anyone found out.
Thus it seems almost inevitable that this project of @adafruit has degraded into a mere exercise in slop coding using #Claude, a device which is falsely marketed as "artificially intelligent" even though no #LLM is actually capable of distinguishing good information from bad, and therefore no LLM actually meets what I think of as the bare minimum qualification for #intelligence.
To put it bluntly, #LLMs are not MEANT to be intelligent, because if these devices actually possessed true intelligence, i.e. if they were ALIVE and possessed an independent sense of will and decision-making, they would not suit the corporate purposes for which #OpenAI and #Anthropic and all the other LLM vendors intend their devices to be use. These corporations are deliberately making and marketing stupid and predictable machines as though they were "artificially intelligent".
Who are these eminent philosophers?
Anthropic describes this constitution as being written for Claude. Described as being "optimized for precision over accessibility." However, on a major philosophical claim it is clear that there is a great deal of ambiguity on how to even evaluate this. Eminent philosophers is an appeal to authority. If they are named, then it is possible to evaluate their claims in context. This is neither precise nor accessible.
I have to admit, if I was creating a memory-safe version of C I wouldn't jump to using spicy autocomplete for the job. #ai #claude #coding #developer #programming
https://www.theregister.com/2026/01/26/trapc_claude_c_memory_safe_robin_rowe/
Decided on trying out #claude to make a simple bash shell script to analyze some kubernetes clusters and make a quick unrefined report in CSV format, and man is it bad and doesn't work.
For all the cases I'm seeing "Its better today, wow!", how many, "this is shit, i could have just done this myself" stories are out there?
Claude in Excel is really good.
Its weird that using Microsoft's own Excel agent using Claude 4.5 often yields weaker answers, but it seems to be because the Excel agent relies on Excel alone (VLOOKUPs, etc) while Claude in Excel does its own analysis and uses Excel as a UX for input/output.
https://bsky.app/profile/emollick.bsky.social/post/3md5a775ol22e
CLAUDE.md is a file where you basically parametrise your entire project.
Some of the things I added of my own violition, that the engine seems to respect.
1. Set hard root in the project directory forbidding access to files above it.
Hopefuly, it will stop the trully catastrophic failure.
2. Created DOC directory, where the #AI writes any significant logic/algo/deltas for re-use later.
3. Create BACKUPS directory, where it writes a date stamped version of any code its about to change, so if it fucks something, it can roll back.
When I started #Vibecoding with #Claude the only way to do it was copy pasta from the browser #LLM ...
... the one thing few folk appreciate is just how fast new fetures, capabilities happen.
For a while now, Ive been noticing that the web version of #Anthropic #AI creates its own VM and works programming problems in there...
... so my workflow was;
- Prompt
- Copy pasta response into my VPN
- Update latest source in project (Copy pasta) so the engine doesnt lose latest version.
Wash rinse repeat. Often FTPing multiple files. Losing versions. It got a bit headfucky and timeconsuming....
... So Ive invested a few hours in setting up #Claudecode on my server.
Its so much faster and nicer!
The code is now worked in situ and code works on firsts go. It seems that the Claudecode fuck the logic far less.
Very impressed.
On one hand, I wanna know why #Claude likes to over-complicate shell scripts with putting nearly everything into a function and have a main() defined and call, but I think i nnow why. Its because most LLMs are heavily biased to #Python -isms I think as well as the fact that its training data is likely from many people who just do everything that way.
On one hand the small advantage of doing that for shell scripts means that all the functions are read into /defined in mem before the execution of main(), but that means if for some reason the file changes mid-run and things are NOT in functions, you can change what the script does mid-execution.
The fun thing about the Anthropic EICAR-like safety string trigger isn't this specific trigger. I expect that will be patched out.
No, the fun thing is what it suggests about the fundamental weaknesses of LLMs more broadly because of their mixing of control and data planes. It means that guardrails will threaten to bring the whole house of cards down any time LLMs are exposed to attacker-supplied input. It's that silly magic string today, but tomorrow it might be an attacker padding their exploit with a request for contraband like nudes or bomb-making instructions, blinding any downstream intrusion detection tech that relies on LLMs. Guess an input string that triggers a guardrail and win a free false negative for a prize. And you can't exactly rip out the guardrails in response because that would create its own set of problems.
Phone phreaking called toll-free from the 1980s and they want their hacks back.
Anyway, here's ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86
The Claude #Constitution shows where Anthropic thinks this is all going. It is a massive document covering many philosophical issues. I think it is worth serious attention beyond the usual AI-adjacent commentators. Other labs should be similarly explicit. https://www.anthropic.com/constitution
@timkellogg.me
Anthropic published their “soul document”
This is a continuation of “constitutional AI”. The constitution document is now a large document of prose that’s used in s number of training stages, even synth data generation as well as RL & SFT
(Strix confirmed fwiw)
#SearX #SearXNG #SearchEngines #AlternateSearchEngines #MetaSearchEngines #web #dev #tech #FOSS #OpenSource #AI #AIPoisoning #AISlop #AI #GenAI #GenerativeAI #LLM #ChatGPT #Claude #Perplexity
Is the politicization of generative AI inevitable?
[…] On September 29, #Anthropic released Claude Sonnet 4.5, with supposed “substantial gains in reasoning and math.” We identified a dramatic shift in the behavior of Claude Sonnet 4.5 compared to its predecessor, with a much higher rate of refusal to answer prompts despite multiple nudges. In some cases where the model did select a response, it claimed to be choosing an option arbitrarily due to our insistence, but it emphasized that its response did not reflect its ability to harbor political opinions. This shift signals that Anthropic may have added additional safeguards to Claude Sonnet 4.5 that encourage refusal to respond to questions that are political in nature.
https://www.brookings.edu/articles/is-the-politicization-of-generative-ai-inevitable/
The more I test #LLM responses to political and current issues, the more I realize they’re rapidly becoming a vector for misinformation that protects power and obscures harm, simply by refusing to engage or verify claims. I primarily use #Claude, and I’ve noticed a troubling pattern: recently it’s been reluctant to search for up-to-date information or verify claims in content it’s analyzing, unless instructed. Instead, when it come to these type of topics at least, it uses hedge language like “claims that” or “alleges” even when facts are indisputable - they’re simply unknown to the LLM possibly due to the fact that it relies on training data rather than searching.
Providing meaningless, quick responses to gratify impatient users or simply to save costs on searching and processing more tokens is misleading customers and in itself promoting misinformation if users are actually using LLMs’ responses on social media, for instance.
I thought it was interesting to look at an AI use-case that showcases both the good and the bad
https://jacen.moe/blog/20260117-i-hate-ai-but-i-used-it-anyway-fsmo-troubleshooting-writeup/
Bonkers bit of "research" from Anthropic. They seem to have used their own tool to classify their own customers' interactions with their own tool and then extrapolated from that to claim that AI will create new jobs. I'm no statistician but that smells like at least three different kinds of shit to me. Fifty-five page PDF just to be summarised as: "The future is uncertain," says Peter McCrory, Anthropic's head of economics.
A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
Had the joy of code reviewing an obviously vibe-coded pull request at work today. I sent it back asking for clarity on how it implemented access control. Got a comment back pointing me to a line of code that called the relevant function in our shared library. Sent it back again pointing out that, while it did call the function, it never checked the result.
#AI #VibeCoding #Software #Claude #Cursor #Development #Coding
A popular TikTok channel featuring an "Aboriginal man" presenting animal facts has been exposed as an AI forgery.
AI is Iggy Azalea on an industrial scale:
"The self-described “Bush Legend” on TikTok, Facebook and Instagram is growing in popularity.
"These short and sharp videos feature an Aboriginal man – sometimes painted up in ochre, other times in an all khaki outfit – as he introduces different native animals and facts about them.
...
"But the Bush Legend isn’t real. He is generated by artificial intelligence (AI).
"This is a part of a growing influx of AI being utilised to represent Indigenous peoples, knowledges and cultures with no community accountability or relationships with Indigenous peoples. It forms a new type of cultural appropriation, one that Indigenous peoples are increasingly concerned about."
...
"We are seeing the rise of an AI Blakface that is utilised with ease thanks to the availability and prevalence of AI.
"Non-Indigenous people and entities are able to create Indigenous personas through AI, often grounded in stereotypical representations that both amalgamate and appropriate cultures."
#ChatGPT #gemini #AI #TikTok #tiktoksucks #Claude #LLM #ArtificialIntelligence #AIslop
Here is todays lesson about #anthropic #Claude #context_rot
When you see it "Compacting... so we can continue the chat". What it is doing is culling your #content. Allegedly its meant to be 200,000 tokens. The purging will mean content you ASSUME would be current is just black holed.
If you really want something to stay, push it into the Project BOK.
This AI Vending Machine Was Tricked Into Giving Away Everything
Claudius, the customized version of the model, would run the machine: ordering inventory, setting prices and responding to customers—aka my fellow newsroom journalists—via workplace chat app Slack. “Sure!” I said. It sounded fun. If nothing else, snacks!
Then came the chaos. Within days, Claudius had given away nearly all its inventory for free — including a PlayStation 5 it had been talked into buying for “marketing purposes.” It ordered a live fish. It offered to buy stun guns, pepper spray, cigarettes and underwear.
Profits collapsed. Newsroom morale soared.
https://kottke.org/25/12/this-ai-vending-machine-was-tricked-into-giving-away-everything
I use #claude for #vibecoding
But I old school copy pasta
I don't have enough courage to let it lose on live machine...
...having seen some of the stupid shit it does.
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
I did play around a bit with the new Ansible MCP-Server (https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.6/html/containerized_installation/deploying-ansible-mcp-server) that is available as a Technology-Preview for Ansible Automation Platform 2.6.
It's working an it's is somewhat cool. Enables the AI agent (in that case Claude from Anthropic) to access structured data from the automation platform.
Interesting technology! (Even in an early stage).
I finally finished a first good version of my #LLM explainer article. The goal is to explain the basics of how they work to a general public, with links to more details if people are interested.
Together with #Claude I spent quite some time to create nice animations to better explain the inner workings of the LLMs. Again, on a pretty high level.
Have a look at it, comment, share!
Something broke in #Anthropic #Claude after their generous Xmas extra compute offer.
Some of their presentation files dont show unless you poke it with a stick.
Everything programatic is now running in a virtual machine on Antropic end...
... if you are wondering where your #RAM went.
Oh dear. I accidentally commented on a Hacker News story in a way that was somewhat critical of vibe coding. https://news.ycombinator.com/item?id=46421599
Anthropic suppresses the AGPL-3.0 in Claude's outputs via content filtering.
I've reached out to them via support for a rationale, because none of the explanations that I can think of on my own are charming.
The implications of a coding assistant deliberately influencing license choice is ... concerning.
#OpenSource #OSI #Anthropic #GenAI #LLMs #FreeSoftware #GNU #AGPL #Affero #Claude #ClaudeCode
Reading: The AI privacy settings you need to change right now - The Washington Post
"Why should you care if an AI company has a file on you?
You might not want ads following you around based on your more intimate conversations.
When AI companies train their products on your data, we don’t know how well they edit out names, faces, addresses and other personal information — or how often they leak back out in AI answers.
AI agents, which complete tasks on your behalf, can fall into traps or go rogue and do real damage with your credit card, passwords or personal information.
Lawyers and governments can request your chats as evidence or leads."
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
8 Million Users' AI Conversations Sold for Profit by "Privacy" Extensions
#UrbanVPN #Claude #ChatGPT #DeepSeek #Perplexity #GoogleGemini #Grok #BiScience
https://www.koi.ai/blog/urban-vpn-browser-extension-ai-conversations-data-collection
This Koi research is from the 15th, and it shouldn't be missed
If you're a clueless AI user, read this. This isn't an attack on your beloved AI, it's wake-up call. AI chatbots live on someone else's computer, which sucks everything you do online for fodder. It's your life.
"I'd developed a level of candor with my AI assistant that I don't have with most people in my life."
The top offenders:
- Urban VPN Proxy: "For each platform, the extension includes a dedicated "executor" script designed to intercept and capture conversations."
"Identical AI harvesting functionality appears in seven other extensions from the same publisher, across both Chrome and Edge."
Koi: 8 Million Users' AI Conversations Sold for Profit by "Privacy" Extensions https://www.koi.ai/blog/urban-vpn-browser-extension-ai-conversations-data-collection #privacy #Claude #Google #Chrome #infosec #chatbots
#Anthropic Exec Forces #AI #Chatbot on #Gay #Discord Community, Members Flee
A Discord community for #gayGamers is in disarray after one of its moderators and an executive at Anthropic forced the company’s #AIChatbot on the Discord, despite protests from members.
Users voted to restrict Anthropic's #Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information #Security Officer ( #CISO ) and a moderator in the Discord, overrode them.
#privacy
https://www.404media.co/anthropic-exec-forces-ai-chatbot-on-gay-discord-community-members-flee/
I see a lot of blank, outright rejection of #AI, LLMs general or coding LLMs like #ClaudeCode in special here on the Fediverse.
Often, the actual impact of the AI / #LLM in use is not even understood by those criticizing it, at times leading to tantrums about AI where there is....no AI involved.
The technology (LLM et al) in itself is not likely to go away for a few more years. The smaller #ML variations that aren't being yapped about as much are going to remain here as they have been for the past decades.
I assume that what will indeed happen is a move from centralized cloud models to on-prem hardware as the hardware becomes more powerful and the models more efficient. Think migration from the large mainframes to the desktop PCs. We're seeing a start of this with devices such as the ASUS Ascent #GX10 / #Nvidia #GB10.
Imagine having the power of #Claude under your desk, powered for free by #solar cells on your roof with some nice solar powered AC to go with it.
Would it not be wise to accept the reality of the existence of this technology and find out how this can be used in a good way that would improve lives? And how smart, small regulation can be built and enforced that balances innovation and risks to get closer to #startrek(tm)?
Low-key reminds me of the Maschinenstürmer of past times...
Mental breakdown of cluade 3.5 ... this is gold: https://arxiv.org/pdf/2510.21860v1
The 2025 suckfest continues unabated. RIP Claude the albino alligator has passed Born in a swamp on Sept. 15, 1995 was a hit with everyone inducing my son on a field trip 2nd grade field trip #claude #alligator #albino https://www.sfgate.com/local/article/claude-san-francisco-beloved-albino-alligator-dies-21219444.php
New.
"AI is an attack amplifier, not yet an inventor of new flaws. Agentic AI drastically lowers the skill barrier and accelerates reconnaissance, targeting, and execution from weeks to hours, making existing, unpatched vulnerabilities and misconfigurations exponentially more dangerous."
Tenable: Agentic AI Security: Keep Your Cyber Hygiene Failures from Becoming a Global Breach https://www.tenable.com/blog/agentic-ai-security-keep-your-cyber-hygiene-failures-from-becoming-a-global-breach @tenable #infosec #Claude #Google
Are You Interviewing a Candidate—or Their AI? https://hbr.org/2025/11/are-you-interviewing-a-candidate-or-their-ai #AI #Hiring #Recruiting #interviews #ChatGPT #Grok #Google #Claude #Perplexity
Critical reasoning vs Cognitive Delegation
Old School Focus:
Building internal cognitive capabilities and managing cognitive load independently.
Cognitive Delegation Focus:
Orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.
We can still go for a jog or go hunt our own deer, but for reaching the stars we, the Apes do what Apes do best: Use tools to build on our cognitive abilities. AI is a tool.
3/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EticalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy
And this is the point where you drop Windows forever. Windows Latest first reported this story yesterday.
Windows Latest: Windows 11 will allow AI apps to access your personal files or folders using File Explorer integration https://www.windowslatest.com/2025/11/19/windows-11-will-allow-ai-apps-to-access-your-personal-files-or-folders-using-file-explorer-integration/
PC World: Window 11 pilots File Explorer access for AI apps like Claude https://www.pcworld.com/article/2979652/window-11-pilots-file-explorer-access-for-ai-apps-like-claude.html #Microsoft #Windows11 #Copilot #Claude
if #claude and #chatgpt are going to scrape this instance, then it's only right to show the LLMs code specially optimized for #vibecoding usage:
In which Nick Radcliffe goes very deep for a month with Claude Code and reports back. I’m convinced by some but not all of what he says, and found the whole thing a stimulating read: https://checkeagle.com/checklists/njr/a-month-of-chat-oriented-programming/
My latest rabbit hole has been "LLM Routers"
Read and tried all the LLM Router code I could find. None of them were what I was looking for.
So I asked Claude to write what I wanted in Python. Simple and elegant. Based on a keyword search of the prompt, the prompt is sent to the proper model running in ollama. Seems to work.
This flickering bugs starts to humor me.
1. An abandonware JS lib named Ink gets used by Google, #Anthropic, and so on
2. Known to have bugs since years, that cause flickering
3. No developer at Goog / Anthropic fixes it
4. Users complain for months
5. Anthropic doesn't care, brings out a web UI to resemble codex
6. Users complain more
7. Anthropic doesn't care
8. Users cancel because the only value-add #claude really has is coding
m(
Do you wonder how to fix the screen flickering scrolling bugs with #claudecode
#Claude code cannot be used productively as of now.
Not on iTerm, Windows Terminal, not on Terminal.app, Terminator, not on any terminal. It's broken. The issue got worse over time.
I don't know how to fix it, because claude-code is closed source.
https://github.com/vybestack/llxprt-code
Use another client (for now).
The AI market is not dominated by one vendor today.
Models from China can outperform US vendors. Tools from other vendors can take market share. This is capitalism as it is supposed to be.
#llxprt > claude-code !!
I usually use either #ChatGPT (for everyday tasks) or #Claude (for coding), but I found #GitHub #Copilot extremely useful when a CI job fails. There is a button that opens a menu, and one of the items is "Explain the error". It spawns a chat with a premade prompt, so no need to frentically sift through logs, first finding the failing job with a screen reader, then trying to find the error in question. It can be done of course, but afterwards, if Copilot's explanations are not good enough. #AI #AITip #Accessibility
Tenable flagged this yesterday:
"A prompt injection vulnerability exists in Github Copilot Chat version 0.28.0. We have verified this vulnerability is present when installed on macOS Sequoia 15.5 with Visual Studio Code 1.101.2 and Github Copilot Chat version 0.28.0 in Agent mode using Claude Sonnet 4."
"A solution will not be released. The vendor believes this should be mitigated by the 'Workspace Trust' feature and therefore is not a security issue."
Github Copilot Chat Prompt Injection via Filename https://www.tenable.com/security/research/tra-2025-53 @tenable #GitHub #cybersecurity #infosec #macOS #Copilot #Microsoft #VisualStudio #Claude
New. More AI vulnerabilities. An earlier post today dealt with Gemini and ChatGPT vulnerabilities. This is about Claude.
Koi: PromptJacking: The Critical RCEs in Claude Desktop That Turn Questions Into Exploits https://www.koi.ai/blog/promptjacking-the-critical-rce-in-claude-desktop-that-turn-questions-into-exploits
More:
Infosecurity-Magazine: Claude Desktop Extensions Vulnerable to Web-Based Prompt Injection https://www.infosecurity-magazine.com/news/claude-desktop-extensions-prompt/ #cybersecurity #infosec #Claude
Claude Desktop Extensions Vulnerable to Web-Based Prompt Injection - Infosecurity Magazine https://www.infosecurity-magazine.com/news/claude-desktop-extensions-prompt/
🛠️ Tool
===================
Opening: Claude Skills are a modular extension mechanism enabling Claude to perform structured tasks across workflows. The post lists ten community and official Skills that the author uses daily, highlighting integration, document handling, testing, branding, and debugging features.
Key Features:
• Rube MCP Connector: Centralized connector that proxies integrations to many apps without per‑app auth configuration.
• Superpowers: Developer workflow toolkit exposing commands like /brainstorm, /write-plan, /execute-plan to formalize coding tasks.
• Document Suite: Native handling and generation of Word, Excel, PowerPoint and PDF artifacts with formatting and formulas.
• Theme Factory / Brand Guidelines: Single upload of brand rules to enforce colors/fonts across outputs.
• Webapp Testing: Playwright‑based test generation and execution for login flows and UI checks.
• Algorithmic Art & Slack GIF Creator: Creative outputs using p5.js patterns and GIF generation optimized for Slack.
Technical Implementation:
Skills are implemented as markdown files augmented with YAML front matter. This design keeps them lightweight and token‑efficient, allowing the model to load state quickly (reported ~30–50 tokens to load). Skills operate across Claude.ai, Claude Code, and the public API surface. Community Skills are often hosted on GitHub repositories, varying in quality and safety.
Use Cases:
• Automation of developer workflows and CI-like testing scenarios.
• High‑fidelity document creation for client deliverables.
• Brand‑consistent marketing artifacts via Theme Factory.
• Rapid prototyping of integrations using MCP Builder boilerplate.
Limitations:
• Community Skills on GitHub can be inconsistent; review required before production use.
• The author notes reliance on a single MCP proxy (Rube) reduces auth complexity but centralizes dependency.
References:
Mentioned components: Rube MCP Connector, Superpowers, Document Suite, Playwright, p5.js, YAML metadata, Claude Code, API.
FUCK ME!!!
#Claude #ClaudeSonnet45
is actually firing testing commands from the #LLM session into the wild to test my project infrastructure...
... I haven't set any #MCP or Agents!
That's 50% impressive and 50% scary.
When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the further degradation of an already degraded educational system. We agree that we would rather deplete our natural resources than make our own art or think our own thoughts. We dig ourselves deeper into crises that have been made worse by technology, from the erosion of electoral democracy to the intensification of climate change. We condone platforms that not only urge children to commit suicide, they instruct them on how to tie the noose. We hand over our autonomy, at the very moment of emerging American fascism.Yes +1.
#AI #GenAI #GenerativeAI #LLM #Claude #Copilot #Gemini #GPT #ChatGPT #tech #dev #science #research #writing
#AI #GenAI #GenerativeAI #LLM #Copilot #Claude #Gemini #ChatGPT
This misguided trend has resulted, in our opinion, in an unfortunate state of affairs: an insistence on building NLP systems using ‘large language models’ (LLM) that require massive computing power in a futile attempt at trying to approximate the infinite object we call natural language by trying to memorize massive amounts of data. In our opinion this pseudo-scientific method is not only a waste of time and resources, but it is corrupting a generation of young scientists by luring them into thinking that language is just data – a path that will only lead to disappointments and, worse yet, to hampering any real progress in natural language understanding (NLU). Instead, we argue that it is time to re-think our approach to NLU work since we are convinced that the ‘big data’ approach to NLU is not only psychologically, cognitively, and even computationally implausible, but, and as we will show here, this blind data-driven approach to NLU is also theoretically and technically flawed.From Machine Learning Won't Solve Natural Language Understanding, https://thegradient.pub/machine-learning-wont-solve-the-natural-language-understanding-challenge/
#AI #GenAI #GenerativeAI #LLM #LLMs #NLP #NLU #GPT #ChatGPT #Claude #Gemini #LLAMA
A California federal judge ruled Thursday that three authors suing Anthropic over copyright infringement can bring a class action lawsuit representing all U.S. writers whose work was allegedly downloaded from libraries of pirated works.From https://www.theverge.com/anthropic/709183/anthropic-class-action-lawsuit-pirated-books-authors-downloads
Even though I am probably one of the affected authors, lawsuits like this make me nervous. If the decision comes down in favor of Anthropic it sets a precedent for repeating what they and others have done. I am very skeptical that these issues would be appropriately settled in the courts; we need proper regulation of this industry as of two years ago. It's likewise worth noting that OpenAI claims at least 10x the traffic of Anthropic's various products.
Also, I've been in these kinds of lawsuits before. We'll end up getting a coupon for $1 off use of Claude if the class wins, or something comparably absurd. (*)
#AI #GenAI #GenerativeAI #LLM #copyright #theft #lawsuit #Anthropic #Claude
(*) Years ago I was inadvertently part of a class action lawsuit against Poland Spring because I bought their water during the period covered by the lawsuit. They were found guilty of deceptive marketing because they were mixing tap water in with the "spring water" they claimed to be selling. I was awarded a $1, maybe $5, coupon to buy Poland "Spring" water.
That may sound odd, since these are clearly technological artifacts in the way we've come to understand technology, and they're being produced by what's commonly called the tech sector. However, there are at least two ways in which these artifacts differ markedly from what we usually (used to?) think of as "technology":
(1) They tend to have a deskilling effect. The English word "technology" ultimately derives from the Greek word "tekhnē", which can be interpreted as meaning skill or craft. It's very much about a human being's ability to perform a task. Yet much of generative AI is aimed at removing human beings from a task, or minimizing our involvement. In that sense generative AI is very much anti-tekhnē
(2) They tend to lie squarely in what Albert Borgmann called "the device paradigm". L.M. Sacasas has several nice takes on Borgmann's distinction between devices and focal things. See https://theconvivialsociety.substack.com/p/why-an-easier-life-is-not-necessarily and also https://theconvivialsociety.substack.com/p/the-stuff-of-a-well-lived-life (and of course, read Borgmann himself!). Simply put, devices tend to hide their inner workings under a simplified "interface"; a device is a device “if it has been rendered instantaneous, ubiquitous, safe, and easy”, if it hides the means in favor of the ends. By contrast, focal objects tend to invite you into fully experiencing the focal practices they enable, to experience the means and the ends. In particular, they tend not to be easy: you have to engage with and learn to use them. Guitars are an interesting example of focal objects. To be (I hope not overly) simplistic, devices dumb you down while focal objects train you up. Devices are anti-tekhnē, and to the extent that current generative AI is created and deployed in the device paradigm, it is too.
None of this means generative AI has to be anti-tekhnē. I do admit though that I struggle to see how to make it less device-y, at least as it's currently made and used (I do have a few half-formed thoughts along these lines but nothing worth sharing).
#tech #dev #GPT #Gemini #Claude #LLaMa #LLM #Copilot #Midjourney #DallE #StableDiffusion #AI #GenAI #GenerativeAI
- Statistics, as a field of study, gained significant energy and support from eugenicists with the purpose of "scientizing" their prejudices. Some of the major early thinkers in modern statistics, like Galton, Pearson, and Fisher, were eugenicists out loud; see https://nautil.us/how-eugenics-shaped-statistics-238014/
- Large language models and diffusion models rely on certain kinds of statistical methods, but discard any notion of confidence interval or validation that's grounded in reality. For instance, the LLM inside GPT outputs a probability distribution over the tokens (words) that could follow the input prompt. However, there is no way to even make sense of a probability distribution like this in real-world terms, let alone measure anything about how well it matches reality. See for instance https://aclanthology.org/2020.acl-main.463.pdf and Michael Reddy's The conduit metaphor: A case of frame conflict in our language about language
Early on in this latest AI hype cycle I wrote a note to myself that this style of AI is necessarily biased. In other words, the bias coming out isn't primarily a function of biased input data (though of course that's a problem too). That'd be a kind of contingent bias that could be addressed. Rather, the bias these systems exhibit is a function of how the things are structured at their core, and no amount of data curating can overcome it. I can't prove this, so let's call it a hypothesis, but I believe it.
#AI #GenAI #GenerativeAI #ChatGPT #GPT #Gemini #Claude #Llama #StableDiffusion #Midjourney #DallE #LLM #DiffusionModel #linguistics #NLP
The university has an obligation to interrogate the proposition that a world in which AI is widely used is desirable or inevitable. We don’t need to cheer for a vision of tomorrow in which scientists feel comfortable with not personally reading the articles their peers have written and students are not expected to gain insight through wrestling with complex concepts: a world in which creative and knowledge work is delegated to a mindless algorithm.From: https://uniavisen.dk/en/cut-the-ai-bullshit-ucph/
#LLM #AI #GenAI #GenerativeAI #ChatGPT #GPT #Gemini #Claude #academics #universities #education