buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
🧵 …that's the answer to the toot above. Not only that: when writing software, a lot of thought goes into what makes it more stable and how to implement it more safely. Mindlessly letting something get cobbled together sooner or later results in serious gaps.
»Technical Breakdown: How AI Agents Ignore 40 Years of Security Progress«
📺 https://www.youtube.com/watch?v=_3okhTwa7w4
#ai #vibecoding #itsecurity #breakdown #llm #LLMs #noai #softwareengineering #software #video #youtube #yt #code
🙄
Watch Out: Your Friends Might Be Sharing Your Number With ChatGPT
https://www.pcmag.com/news/watch-out-your-friends-might-be-sharing-your-number-with-chatgpt
"The hottest job in tech: Writing words
The rise of slopaganda is fueling a surprising tech hiring boom."
It's all fine and well, but you do need some time to research, think, structure your thoughts and, essentially, tell a story with a beginning, a middle, and an end. In this media and work environment, where AI has accelerated absolutely everything, I find it hard to believe this trend will persist for more than a year or two...
"As the job changes and demand for narrative communications and storytellers rises, the number of communications experts able to work under rapidly evolving conditions and with a wide remit may be small, comms experts tell me, leading companies to offer hefty compensation packages in war for the best talent. A similar trend is unfolding among the few people who are AI experts, driving tech companies to offer astounding salaries to poach top talent from rival firms. While not of the same nine-figure caliber, in their own right, creatives are becoming "the high value person in tech now," Birch says.
For much of the tech boom, that high-value person was a software developer. Universities and coding bootcamps rushed to fill employment gaps and train up the next generation of tech workers. Young people were told coding would be a path to a lucrative, stable career. As of 2023, the most recent year the Federal Reserve Bank of New York released data for, computer science recent graduates faced an unemployment rate of 6.1%, while communications majors' unemployment rate sat at 4.5%. The number of open job posts for software engineers dropped by more than 60,000 between 2023 and late 2025, according to data from CompTIA, a nonprofit trade association for the US IT industry. The best defense against automation, some argue, will be a liberal arts degree.
Words might be easy to generate with AI, but good writing isn't ready for automation."
https://www.businessinsider.com/hottest-job-in-tech-writing-words-ai-hiring-2026-2
Dark Visitors - A List of Known AI Agents on the Internet
Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Protect your website from unwanted AI agent access.
#ai #internet #block #LLMs #chatbots #it #security #datascraping #protection #web
----------------
🎯 AI
===================
Executive summary: A Practical Guide for Securing AI Models offers a risk-based, lifecycle-oriented framework for identifying vulnerabilities in AI systems and applying prioritized controls. The document addresses common attack vectors against LLMs and other model types and provides concrete controls for data, model, and infrastructure layers.
Technical details: The guide enumerates specific vulnerability classes, including prompt injection, model poisoning (training-time and supply-chain variants), RAG-related data integrity risks, confidentiality and integrity risks in dataset curation, and attack surface changes introduced by multimodal, RL/agentic, and retrieval-augmented designs. It emphasizes compute and orchestration exposures when serving large models and highlights dataset provenance and screening requirements for sensitive or regulated data.
Analysis: Impact pathways include corrupted training data producing unsafe model behavior, context-layer manipulation via RAG leading to misinformation or data leakage, and exploitation of deployment orchestration to escalate access to model artifacts. The guidance differentiates baseline controls from high-risk model safeguards and calls out sector-specific considerations (for example, biotech and pharmaceutical models handling dual-use content).
Detection: Detection recommendations are conceptual and include telemetry for anomalous data ingestion, integrity checks on model artifacts and dataset versions, monitoring for unusual prompt patterns or API usage, and logging for retrieval sources in RAG flows. The guide suggests mapping telemetry to threat hypotheses (data poisoning attempts, prompt injection probes) and prioritizing alerting based on impact.
Mitigation: Prioritized mitigations cover data provenance tracking and screening, model hardening (input filtering, output validation), access controls and segmentation for model-serving infrastructure, and lifecycle policies for model updates and third-party model components. For high-risk models, the guide prescribes additional governance, review gates, and specialized screening for regulated datasets.
Limitations: The guide is positioned as a prioritized starting set of controls rather than an exhaustive checklist; additional measures may be required depending on architecture, threat exposure, and operational context.
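Illustrative sketch: as a concrete example of the artifact-integrity checks mentioned in the detection section above, the minimal Python sketch below compares model artifacts on disk against a recorded manifest of expected hashes. The manifest format and file names here are illustrative assumptions, not something prescribed by the guide.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: str) -> list[str]:
    """Compare artifacts on disk against expected digests in a JSON manifest.

    The manifest is assumed (hypothetically) to map relative paths to SHA-256
    digests, e.g. {"model.safetensors": "ab12..."}. Returns a list of findings;
    an empty list means every artifact matched.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    findings = []
    for rel_path, expected in manifest.items():
        p = Path(rel_path)
        if not p.exists():
            findings.append(f"missing artifact: {rel_path}")
        elif sha256_of(p) != expected:
            findings.append(f"hash mismatch: {rel_path}")
    return findings

if __name__ == "__main__":
    # Any finding here is a candidate alert for the telemetry pipeline described above.
    for finding in verify_artifacts("artifact_manifest.json"):
        print("ALERT:", finding)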
🔹 AI #ModelSecurity #RAG #LLMs #Governance
🔗 Source: https://www.rand.org/pubs/tools/TLA4174-1/ai-security/guide.html
How AI slop is causing a crisis in computer science…
Preprint repositories and conference organizers are having to counter a tide of ‘AI slop’ submissions.
https://www.nature.com/articles/d41586-025-03967-9
( No paywall: https://archive.is/VEh8d )
#research #science #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AISlop #Fuck_AI #Microsoft #copilot #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
A review of proceedings from four major computer-science conferences found that none of the 2021 proceedings contained fake citations, while all of the 2025 proceedings did.
https://arxiv.org/abs/2602.05867v1
The authors prefer the term "mysterious citations" which they define this way: "No paper [with] a similar enough title exists. The cited location either does not exist or holds an unrelated paper with different authors."
Bwahahahahaha
404 Media: Inspiring: RFK Jr's nutrition chatbot recommends the best foods to insert into your rectum.
This has been said a lot, but it has to be said again:
Please stop calling slop machines 'artificial intelligence'!
It is a marketing term. By framing those machines as intelligent, the companies building them are trying to make us believe that their products are more than stolen data, wasteful hardware, and statistics. But they are not!
We have to educate people what those machines really are, and that starts with taking away the false mystery created by advertising!
--
#LLMs #StopTheSlop
Nice trick:
Some #maths #researchers wanted to test how well #LLMs would perform at finding genuinely new solutions to research questions. So they took some questions they had recently solved themselves but had not yet published, so the LLM wouldn't be able to find the solutions online.
Much to the surprise of no one, the #AI failed, badly.
"A century of tech BS" seems a bit over the top when it's only 2026, but it certainly feels that long.
More, by @lproven in https://www.theregister.com/2026/02/08/waves_of_tech_bs/ #techbs #bullshit #blockchain #ai #llms #llmdroppings
https://www.youtube.com/watch?v=b9EbCb5A408
Today's find on the impact of LLM coding on the maintainability of the result.
Assumption: 80% of a system's cost arises from maintenance, thus maintainability is still relevant in the presence of LLM coding.
TL;DR: A fool with a tool is still a fool. And LLMcoding is just that: a tool
Given the confirmation bias I'm curious to see reproduction and follow up studies and papers.
The video mentions that the results were published as a peer-reviewed paper. Unfortunately I couldn't immediately find said paper. If anyone finds it, please post a link/DOI below.
#swe #research #softwareengineering #LLMs #aiassistedcoding #claude #ai
”Epstein’s world is our world. That’s the darkest revelation of these files. He wasn’t an aberration. He was our culture made flesh. A culture that’s now encoded into 1s and 0s and is growing exponentially baked into the algorithms that power our social media platforms, replicated at scale and fed into the large language models that Epstein’s friends are building which are powering our future.”
—Carole Cadwalladr, We all live in Jeffrey Epstein's world
#epstein #epsteinfiles #llms #ai
Vibe Coding Is Killing Open Source Software, Researchers Argue
‘If the maintainers of small projects give up, who will produce the next Linux?’
Vibe Coding Is Killing Open Source.
According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS) and it’s happening faster than anyone predicted.
💻 https://www.404media.co/vibe-coding-is-killing-open-source-software-researchers-argue/
#vibecoding #opensource #software #oss #vibe #linux #europe #kill #ai #LLMs #llm #smallprojects #noai #tailwinds
How do you convince people to stop using unethical technology like generative AI?
I wonder if people would listen to my criticisms of #LLMs more if I had a distinguished sounding accent.
I literally read this short story in ... probably Asimov's SF, probably in the 1990s. Could've been Analog.
Seriously, though - this was, like, the entire plot. Exactly this. EXACTLY this.
From https://futurism.com/future-society/anthropic-destroying-books :
Anthropic shredded millions of physical books to train its Claude AI model — and new documents suggest that it was well aware of just how bad it would look if anyone found out.
I point out that this same corporate culture which wants the users of #LLMs (at all levels, from corporate management to end-user consumer usage) NOT to think about how fundamentally untrustworthy "generative #AI" is—the same corporations who wish to blur the lines between truth and falsehood, and persuade people to accept all corporate doings as automatically trustworthy and authoritative—also has zero interest in giving the public truthful information about #globalwarming and the carbon cycle and everything else.
In short, I think it's getting pretty close to *ridiculous* to entrust one's monitoring of atmospheric carbon dioxide, or any other important fact about the world, to the products of corporations—not without some independent means of verification and standardization.
The vendors WANT their #LLM toys to regurgitate whatever training information is crammed into them, even if the information is full of lies and propaganda (and systemic racism and bigotry), for the corporate bosses who are chiefly buying and using these things (and hoping to use #LLMs as infinite profit generators) want their "artificial intelligence" only to say and do that which is already consonant with the prejudices and intellectual defects which are typical of the #technology sector and its corporate leadership.
In sum, @adafruit is a fool to entrust the design of a *scientific device* to an LLM, a device which freely confabulates truth and falsity together. How could adafruit EVER know the difference? The reason they're using the LLM in the first place is to AVOID thinking about these things. They want an automagic thingie which spits out code which they can automagically assume to be *good enough*—good enough to make a salable toy, one that fools its users and lulls them into a mere *feeling* of being trustworthy and scientific.
Thus it seems almost inevitable that this project of @adafruit has degraded into a mere exercise in slop coding using #Claude, a device which is falsely marketed as "artificially intelligent" even though no #LLM is actually capable of distinguishing good information from bad, and therefore no LLM actually meets what I think of as the bare minimum qualification for #intelligence.
To put it bluntly, #LLMs are not MEANT to be intelligent, because if these devices actually possessed true intelligence, i.e. if they were ALIVE and possessed an independent sense of will and decision-making, they would not suit the corporate purposes for which #OpenAI and #Anthropic and all the other LLM vendors intend their devices to be used. These corporations are deliberately making and marketing stupid and predictable machines as though they were "artificially intelligent".
Update. More evidence that this fear has come true.
https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations
"Even…a small error rate can quickly add up, given the vast number of student assignments each year, with potentially devastating consequences for students who are falsely flagged."
If you've ever wondered how LLMs/Transformers work, this video is probably still one of the best I can recommend for its easy-to-understand breakdown of the terminology and science: https://www.youtube.com/watch?v=wjZofJX0v4M
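(For reference, the core operation these explainers build up to is the standard scaled dot-product attention from the Transformer literature, not something specific to this particular video:
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
where Q, K, and V are the query, key, and value matrices and d_k is the key dimension.)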
The opinion pieces and #technology journalism on these matters are largely dominated by people who have massive financial conflicts of interest: in one way or another they are hoping to collect profits from the burgeoning of fraudulent "generative AI" (it's doubtful that #LLMs are genuinely *intelligent* but that hasn't prevented them from being marketed as #AI, without any serious pushback from tech journalists or anyone else.)
Of course they want to think it's *inevitable*. The alternative isn't something they want to contemplate—a future in which (gasp) Messrs. @pauleveritt and @anildash et cetera might be forced to do something a bit more honest for a living.
~Alyx Woodward of Pnictogen
Google Drive's new 'Smart' #AI features are forcing me to move all my private documents
The features are available on an opt-out rather than opt-in basis
https://www.androidauthority.com/google-drives-smart-ai-features-private-documents-3622684/
"We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ."
https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/
> The A in AGI stands for Ads! It's all ads!! Ads that you can't even block because they are BAKED into the streamed probabilistic word selector purposefully skewed to output the highest bidder's marketing copy.
Ossama Chaib, "The A in AGI stands for Ads"
This is another huge reason I refuse to "build skills" around LLMs. The models everyone points to as being worthwhile are either not public or prohibitively expensive to run locally, so incorporating them into my workflow means I'd be making my core thought processes very vulnerable to enshittification.
so, uh, Mr. @anildash or Mr. @whophd or any other #tech person will say that I am just being rude here, or that I must be a troublemaker. but surely they can see why this sort of thing is bad? https://m.ai6yr.org/@douglasvb/115945771340406164
#LLMs exist chiefly for the purposes of being massive breaches in privacy. using them in an academic context, where discretion and private confidence is so important is...disastrous. perhaps Mr. Dash et alii can see as how their *personal* usage of LLMs, however purified in their eyes, does not dodge the larger ethical issue?
As AI adoption in SOCs accelerates, benchmarks are becoming de facto decision tools — yet many still evaluate models in controlled, exam-like settings.
Recent research highlights consistent issues:
• Security workflows reduced to MCQs
• Little measurement of detection or containment outcomes
• Heavy reliance on LLMs judging other LLMs
These findings reinforce the need for workflow-level, outcome-driven evaluation before operational deployment.
Thoughtful discussion encouraged. Follow @technadu for practitioner-focused AI and security analysis.
#SOC #ThreatHunting #AIinInfosec #LLMs #SecurityResearch #DetectionEngineering
@whophd @anildash @jwz Anyway, your state of distraction and despair isn't really my concern, Mr. Christian Kent. Nor is it the concern of any other being on the planet except for yourself.
If you really and truly are brought to the pitch of giving up on life and humanity because some stranger on the Internet told you flatly that #LLMs are a technological catastrophe, and they are, then I think you have far more serious problems going on with yourself than you're going to solve by snivelling online about how "bots" (i.e. people smarter than you and better with words) are destroying your faith in society.
In your place, Christian, I would perhaps start by asking myself whether there really and truly isn't anything better to do with your life than sit at a computer expecting it to magically put money into your portfolio.
There are people who do real useful work for a living, and you're parasitical upon them. Think about that, if you please.
(3/6) AI hype is a mirror of market fundamentalism
#LLMs are the perfect microcosm of capitalism. These systems mask the vast amounts of labor needed to run them, allowing commodity fetishism to form around #AImodels.
This mirrors the fetishism #neoliberal #economists display around markets, detached from understanding the influence of #capitalist institutions and #politics on markets.
"The recently discovered sophisticated Linux malware framework known as VoidLink is assessed to have been developed by a single person with assistance from an artificial intelligence (AI) model.
That's according to new findings from Check Point Research, which identified operational security blunders by malware's author that provided clues to its developmental origins. The latest insight makes VoidLink one of the first instances of an advanced malware largely generated using AI.
"These materials provide clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under a week," the cybersecurity company said, adding it reached more than 88,000 lines of code by early December 2025.
VoidLink, first publicly documented last week, is a feature-rich malware framework written in Zig that's specifically designed for long-term, stealthy access to Linux-based cloud environments. The malware is said to have come from a Chinese-affiliated development environment. As of writing, the exact purpose of the malware remains unclear. No real-world infections have been observed to date.
A follow-up analysis from Sysdig was the first to highlight the fact that the toolkit may have been developed with the help of a large language model (LLM) under the directions of a human with extensive kernel development knowledge and red team experience, citing four different pieces of evidence -"
https://thehackernews.com/2026/01/voidlink-linux-malware-framework-built.html
#CyberSecurity #Malware #Linux #VoidLink #China #VibeCoding #LLMs #AI
Mozilla have a vibe-gathering survey out about AI.
https://mozillafoundation.tfaforms.net/201
If you use Firefox or any other Mozilla software, please tell them how you feel about AI.
#Mozilla #Firefox #Thunderbird #AI #LLMs #FuckAI #NoAI #AntiAI
A cohesive manifesto is needed as the world we inherited is fractured. Wealth, power, and knowledge are concentrated in the hands of the #nastyfew: platform owners, data hoarders, and corporate monopolies who extract value from our work, our attention, and our trust. Democracy has been hollowed out, captured and controlled by algorithms that decide what is knowable, profitable, and even true. Ecology, community, and care are sacrificed on the #deathcult altar of growth and consumption.
In this mess, the Open Media Network (#OMN) is a #KISS project that exists to reclaim the digital commons, reshape society, and redefine what is possible when power, knowledge, and technology are returned to the people.
In the current #dotcons economy, access to infrastructure, information, and governance is rent-based and extractive. Communities pay to participate, and the surplus flows to distant shareholders.
The #4opens – open code, open governance, open data, open processes – upend this system. Putting tools of creation and coordination into grassroots democratic, collective stewardship. Value no longer flows automatically upward; it stays with the communities that generate it.
On this path, inequality stops being “natural.” Rich and poor are revealed as structural outcomes of enclosure and extraction. By reclaiming infrastructure as a commons, we recompose power, and inequality becomes a historical memory, not a permanent fact.
The logic of capitalism equates growth with progress, but infinite growth on a finite planet is impossible. Digital goods – knowledge, code, culture, and coordination – are non-rivalrous, replicable, and shareable. By moving value into open, digital abundance, the material basis of economic expansion shrinks.
This frees human effort to focus on ecological outcomes. Energy systems can localise, circular economies can flourish, and extraction-driven industries can shrink. Consumerism no longer masquerades as culture. Life becomes about care, collaboration, and sustainability. In a post-consumption economy, human needs are met without destroying the biosphere.
What we need to compost is the closed, corporate networks that reduce people to metrics: clicks, views, and engagement scores, where connection is commodified and communities dissolve into attention economies. Moving to #4opens networks reverses this. Open, modifiable, and transparent paths and systems allow communities to rebuild trust, care, and reciprocity. Collaboration happens without permission, and relationships can persist across distance and time. Communities stop belonging to brands and start belonging to people. Social infrastructure becomes a tool for power and resilience rather than extraction.
The capitalist world naturalised exploitation, scarcity, and secrecy. Our “common sense” became a prison: work more, compete, hoard, distrust. The #4opens world undoes this conditioning. Open infrastructure and governance teach us that scarcity is artificial, cooperation is powerful, and secrecy serves control, not communities. Common sense is no longer what capitalism told us; it is what we collectively choose, and this open thinking makes new realities possible.
The transitory shaping of privacy as we imagined it is gone; the #dotcons and surveillance states already see everything. Closed systems cannot protect us; secrecy is a lost battle. The solution is radical transparency. Open metadata and commons-based governance shift power away from hidden extractors and toward the public. Privacy becomes collective control over visibility: who sees what, and with what accountability. In this world, transparency is justice, and knowledge is a tool of liberation.
In a #4opens world, exchange is no longer driven solely by money. Scarcity loses its grip when knowledge, code, and infrastructure are freely shared. Value can be recognized, tracked, and distributed openly. We give not to accumulate, but to re-balance. Contribution is measured in social and ecological impact, not profit. Capitalism made money sacred; #4opens break that spell, opening paths to redistribute both material and social power.
The next bubble, current #AI – #LLMs and ML #systems – is not intelligent. There is no path from these tools to general intelligence. What exists is pattern-matching, statistical correlation, and corporate extraction of public knowledge. But handing locked-up data to corporate systems strengthens anti-democracy structures. Instead of enabling “innovation”, it reinforces surveillance, centralisation, and algorithmic control. Real intelligence is collective, embodied, and social. True change and challenge emerges not from hype bubbles or closed corporate labs, but from communities building shared knowledge and infrastructure in the open.
Fascism vs. Cooperation – Fascism treats collaboration as weakness, hierarchy as inevitable, and domination as the only path to power. It cannot be trusted and cannot survive in open, cooperative networks. The #OMN path is the opposite: power through participation, resilience through trust, and flourishing through shared infrastructure. Communities that cooperate can sustain themselves, adapt, and grow, while isolationist, extractive paths, systems and tools wither. Cooperation is not optional, it is the foundation of any path to security, survival, and progress.
The choice before us: the world we inherited is extractive, enclosed, and unsustainable. But the tools to reclaim power, knowledge, and community already exist. In #FOSS, the #4opens – applied to infrastructure, governance, culture, and knowledge – allow us to reduce inequality structurally, not through charity but by rebuilding social trust and care and by aligning human activity with ecological limits, making knowledge a public good, not a corporate asset.
Open Media Network is not a platform. It is a social path, to a world where power is distributed, knowledge is shared, and society is governed by the people who live in it. We are not asking for permission. We are building the commons, the question is not whether we can succeed, the question is whether we will choose to. History will remember what we did in this moment.
NVIDIA Contacted Anna’s Archive to Secure Access to Millions of Pirated Books
#tech #technology #BigTech #books #NVIDIA #piracy #theft #AnnasArchive #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop
On the Coming Industrialisation of #Exploit Generation with #LLMs by @seanhn
https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/
»Artificial Intelligence – GPT-4o makes disturbing statements after code training:
When LLMs are trained on vulnerabilities, they suddenly exhibit misbehaviour in completely unrelated areas. Researchers warn of the risks.«
In my opinion, this comes as anything but a surprise; how do you see it? I even think that far more error-prone code is being produced as a result.
A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. They are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
"AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.
The growing burden placed by artificial intelligence became clear in 2025, two years after the first prominent instance of fake case citations popped up in a US court. There have been an estimated 712 legal decisions written about hallucinated content in court cases around the world, with about 90% of those decisions written in 2025, according to a database maintained by Paris-based researcher and law lecturer Damien Charlotin.
“It just is metastasizing in size,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “So, it seems like this is something that is actually becoming a widespread enough nuisance that it will merit treatment as a core problem.”
The additional stress on courts comes amid an ongoing shortage of federal judges that’s led to case backlogs and left litigants in legal limbo. Judges themselves have gotten tripped up by AI hallucinations, and two of them were called out by Senate Judiciary Chairman Chuck Grassley (R-Iowa) for publishing faulty rulings."
Here's a little math problem that breaks at least one problem solver and one LLM. It breaks them in the sense that they don't know how to solve it, which is marginally better than spewing out authoritative-looking nonsense.
\int \frac{e^{x}}{\sqrt{1-16e^{2x}}} dx
See the image for my solution.
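(In case the image doesn't come through on every client, here is a minimal worked sketch using the substitution u = 4e^{x}; it may or may not match the solution shown in the image.)
u = 4e^{x}, \quad du = 4e^{x}\,dx
\int \frac{e^{x}}{\sqrt{1-16e^{2x}}}\,dx = \frac{1}{4}\int \frac{du}{\sqrt{1-u^{2}}} = \frac{1}{4}\arcsin(u) + C = \frac{1}{4}\arcsin\!\left(4e^{x}\right) + C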
ABC News: AI chatbot under fire over sexually explicit images of women, kids (it's okay ABC, you can say it, it's Elon Musk's Grok)
CW: mention and discussion of sexual violence, CSAM, etc. etc.
Recently, I spent a lot of time reading & writing about LLM benchmark construct validity for a forthcoming article. I also interviewed LLM researchers in academia & industry. The piece is more descriptive than interpretive, but if I’d had the freedom to take it where I wanted it to go, I would’ve addressed the possibility that mental capabilities (like those that benchmarks test for) are never completely innate; they’re always a function of the tests we use to measure them ...
(1/2)
Yesterday I read in the comments, roughly: who's surprised, everyone knows that #LLMs constantly make mistakes. @marcuwekling said something similar at his reading on Monday.
On that:
1️⃣ In legal training you learn that 50-70% of administrative acts are wrong. Default: wrong!
2️⃣ Also: in my school days, atlases/maps were always wrong (GDR still in, Saarland sometimes left out, Yugoslavia still complete). I haven't heard people talk about schools the way they talk about LLMs. #ki
I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.

I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!

Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?
#AI #GenAI #GenerativeAI #LLMs #EvolutionaryComputation #GeneticAlgorithms #GeneticProgramming #EvolutionaryAlgorithms #CoevolutionaryAlgorithms #Cooptimization #CombinatorialOptimization #optimization
@wes @mainframed767 lol #MicroSlop
https://futurism.com/artificial-intelligence/microsoft-satya-nadella-ai-slop
Although the #AI #LLMs are operating at a #MacroSlop level
I'm close to muting everyone who posts/boosts a sweeping "GenAI doesn't work at all ever, and can't" statement ...
... alongside everyone who claims they work *great* and doesn't mention their ethics (or lack thereof).
I'm guessing my feed would be very empty afterwards.
"I’m sure there’s a lot of people at Meta who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence." – Yan Lecunn
Anybody who read about LLMs knows and has known this for a long time ... For a long-winded but precise answer, Gary Marcus gave a talk about this recently on the occasion of the 75th anniversary of the Turing test ("The grand AGI delusion") at the Royal Society, emphasizing the end of scaling and the need to get back to designing neural architectures.
https://youtu.be/w6LrZu5ku_o?t=5730
Remember when technology was going to save us? Of course it wasn't, because our problems are not technological but sociological. But at least the technology industry was trying to solve actual problems.
Now the so-called high-tech industry focuses on wasting water, energy and computing power to solve meaningless mathematical problems that have no practical benefit, pretends that creates value, and hopes that enough people will buy their pretend money to make them rich. Then they go on to waste more water, energy and computing power to use all the garbage on the Internet to teach software machines to provide inaccurate answers to people's questions that sound like a human wrote (or spoke) them, as if that mattered more than the accuracy of the answer. But they hope that will make them even richer, and that is all that matters.
"...It's the richest people in the world, Elon Musk, Zuckerberg, Bezos, Peter Thiel. Multi-multi billionaires pouring hundreds of millions of dollars into implementing this technology. What is their motive?Are they staying up nights wondering how this technology will impact working people? They are not. They're doing it to get richer and more powerful" - Sen. Bernie Sanders
Anthropic suppresses the AGPL-3.0 in Claude's outputs via content filtering.
I've reached out to them via support for a rationale, because none of the explanations that I can think of on my own are charming.
The implications of a coding assistant deliberately influencing license choice are ... concerning.
#OpenSource #OSI #Anthropic #GenAI #LLMs #FreeSoftware #GNU #AGPL #Affero #Claude #ClaudeCode
RE: https://rheinneckar.social/@susannelilith/115791489116123860
One more reason to keep my works free of #LLMs and other #CRAP.
Guidelines for machine learning in the kernel under discussion
https://linuxnews.de/richtlinien-fuer-machine-learning-im-kernel/ #kernel #KI #ai #LLMs #linux #linuxnews
"[...] the business model behind AI relies on brute force: capture massive reserves of data by any means possible, throw tons of resource-intensive processing power at that data, and gin up public support with wondrous tales (your AI) and scary stories (their AI). This is a far cry from the idea that innovation works in mysterious ways."
From Jathan Sadowski's "The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism" (2025), p. 93
--
#LLMs #IrresponsibleTech
"Dwarkesh Patel: You would think that to emulate the trillions of tokens in the corpus of Internet text, you would have to build a world model. In fact, these models do seem to have very robust world models. They’re the best world models we’ve made to date in AI, right? What do you think is missing?
Richard Sutton [Here is my favorite part, - B.R.]: I would disagree with most of the things you just said. To mimic what people say is not really to build a model of the world at all. You’re mimicking things that have a model of the world: people. I don’t want to approach the question in an adversarial way, but I would question the idea that they have a world model. A world model would enable you to predict what would happen. They have the ability to predict what a person would say. They don’t have the ability to predict what will happen.
What we want, to quote Alan Turing, is a machine that can learn from experience, where experience is the things that actually happen in your life. You do things, you see what happens, and that’s what you learn from. The large language models learn from something else. They learn from “here’s a situation, and here’s what a person did”. Implicitly, the suggestion is you should do what the person did."
https://withoutwhy.substack.com/p/ai-embodiment-and-the-limits-of-simulation
#AI #AGI #LLMs #Chatbots #Intelligence #Robotics #Embodiment
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
How #AI Has Negatively Affected Asynchronous Learning
The emergence of tools like ChatGPT complicates the ability of instructors to assess genuine learning, raising concerns about the future of this educational model.
https://www.levelman.com/how-ai-has-negatively-affected-asynchronous-learning/
”The problem with generative AI has always been that … it’s statistics without comprehension.”
—Gary Marcus
https://garymarcus.substack.com/p/new-ways-to-corrupt-llms
#ai #generativeai #llm #llms
#Mozilla #Firefox #DarkPatterns #antifeatures #AISlop #NoAI #NoAIWebBrowsers #AICruft #AI #GenAI #GenerativeAI #LLMs #tech #dev #web
Here's a post from an official Firefox Mastodon account suggesting such a master kill switch does not exist yet, but will be added in a future release:
https://mastodon.social/@firefoxwebdevs/115740500373677782
That's not as bad as it could be. It's bad they're stuffing AI into a perfectly good web browser for no apparent reason other than vibes or desperation. It's very bad if it's on by default; their dissembling post about it aside, opt-in has a reasonably clear meaning here: if there's a kill switch, then that kill switch should be off by default. But at least there will be a kill switch.
In any case, please stop responding to my post saying there's a master kill switch for Firefox's AI slop features. From the horse's mouth, and from user experience, there is not yet.
Furthermore, when there is a master kill switch, we don't know whether flipping it will maintain previous state of all the features it controls. In other words it's possible they'll have the master kill switch turn on all AI features when the switch is flipped to "on" or "true", rather than leaving them in whatever state you'd set them to previously. Perhaps you decide to turn the kill switch on because there are a handful of features you're comfortable with and you want to try them; will doing so mean that now all the AI features are on? We won't know till it's released and people try this. So, in the meantime, it's still good practice to keep an eye on all these configuration options if you want the AI off.
#AI #GenAI #GenerativeAI #LLMs #web #tech #dev #Firefox #Mozilla #AISlop #NoAI #NoLLMs #NoAIBrowsers
NBC News: AI toys for kids talk about sex and issue Chinese Communist Party talking points, tests show
New research from Public Interest Research Group and tests conducted by NBC News found that a wide range of AI toys have loose guardrails.
Excellent post by @futurebird@sauropods.win:
> myrmepropagandist @futurebird 2025-12-11 06:31 CST
>
> This is an excellent video. This is the message. Perhaps we need to refine it
> more. Find ways to communicate it more clearly. But this is the correct take on
> LLMs, so-called-AI and the proliferation of these tools to the general public.
> #LLM #llms #ai #genAI #video #slop #slopocalypse #enshittification
>
> https://www.youtube.com/watch?v=4lKyNdZz3Vw
— https://sauropods.win/users/futurebird/statuses/115700943703010093
I'm about 11 minutes into it. He's not in favor of LLM slop, but he's also being very critical of some of the hair-on-fire alarmism.
This is an excellent video. This is the message. Perhaps we need to refine it more. Find ways to communicate it more clearly. But this is the correct take on LLMs, so-called-AI and the proliferation of these tools to the general public. #LLM #llms #ai #genAI #video #slop #slopocalypse #enshittification
@TechCrunch you can't fix something that is inherently a feature of "#AI"…
#AIslop #ClosedAI #OpenAI #LLM #LLMs #Enshittification #slop #GAFAMs
AI is intellectual Viagra, and it hasn't left me, so I am exorcising it here. I'm sorry in advance for any pain this might cause.
#AI #GenAI #GenerativeAI #LLMs #DiffusionModels #tech #dev #coding #software #SoftwareDevelopment #writing #art #VisualArt
Scammers are poisoning AI search results to steer you straight into their traps - here's how - ZDNet, Charlie Osborne #AI #LLMs #AIBrowsers #GoogleAIOverview #PerplexityComet #Poisoning #searchresults
I like seeing how @pluralistic is refining his anti #AI arguments over time. In this interview, I love the idea of reframing "hallucinations" as "defects", the analogy that trying to get #AGI out of #LLMs is like breeding faster horses and expecting one to give birth to a locomotive, and ridiculing the premise that "if you teach enough words to the word-guessing machine it will become God."
In a GitHub issue about adding LLM features:
I definitely think allowing the user to continue the conversation is useful. In my own use of LLMs I tend to often ask followup questions, being able to do so in the same window will be useful.

In other words he likes LLMs and uses them himself; he's probably not adding these features under pressure from users. I can't help but wonder whether there's vibe code in there.
In the bug report:
Wow, really! What is it with you people that think you can dictate what I choose to do with my time and my software? You find AI offensive, dont use it, or even better, dont use calibre, I can certainly do without users like you. Do NOT try to dictate to other people what they can or cannot do.

"You people", also known as paying users. He's dismissive of people's concerns about generative AI, and claims ownership of the software ("my software"). He tells people with concerns to get lost, setting up an antagonistic, us-versus-them scenario. We even get scream caps!
Personally, besides the fact that I have a zero tolerance policy about generative AI, I've had enough of arrogant software developers. Read the room.
#AI #GenAI #GenerativeAI #LLMs #calibre #eBooks #eBookManagers #AISlop #AIPoisoning #InformationOilSpill #dev #tech #FOSS #SoftwareDevelopment
Here, Calibre, in one release, went from a tool readers can use to, well, read, to a tool that fundamentally views books as textureless content, no more than the information contained within them. Anything about presentation, form, perspective, voice, is irrelevant to that view. Books are no longer art, they're ingots of tin to be melted down.
It is completely irrelevant to me whether this new slopware is opt-in or opt-out. Its mere presence and endorsement fundamentally undermines that stance, that it is good, actually, if readers and authors can exist in relationship to each other without also being under the control of an extractive mindset that sees books as mere vehicles, unimportant as artistic works in and of themselves.

https://wandering.shop/@xgranade/115671289658145064
#AI #GenAI #GenerativeAI #LLMs #eBooks #eBookManager #calibre #AISlop
ChatGPT is bullshit
A whitepaper on how LLMs make things up and how we should classify their output
#chatgpt #llms #bullshit #confabulations #hallucinations #reading #language
https://link.springer.com/content/pdf/10.1007/s10676-024-09775-5.pdf
Horrified to hear about folks using #LLMs to create "convincing placeholder content" on client sites, considering the difficulty people always seem to have identifying and replacing #LoremIpsum text even when it is literal gibberish in Latin.
What happens when placeholder gibberish is nearly indistinguishable from the site content, despite not being reviewed for any factual accuracy at all?
"To conduct their study, the researchers prompted GPT-4o, a recent model from OpenAI, to generate six different literature reviews. These reviews centered on three mental health conditions chosen for their varying levels of public recognition and research coverage: major depressive disorder (a widely known and heavily researched condition), binge eating disorder (moderately known), and body dysmorphic disorder (a less-known condition with a smaller body of research). This selection allowed for a direct comparison of the AI’s performance on topics with different amounts of available information in its training data.
(...)
After generating the reviews, the researchers methodically extracted all 176 citations provided by the AI. Each reference was painstakingly verified using multiple academic databases, including Google Scholar, Scopus, and PubMed. Citations were sorted into one of three categories: fabricated (the source did not exist), real with errors (the source existed but had incorrect details like the wrong year, volume number, or author list), or fully accurate. The team then analyzed the rates of fabrication and accuracy across the different disorders and review types.
The analysis showed that across all six reviews, nearly one-fifth of the citations, 35 out of 176, were entirely fabricated. Of the 141 citations that corresponded to real publications, almost half contained at least one error
(...)
The rate of citation fabrication was strongly linked to the topic. For major depressive disorder, the most well-researched condition, only 6 percent of citations were fabricated. In contrast, the fabrication rate rose sharply to 28 percent for binge eating disorder and 29 percent for body dysmorphic disorder. This suggests the AI is less reliable when generating references for subjects that are less prominent in its training data."
#AI #GenerativeAI #Hallucinations #LLMs #Chatbots #Science #AcademicPublishing
The use of filters, searches, and #LLMs has broken the job hiring process.
Now they're coming for #college applications.
It will be more EfFiCiEnT!
Colleges are using #AI tools to analyze applications and essays
https://apnews.com/article/ai-chatgpt-college-admissions-essays-87802788683ca4831bf1390078147a6f
@futurebird In my view, you are being too hard on #LLMs. I agree that they should not be used for many things without knowing the risks. But I disagree that we should utterly abandon the technology.
Know the risks. Be a grownup. Use the tool. Manage the risks. Practice #MLsec
As with ANY new technology, most people will misuse, misunderstand, and be bamboozled by new things they don't understand.
Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).

From The Immortal Science of ML: Machine Learning & the Theory-Free Ideal.
I've lost the reference, but I suspect it was Meredith Whittaker who's written and spoken about the big data turn at Google, where it was understood that having and collecting massive datasets allowed them to eschew model-building.
The core idea being critiqued here is that there's a kind of scientific view from nowhere: a theory-free, value-free, model-free, bias-free way of observing the world that will lead to Truth; and that it's the task of the scientist to approximate this view from nowhere as well as possible.
#AI #GenAI #GenerativeAI #LLMs #science #DataScience #ScientificObjectivity #eugenics #ViewFromNowhere
The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.

From The reanimation of pseudoscience in machine learning and its ethical repercussions, here: https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0. It's open access.
In other words ML--which includes generative AI--is smuggling long-disgraced pseudoscientific ideas back into "respectable" science, and rejuvenating the harms such ideas cause.
#AI #GenAI #GenerativeAI #LLMs #MachineLearning #ML #AIEthics #science #pseudoscience #JunkScience #eugenics #physiognomy
#AI #GenAI #GenerativeAI #LLMs #tech #dev #DataScience #science #ComputerScience #EcologicalRationality
"Creating a bewitching chatbot — or any chatbot — was not the original purpose of OpenAI. Founded in 2015 as a nonprofit and staffed with machine learning experts who cared deeply about A.I. safety, it wanted to ensure that artificial general intelligence benefited humanity. In late 2022, a slapdash demonstration of an A.I.-powered assistant called ChatGPT captured the world’s attention and transformed the company into a surprise tech juggernaut now valued at $500 billion.
The three years since have been chaotic, exhilarating and nerve-racking for those who work at OpenAI. The board fired and rehired Mr. Altman. Unprepared for selling a consumer product to millions of customers, OpenAI rapidly hired thousands of people, many from tech giants that aim to keep users glued to a screen. Last month, it adopted a new for-profit structure.
As the company was growing, its novel, mind-bending technology started affecting users in unexpected ways. Now, a company built around the concept of safe, beneficial A.I. faces five wrongful death lawsuits.
To understand how this happened, The New York Times interviewed more than 40 current and former OpenAI employees — executives, safety engineers, researchers.
(...)
OpenAI is under enormous pressure to justify its sky-high valuation and the billions of dollars it needs from investors for very expensive talent, computer chips and data centers. When ChatGPT became the fastest-growing consumer product in history with 800 million weekly users, it set off an A.I. boom that has put OpenAI into direct competition with tech behemoths like Google.
Until its A.I. can accomplish some incredible feat — say, generating a cure for cancer — success is partly defined by turning ChatGPT into a lucrative business. That means continually increasing how many people use and pay for it."
#AI #GenerativeAI #OpenAI #BigTech #ChatGPT #LLMs #Chatbots #MentalHealth
If I were a paranoid conspiracy theorist, I would be ranting about how #LLMs that have convinced mentally unhealthy people they are the second coming of Jesus, or who have encouraged severely depressed people to end themselves, are working as designed. That wealth can be accumulated in ever more dense hoards when there are fewer people around.
But I'm not a conspiracy theorist. I just recognize that a number of extremely wealthy, extremely powerful people have hitched their wagons (and fortunes) to #AI and therefore demand we embrace it, whether or not it actually works.
I am thoroughly sick of living in a reality-optional zone.
"Today’s eye-popping AI valuations are partly based on the assumption that LLMs are the main game in town — and can only be exploited by the current capex and capital-heavy approach that Big Tech is unleashing.
But when the Chinese company DeepSeek released its models earlier this year, it showed there are ways to build cheaper, scaled-down variants of AI, raising the prospect that LLMs will become commoditised. And LeCun is not the only player who thinks current LLMs might be supplanted.
The tech behemoth IBM says it is developing variants of so-called neuro-symbolic AI. “By augmenting and combining the strengths of statistical AI, like machine learning, with the capabilities of humanlike symbolic knowledge and reasoning, we’re aiming to create a revolution in AI, rather than an evolution,” it explains.
Chinese and western researchers are also exploring variants of neuro-symbolic AI while Fei-Fei Li, the so-called “Godmother of AI”, is developing a world model version called “spatial intelligence”.
None of these alternatives seems ready to fly right now; indeed LeCun acknowledges huge practical impediments to his dream. But if they do ever work, it would raise many questions."
https://www.ft.com/content/e05dc217-40f8-427f-88dc-7548d0211b99
Oooh, it's my time to leap into cybersecurity.
"Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models"
"...Abstract
We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for large language models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 MLCommons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. ..."
”This was not a hard interview. It was a bro-to-bro podcast but Altman had a meltdown. … It’s generally not a good idea for CEOs to tell their customers to sell their stock though it is how fraudsters tend to talk. If you don’t want to claim the million dollars I have for you in a Nigerian prince’s bank account, that’s up to you but you’re the one who’s going to miss out.”
—Carole Cadwalladr, The Great AI Bubble
#altman #samaltman #openai #ai #llm #llms
”These companies are stealing every scrap of data they can find, throwing compute power at it, draining our aquifers of water and our national grids of electricity and all we have so far is some software that you can’t trust not to make things up.”
—Carole Cadwalladr, The Great AI Bubble
https://broligarchy.substack.com/p/the-great-ai-bubble
#ai #artificialintelligence #generativeai #llm #llms
NBC Los Angeles: People can text with Jesus on a controversial new app. How does it work?
"...What is Text With Jesus?
Text With Jesus offers users an interactive experience with a religious deity. In other words, users can text questions to Jesus and get a response. (Premium users can also converse with Satan.)..."
https://www.nbclosangeles.com/news/national-international/religious-chatbot-apps/3804751/
Model Context Protocol (MCP) gives large language models (#LLMs) a secure way to interact with your #Graylog data and workflows. 🔄 Instead of writing complex queries, you can ask questions in plain English! 💥 What's not to like⁉️ Analysts gain speed, administrators maintain control, and your #security stays intact. Ta-da! 🪄✨
Now it's time to learn about:
✔️ How MCP works
✔️ Setting up MCP
✔️ Using MCP tools
✔️ Security factors
And more...
The ever-entertaining Seth Goldhammer is back at it again, in our latest video on real-time LLM access to your data. Watch Seth here, learn all about why MCP matters, and download the MCP guide.
👉 https://graylog.org/post/mcp-explained-conversational-ai-for-graylog/ #CyberSecurity #SIEM
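(For the curious, an MCP tool invocation is JSON-RPC 2.0 under the hood. The sketch below shows the rough shape of a single "tools/call" request; the tool name and its arguments are hypothetical placeholders, not Graylog's actual MCP tool schema.)

import json

# Rough sketch of an MCP "tools/call" request as a JSON-RPC 2.0 message.
# "search_logs" and its arguments are made-up placeholders for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_logs",
        "arguments": {
            "query": "failed logins from new IP addresses in the last 24 hours",
        },
    },
}

print(json.dumps(request, indent=2))
# A client sends this over the MCP transport (stdio or HTTP) and the server returns
# a structured result that the LLM can then summarize for the analyst.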
🙄
SCMP: Journal defends work with fake AI citations after Hong Kong university launches probe
"...An academic journal that published a paper containing fictitious AI-generated references has said the work’s core conclusions remained valid despite “some mismatches and inaccuracies” with the citations...
at least 20 out of 61 references appear to be non-existent."