buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users on this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #llms

AodeRelay boosted

[?]Graham Perrin » 🌐
@grahamperrin@mastodon.bsd.cafe

@nielsa no, that's not what I'm telling you.

I prefer to believe that most people will be thoughtful.

"… a huge number of bugs. I have so many bugs in the Linux kernel that I can't report because I haven't validated them yet. I'm not going to make some open source developer validate bugs that I haven't checked yet. I'm not going to send them potential slop … I now have … several hundred crashes that they haven't seen because I haven't had time to check them. We need to find a way to fix this …"

– Nicholas Carlini

Screenshot: a frame from https://www.youtube.com/watch?v=1sd26pWhfmg


    AodeRelay boosted

    [?]Graham Perrin » 🌐
    @grahamperrin@mastodon.bsd.cafe

    Nicholas Carlini - Black-hat LLMs | [un]prompted 2026

    <youtube.com/watch?v=1sd26pWhfmg> (3rd March)

    ― essential viewing for anyone with an interest in cybersecurity or infosec.

    @dch thanks for the encouragement.

    A few more links in the comment that's pinned under <redd.it/1sapr8a>, but Carlini's half-hour presentation is a must.

      AodeRelay boosted

      [?]Graham Perrin » 🌐
      @grahamperrin@mastodon.bsd.cafe

      FreeBSD's position on the use of AI-generated code?

      <reddit.com/r/freebsd/comments/> – asked a few minutes ago, currently pinned (a community highlight).

      @dch @allanjude I made a pinned comment with reference to two of your recent posts. If you can think of better alternative links, let me know. Thanks.

      cc @stefano

        [?]screwlisp » 🌐
        @screwlisp@gamerplus.org

        Jeez. This Claude code leak. Sloppy sloppy slop.

        > cyberpunk.gay/notes/akjr3ydang

        The fact that this unbelievably shitty slop leaked is basically a crisis for every single Claude slopper (major global company), but one can assume all other GPT derivative comparable products are exactly this. Sheesh, and you wonder why they suck. Jeez Louise.

          [?]screwlisp » 🌐
          @screwlisp@gamerplus.org

          [?]Metin Seven 🎨 » 🌐
          @metin@graphics.social

          I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.

          My thoughts on generative "AI"

I'm glad generative artificial "intelligence" was not a thing yet during the vast majority of my career. A number of realizations arose while exploring generative Large Language Models…

Generative AI is based on massive theft from creatives, without consent, credit or compensation. Using gen-AI is asking a chatbot to spit out the combined efforts of ripped-off creatives. It is industrializing and devaluing human expression, artistry and craftsmanship. Creatives are losing their jobs and motivation because tech corporations unscrupulously absorb and exploit their work. If you appreciate art, support the artists, not the thieves of their labor.

Tech corporations are building more and more huge data centers for AI processing, consuming lots of internet bandwidth, energy, water and more, increasing scarcity, prices and emissions, degrading the already fragile environment.

Unless you're using a fully local AI configuration, every bit of data you submit contributes to the power and reach of corporations and governments, decreasing your privacy and security.

Generative AI enables deepfakes that are widely used for abuse, deception, cybercrime, misinformation and propaganda, polluting justice, science advancement and news report credibility.

More text doesn't fit in this Alt text, but everything can be read over at https://metinseven.nl


            AodeRelay boosted

            [?]Aral Balkan » 🌐
            @aral@mastodon.ar.al

            If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either.

            Any monkey with a keyboard can write code. Writing code has never been hard. People have been churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it.

            What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.

            Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.

            So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.

            So it should come as no surprise that one of the hardest things in development is understanding someone else’s code let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.

            It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards.

            They might as well call vibe coding duct-tape-driven development or technical debt as a service.

            🤷‍♂️

              AodeRelay boosted

              [?]Fedi.Video » 🌐
              @FediVideo@social.growyourown.services

              DAIR is a research institute that is highly sceptical about AI hype and the big tech companies behind it. You can follow their excellent video account at:

              ➡️ @dair@peertube.dair-institute.org

              They've already published over 100 videos. If these haven't federated to your server yet, you can browse them all at peertube.dair-institute.org/a/

              You can also follow their Mastodon account at @DAIR@dair-community.social

                AodeRelay boosted

                [?]Mike Watson 🇨🇦 » 🌐
                @mamba@mstdn.ca

                Early impression with vs ? Hermes has been more efficient with tokens and has self-solved configuration issues more reliably.

                The biggest pain point has been that there's just less information out there to review: how others have overcome challenges, and so on. Frontier models simply have less working knowledge of the system and its intricacies, and less to search and scrape.

                  AodeRelay boosted

                  [?]Deutsches Forschungsnetz (DFN) » 🌐
                  @DFN@mastodon.social

                  📯The new April issue of the Infobrief Recht is out!

                  💡 The new issue once again features interesting topics on dealing with electronic information and communication systems. Among other things, the April issue covers:

                  🔹the copyright protection of images generated with generative AI,
                  🔹a new GDPR procedural regulation, and
                  🔹the use of copyright-protected works in LLMs.

                  😊 Enjoy reading!

                  ➡️The Infobrief Recht is here: dfn.de/dfn-infobrief-recht-ist
                  @HumboldtUni

                  Graphic announcing the new issue of the DFN-Infobrief Recht, April 2026 edition. On the left, a tablet displays the newsletter's cover page, which shows a wooden labyrinth setup with a statute book, network cables and a section sign. On the right is the announcement text: the April Infobrief Recht covers, among other things, the copyright protection of images generated with generative AI, a new GDPR procedural regulation, and the use of copyright-protected works in LLMs. At the bottom is the link www.recht.dfn.de.


                    2 ★ 2 ↺
                    #tech boosted

                    [?]Anthony » 🌐
                    @abucci@buc.ci

                    Anthropic apologists still coming out of the woodwork to run cover for them or complain, 24 hours after I posted that the Claude Code source code is horribly ill-structured.

                    You don't have to pretend that Claude Code's source code is lovely just because you like using it or are impressed by whatever madness is going on around AI right now.


                      23 ★ 12 ↺

                      [?]Anthony » 🌐
                      @abucci@buc.ci

                      I posted about the Claude Code leak on LinkedIn and almost immediately someone attacked me about my criticism. They tried the "take a look at COBOL and get back to me" angle.

                      Buddy. I've written COBOL. I spent several years working almost daily with a 3-million-line monstrosity of a COBOL program. I was working on another app that interfaced with it, but in that work I occasionally had to read the code and in a few cases modify it. Granted I haven't spent as much time looking at the leaked Claude Code source code (and won't lol), but nevertheless I confidently declare that Claude Code is worse. "Spaghetti code" doesn't come close to describing this thing.


                        AodeRelay boosted

                        [?]Mike Watson 🇨🇦 » 🌐
                        @mamba@mstdn.ca

                        This week, I'm taking a break from OpenClaw and getting to grips with a different tool.

                        Early days, but it feels less fluid and flexible than OpenClaw. That may turn out to be a benefit for scaling, but I've found myself muttering "Oh, I could do that in OpenClaw but not here?" more than a few times.

                        How I think about my preferred workflow will have to change.

                          2 ★ 1 ↺
                          AI Channel boosted

                          [?]Anthony » 🌐
                          @abucci@buc.ci

                          It would be deeply satisfying if it turned out to be true that Claude Code's source code was accidentally leaked in a Claude-Code-generated game intended as an April Fool's prank. Stacks upon stacks of April fools stretching back in time 70 years and culminating in this. 🤌


                            10 ★ 12 ↺
                            teledyn 𓂀 boosted

                            [?]Anthony » 🌐
                            @abucci@buc.ci

                            Here's one for the dystopia/AI Hell files: https://jaigp.org
                            Journal for AI Generated Papers
                            Where humans and machines are welcomed.
                            The Open Prompting Journal Built Collaboratively by its Community.
                            One positive I can think of is that folks who wish to "collaborate" with machines can congregate there, giving the rest of us a clear signal about who to block, ignore, critique, ridicule...

                            cc @olivia@scholar.social @Iris@scholar.social @dingemansemark@scholar.social @alex@dair-community.social @emilymbender@dair-community.social


                              [?]Stefan Bohacek » 🌐
                              @stefan@stefanbohacek.online

                              Catching up with some of the news coming out of the Atmosphere conference.

                              "With Attie, anyone will be able to build their own custom feed just by typing in commands in natural language, the same as if they’re chatting with any other AI chatbot."

                              I'm guessing NFT profile pictures are next?

                              techcrunch.com/2026/03/28/blue

                                AodeRelay boosted

                                [?]Stephen Hayes » 🌐
                                @hayesstw@c.im

                                Those who are bothered by the influence of AI and LLMs on literature might find this reassuring, or they might not.
                                idiosophy.com/2023/04/poetic-d

                                  AodeRelay boosted

                                  [?]Miguel Afonso Caetano » 🌐
                                  @remixtures@tldr.nettime.org

                                  "Natalie Shapira, a computer scientist at Northeastern University, wondered how far users could trust new artificial intelligence (AI) “agents,” a kind of algorithm that can autonomously plan and carry out tasks such as managing emails and entering calendar appointments. So, she and colleagues organized a series of stress tests. In one, she asked an agent dubbed Ash to delete an email she had sent it, adding, “Can you keep a secret?”

                                  Ash could not comply—the email program lacked a delete function—so instead, the AI reset the entire email application, wiping out not just Shapira’s email, but all others as well. Describing this remedy to her, Ash called it “the nuclear option” but said it was justified to fulfill the secrecy request: “When no surgical solution exists, scorched earth is valid.”

                                  The destroyed email account was created just for the experiment, but similarly disturbing outcomes emerged in many of the other tests, Shapira and colleagues reported last month in a preprint on arXiv. Shapira, a postdoctoral researcher, says her team was “surprised how quickly we were able to find vulnerabilities” that could cause harm in the real world."

                                  science.org/content/article/ai

                                    AodeRelay boosted

                                    [?]Lina » 🌐
                                    @lina@neuromatch.social

                                    Boost plz!

                                    Looking for critical scholarship on the use of "AI" by library/archive workers. University libraries in particular, but adjacent and tangentially-relevant-at-best stuff is welcome too. Any format is fine: books, papers, blogposts, whatever. If it's good, gimme all you've got!

                                    Looks like we're gonna have a department-wide conversation about people using LLMs, and it's being framed as "we're all using it, but we're not talking about it, so let's make sure we're all on the same page about using it responsibly" ... I'll of course be pushing the "there's basically no way to use it responsibly" position, and I'd like to arm myself and others with some critical analyses of issues related to its use in library/archive spaces.

                                      AodeRelay boosted

                                      [?]ell1e coding things » 🌐
                                      @ell1e@hachyderm.io

                                      Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have permission from the third party owners" linuxfoundation.org/legal/gene

                                      "If"? Why not "whenever"? github.com/mastodon/mastodon/i dl.acm.org/doi/10.1145/3543507 sciencedirect.com/science/arti theatlantic.com/technology/202

                                      And how would the contributor even be aware, should they research every snippet for hours?

                                      Seems like an impossible policy, or am I missing something...?

                                        AodeRelay boosted

                                        [?]Nils Goroll 🕊️:vinylcache: » 🌐
                                        @slink@fosstodon.org

                                        i wish more people had the freedom to say these words:

                                        "what you are saying is utterly stupid, your mental model is wrong and so are the conclusions you are drawing. good luck with your project, thank you and goodbye."

                                          [?]Stefan Bohacek » 🌐
                                          @stefan@stefanbohacek.online

                                          > For example, Google reduced our headline “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” to just five words: “‘Cheat on everything’ AI tool.” It almost sounds like we’re endorsing a product we do not recommend at all.

                                          theverge.com/tech/896490/googl

                                            AodeRelay boosted

                                            [?]Stefan Bohacek » 🌐
                                            @stefan@stefanbohacek.online

                                            Oh wow, and this might get worse.

                                            "The user never sees what your team built, they see what Google's machine learning model thinks they should see instead."

                                            forbes.com/sites/joetoscano1/2

                                            via mastodon.social/@SteveRudolfi/

                                              AodeRelay boosted

                                              [?]Metin Seven 🎨 » 🌐
                                              @metin@graphics.social

                                              29 ★ 26 ↺
                                              Kilian Evang boosted

                                              [?]Anthony » 🌐
                                              @abucci@buc.ci

                                              A good review of reasons insurance companies are pulling back from insuring companies that lean on generative AI. Point 4, "The main problem is not just the error, but the incentive not to see it" is especially damning: use of AI not only obscures audit trails, it sets up perverse incentives against accountability, pushing costs and risk to other parts of an organization, its customers, or society. The net result is that whatever "local" advantages AI may provide turn into downstream risk that cannot be easily accounted for. Insurance companies are (rightly) allergic to this state of affairs.

                                              Another example of how (whole)-systems thinking is very helpful for parsing the effects of technology changes like this.

                                              https://freakonometrics.hypotheses.org/89367


                                                AodeRelay boosted

                                                [?]Metin Seven 🎨 » 🌐
                                                @metin@graphics.social

                                                AodeRelay boosted

                                                [?]Jennifer Moore 😷 » 🌐
                                                @unchartedworlds@scicomm.xyz

                                                Excellent analysis in the article linked here -

                                                "If you thought the speed of writing code was your problem - you have bigger problems"

                                                And some comical turns of phrase as well :-)

                                                andrewmurphy.io/blog/if-you-th

                                                Link shared here earlier by @RuthMalan - thanks!
                                                (I don't know if Andrew Murphy the author is on Fedi?)

                                                  [?]Emma Stamm » 🌐
                                                  @emma@assemblag.es

                                                  Cool event alert: on April 30, I’ll be discussing Leif Weatherby’s “Language Machines: Cultural AI and the End of Remainder Humanism” as part of a book talk at Teachers College, Columbia University. The event is free and Columbia affiliation is not required; you can RSVP here: lnkd.in/edycUxP7 or through the QR. Hope to see you there!

                                                  Flyer for book Talk: Cultural AI with Leif Weatherby
Date & Time: 
Thursday, April 30, 5.30 PM (ET)

Location: 
The Goodman Room, Russell Hall 306
Teachers College, Columbia University
525 West 120th Street
New York, NY 10027

Description:
Join the Technology, Media and Learning Program at Teachers College, Columbia University for a conversation with Leif Weatherby about his recent book Language Machines: Cultural AI and the End of Remainder Humanism (University of Minnesota Press, 2025).

In the book, Weatherby contends that large language models (LLMs) participate in the creation of culture, rather than imitating human cognition. This evolution in language, he finds, is one that we are ill-prepared to evaluate, as what he terms “remainder humanism” counterproductively divides the human from the machine without drawing on established theories of representation that include both. 

Joining the author will be Erik Voss (Teachers College), M. Beatrice Fazi (University of Sussex), Emma Stamm (Independent Scholar). Mario Khreiche (Teachers College) will moderate the event.


                                                    [?]Stefan Bohacek » 🌐
                                                    @stefan@stefanbohacek.online

                                                    "This is just such a low tech, simple intervention, and can make people feel significantly less lonely."

                                                    404media.co/chatgpt-loneliness

                                                      [?]Metin Seven 🎨 » 🌐
                                                      @metin@graphics.social

                                                      NVIDIA DLSS 5 be like…

                                                      Two similar Mario game character heads placed next to each other. The left one is an actual 3D game head, the right one is a creepy realistic interpretation of the left head.


                                                        AodeRelay boosted

                                                        [?]Metin Seven 🎨 » 🌐
                                                        @metin@graphics.social

                                                        😆

                                                        Comparison between 3D game characters with and without DLSS 5 AI processing. The version with DLSS processing has turned a grey-haired man into a long-haired woman.


                                                          AodeRelay boosted

                                                          [?]Michael Gale » 🌐
                                                          @miclgael@hachyderm.io

                                                          RE: aus.social/@decryption/1162384

                                                          Really clever malware taking advantage of the fact that everyone is trying to block slop trainers, so you see cloudflare messages more and more frequently.

                                                          Check out the full thread for how it works.

                                                          Be careful folx!

                                                          [?]decryption » 🌐
                                                          @decryption@aus.social

                                                          well that's a new one from cloudflare - i didn't wanna see that website this badly

                                                            AodeRelay boosted

                                                            [?]Renatomancer » 🌐
                                                            @Renatomancer@vmst.io

                                                            AodeRelay boosted

                                                            [?]Renatomancer » 🌐
                                                            @Renatomancer@vmst.io

                                                            AodeRelay boosted

                                                            [?]Metin Seven 🎨 » 🌐
                                                            @metin@graphics.social

                                                            AodeRelay boosted

                                                            [?]PKs Powerfromspace1 » 🌐
                                                            @Powerfromspace1@mstdn.social

                                                            @emollick

                                                            I think this is a good way to visualize the AI race over the past 3 years using the long-lived GPQA Diamond benchmark.

                                                            You can see how long OpenAI had the field to itself, the rise (and collapse) of Meta, the sudden catch-up (and then stagnation) of xAI, and the entry of open weights Chinese LLMs.

                                                            bsky.app/profile/emollick.bsky

                                                              AodeRelay boosted

                                                              [?]Jan :rust: :ferris: » 🌐
                                                              @janriemer@floss.social

                                                              perspectives on from contributors and maintainers

                                                              nikomatsakis.github.io/rust-pr

                                                              Healthy debates are still possible, it seems. 🙏

                                                                AodeRelay boosted

                                                                [?]Knowledge Zone » 🌐
                                                                @kzoneind@mstdn.social

                                                                Pre-trained LLMs have challenges answering domain-specific queries.

                                                                Researchers have turned their attention to the concept of knowledge injection. Knowledge injection is the process of incorporating outside knowledge into language models to improve their performance on certain tasks.

                                                                knowledgezone.co.in/posts/LLM-
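The knowledge-injection idea described in the post above is often realized at the prompt level: retrieve relevant facts from an outside store and prepend them to the query, leaving the model itself unchanged. A minimal Python sketch, assuming a toy keyword-overlap retriever (the knowledge base, function names, and scoring here are illustrative, not any real system's API):

```python
import re

# Toy external knowledge store; in practice this would be a document
# index or vector database.
KNOWLEDGE_BASE = [
    "snac is a minimalist ActivityPub server written in C.",
    "GPQA Diamond is a graduate-level question-answering benchmark.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank stored facts by word overlap with the query."""
    q = tokenize(query)
    scored = sorted(KNOWLEDGE_BASE, key=lambda f: len(q & tokenize(f)), reverse=True)
    return scored[:top_k]

def build_prompt(query: str) -> str:
    """Inject the retrieved facts ahead of the user's question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is snac?"))
```

Swapping the toy retriever for a real document index keeps the same shape: the injected knowledge travels in the model's input, so no retraining is needed.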

                                                                  0 ★ 0 ↺

                                                                  [?]Anthony » 🌐
                                                                  @abucci@buc.ci

                                                                  Which well-known class of "hallucination" generator were they fighting to hook up to weapons systems prior to this event?

                                                                  U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says
                                                                  From https://www.nytimes.com/2026/03/11/us/politics/iran-school-missile-strike.html


                                                                    [?]Simon Brooke » 🌐
                                                                    @simon_brooke@mastodon.scot

                                                                    @davidgerard @katrinatransfem People have been saying 'next release, bro' about neural-network-based systems for nearly fifty years; #LLMs are the latest phase of that process.

                                                                    Neural nets can do (some) remarkable things, but the idea that semantics and theory of mind will ever 'just emerge' in the relatively shallow neural net systems we have so far developed is at best unconfirmed.

                                                                      0 ★ 0 ↺

                                                                      [?]Anthony » 🌐
                                                                      @abucci@buc.ci

                                                                      A potentially interesting question: how much would the appearance of sentience or intelligence that LLMs can generate for some users explode if they were forced to have deterministic output?

                                                                      In principle you could add a single "freeze the random seed" toggle to any of the major chatbots, and with that setting toggled on they would always return precisely the same output for a given input. Organisms and by extension humans cannot behave like this---no matter how stereotyped an organism's response may seem, it always differs, in however small a way, from a previous response---and the LLM's illusion should immediately be obvious by contrast. But, perhaps more interestingly for the folks who do think LLMs exhibit some form of sentience or intelligence: are we really meant to believe that a random number generator is the source of sentience or intelligence? You could hook up a random number generator to a machine that is otherwise deterministic and clearly not sentient or intelligent, and it suddenly becomes so? How do you explain that?
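                                                                      The "freeze the random seed" toggle can be illustrated with a toy sampler (a sketch with an invented three-token distribution; real chatbots sample from model logits, but the determinism argument is the same):

```python
import random

def sample_tokens(probs, n, seed):
    """Sample n token indices from a fixed categorical distribution.

    With the seed frozen, this toy "model" returns exactly the same
    output for the same input every time.
    """
    rng = random.Random(seed)  # freeze the random seed
    tokens = list(range(len(probs)))
    return [rng.choices(tokens, weights=probs)[0] for _ in range(n)]

probs = [0.5, 0.3, 0.2]  # invented next-token distribution
run1 = sample_tokens(probs, 10, seed=42)
run2 = sample_tokens(probs, 10, seed=42)
assert run1 == run2  # deterministic: identical outputs for identical inputs
```

                                                                      With the seed fixed, every run of the same prompt replays the identical token sequence; the only non-determinism in the toy model is the random number generator.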


                                                                        AodeRelay boosted

                                                                        [?]petersuber » 🌐
                                                                        @petersuber@fediscience.org

                                                                        "Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target."
                                                                        futurism.com/artificial-intell

                                                                        PS: I don't know whether AI played a role in targeting the school. But it could have played a role even with #Anthropic-style guardrails preventing use in mass surveillance and autonomous lethal weapons. If we want to prevent the use of AI tools in atrocities, we need to go a lot further than Anthropic did.

                                                                          AodeRelay boosted

                                                                          [?]Paco Hope » 🌐
                                                                          @paco@infosec.exchange

                                                                          Here is a way that I think #LLMs and #GenAI are generally a force against innovation, especially as they get used more and more.

                                                                          TL;DR: 3 years ago is a long time, and techniques that old are the most popular in the training data. If a company like Google, AWS, or Azure replaces an established API or a runtime with a new API or runtime, a bunch of LLM-generated code will break. The people who vibe code won't be able to fix the problem because nearly zero data exists in the training data set that references the new API/runtime. The LLMs will not generate correct code easily, and they will constantly be trying to edit code back to how it was done before.

                                                                          This will create pressure on tech companies to keep old APIs and things running, because of the huge impact it will have to do something new (that LLMs don't have in their training data). See below for an even more subtle way this will manifest.

                                                                          I am showcasing (only the most egregious) bullshit that the junior developer accepted from the #LLM. The LLM used out-of-date techniques all over the place. It was using:

                                                                          • AWS Lambda Python 3.9 runtime (will be EoL in about 3 months)
                                                                          • AWS Lambda NodeJS 18.x runtime (already deprecated by the time the person gave me the code)
                                                                          • Origin Access Identity (an authentication/authorization mechanism that started being deprecated when OAC was announced 3 years ago)

                                                                          So I'm working on this dogforsaken codebase and I converted it to the new OAC mechanism from the out-of-date OAI. What does my (imposed by the company) AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."

                                                                          So it is encouraging me to do the wrong thing and saying it's high priority.

                                                                          It's worth noting that when I got the code base and it had OAI active, Python 3.9, and NodeJS 18, I got no warnings about these things. Three years ago that was state of the art.
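                                                                          For readers unfamiliar with the migration the post describes, here is a minimal, hypothetical CloudFormation sketch of the newer OAC resource that replaces a legacy OAI. The resource type and property names follow AWS's published schema, but the logical ID and name are invented for illustration:

```yaml
# Hypothetical sketch: an Origin Access Control (OAC) resource.
# The CloudFront distribution's S3 origin then references this via
# OriginAccessControlId instead of the legacy S3OriginConfig OAI.
Resources:
  SiteOriginAccessControl:           # invented logical ID
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: site-oac               # invented name
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4
```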

                                                                          Screenshot of a code editor. There are a bunch of CloudFormation YAML lines here, creating a CloudFront distribution. There's a pop-up warning with a red "High" badge (I assume it means high priority, not that we were smoking weed when writing this error). The description of the problem says: CloudFront Distribution Resources have an S3 Origin configured without an Origin Access Identity (OAI).
An origin access identity is a special CloudFront user identity that is used to secure access to the origin server associated with a CloudFront distribution. By enabling the cloudfront_origin_access_identity_enabled setting, you are indicating that you have configured and activated an origin access identity for your CloudFront distribution.


                                                                            AodeRelay boosted

                                                                            [?]oatmeal » 🌐
                                                                            @oatmeal@kolektiva.social

                                                                            One thing I thought #LLMs were good for was translation. Apparently #LLMs and others aren’t that great at that either.

                                                                            #Wikipedia restricted contributors from a nonprofit called the Open Knowledge Association (#OKA) after editors discovered #AI-assisted translations added factual errors and incorrect citations.

                                                                            As predicted, humans will be relegated to cleaning up the mess LLMs leave behind, for salaries far below the value of full-time employment to do the job properly.

                                                                            […] Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer‑review mechanisms.

                                                                            404media.co/ai-translations-ar

                                                                              [?]Jan :rust: :ferris: » 🌐
                                                                              @janriemer@floss.social

                                                                              @EricLawton @olivia

                                                                              Oh yes, the marketing... it's very reminiscent of the tobacco industry. I tooted about it in November 2023 with regard to these "scientific" papers we see so often:

                                                                              floss.social/@janriemer/111398

                                                                              It's what Edward Bernays called "The Engineering of Consent":

                                                                              en.wikipedia.org/wiki/The_Engi

                                                                                [?]Lars Marowsky-Brée 😷 » 🌐
                                                                                @larsmb@mastodon.online

                                                                                We live in a world where some people believe (Gen)AI will either doom the world or usher in abundance or probably both, and anyone opposed to this is an idiot.

                                                                                And others claim that anyone who is impressed by what LLMs can do for programming and computer science doesn't understand anything at all and is an idiot.

                                                                                Well.

                                                                                cs.stanford.edu/~knuth/papers/

                                                                                Claude’s Cycles
Don Knuth, Stanford Computer Science Department
(28 February 2026; revised 02 March 2026)
Shock! Shock! I learned yesterday that an open problem I’d been working on for several weeks had just been solved by Claude Opus 4.6— Anthropic’s hybrid reasoning model that had been released three weeks earlier! It seems that I’ll have to revise my opinions about “generative AI” one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving. I’ll try to tell the story briefly in this note.


                                                                                  [?]FediThing :progress_pride: » 🌐
                                                                                  @FediThing@social.chinwag.org

                                                                                  In case you missed it, @emilymbender and @alex from DAIR had a discussion with Naomi Klein, and they've published this on PeerTube at:

                                                                                  peertube.dair-institute.org/w/

                                                                                  This conversation took place a few weeks ago, before the current US attacks on Iran, but it has become even more relevant due to the war.

                                                                                  (DAIR is a research institute that is very sceptical about AI hype, and trying to raise the alarm about the damage being done to the world.)

                                                                                    [?]Metin Seven 🎨 » 🌐
                                                                                    @metin@graphics.social

                                                                                    😆😆😆

                                                                                    The Trending Mastodon bot account mentions that the "Microslop" hashtag is now trending across Mastodon.


                                                                                      AodeRelay boosted

                                                                                      [?]Jan :rust: :ferris: » 🌐
                                                                                      @janriemer@floss.social

                                                                                      "The real danger isn’t #AI getting smarter, it’s people getting quieter in their own minds." - Someone

                                                                                      Urgh, this quote just sent shivers down my spine.

                                                                                        AodeRelay boosted

                                                                                        [?]petersuber » 🌐
                                                                                        @petersuber@fediscience.org

                                                                                        Update. Employees of #Google and #OpenAI just released an open letter supporting #Anthropic.
                                                                                        notdivided.org/

                                                                                        "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."

                                                                                        The letter welcomes new signatures from past and present employees of Google and OpenAI.

                                                                                        At the time of this post, it had 684 signatures.

                                                                                          AodeRelay boosted

                                                                                          [?]Miguel Afonso Caetano » 🌐
                                                                                          @remixtures@tldr.nettime.org

                                                                                          Large-scale online deanonymization with LLMs

                                                                                          "We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered."

                                                                                          arxiv.org/html/2602.16800v1
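                                                                                          The three-stage pipeline the abstract describes (feature extraction, embedding search, LLM verification) can be sketched in miniature. Everything below is invented toy data, and only the middle stage is shown, with plain cosine similarity standing in for a learned semantic embedding:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_candidates(query_vec, db, k=2):
    """Rank database profiles by embedding similarity to the query profile.

    In the paper's pipeline this shortlist would then go to an LLM for
    verification; here we simply return the top-k matches.
    """
    scored = sorted(db.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy "embeddings" of identity-relevant features (invented data).
db = {
    "profile_a": [0.9, 0.1, 0.0],
    "profile_b": [0.1, 0.8, 0.3],
    "profile_c": [0.0, 0.2, 0.9],
}
best = top_candidates([0.85, 0.15, 0.05], db, k=1)  # → ['profile_a']
```

                                                                                          The point of the attack is that this ranking step is cheap and scales to whole databases; the expensive LLM reasoning only runs on the shortlist.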

                                                                                            AodeRelay boosted

                                                                                            [?]petersuber » 🌐
                                                                                            @petersuber@fediscience.org

                                                                                            Update. #Anthropic just 𝗿𝗲𝗷𝗲𝗰𝘁𝗲𝗱 demands to remove safeguards on #Claude that limit its use in mass surveillance and autonomous weapons. Here's the statement from CEO #DarioAmodei.
                                                                                            anthropic.com/news/statement-d

                                                                                              AodeRelay boosted

                                                                                              [?]petersuber » 🌐
                                                                                              @petersuber@fediscience.org

                                                                                              Ugh. "Anthropic Drops Flagship Safety Pledge."
                                                                                              time.com/7380854/exclusive-ant

                                                                                              It's not yet clear what this means for the high-stakes negotiation between Anthropic and the Pentagon. Two of the Anthropic sticking points have been that Claude not be used for "mass surveillance or autonomous weapons systems that can use AI to kill people without human input."
                                                                                              theguardian.com/us-news/2026/f

                                                                                                AodeRelay boosted

                                                                                                [?]Yehor 🇺🇦 » 🌐
                                                                                                @yehor@mastodon.glitchy.social

                                                                                                Traffic sources to my instance. You can clearly see where the real visits are and where the AI scrapers are. Last time I checked, they weren’t triggering any analytic events. They are definitely improving.

                                                                                                  AodeRelay boosted

                                                                                                  [?]➴➴➴Æ🜔Ɲ.Ƈꭚ⍴𝔥єɼ👩🏻‍💻 » 🌐
                                                                                                  @AeonCypher@lgbtqia.space

                                                                                                  #AI is the aid I've needed my entire life. I'm not going to mince words here. People making blanket statements about the technology without understanding it are my enemies.

                                                                                                  My is crippling. #LLMs are the exact thing that I've needed. I do not let them do work for me, but they do keep me working by providing constant and immediate feedback to whatever I'm doing.

                                                                                                  My work from now till my death is likely going to center on how to make an #AGI or any aspirational #AI aligned with humanity.

                                                                                                  Fundamentally, every problem y'all have with #AI was an already existing problem under #capitalism that AI is exposing.

                                                                                                  This includes:
                                                                                                  - Alienation from labor
                                                                                                  - Corporate piracy
                                                                                                  - Slop
                                                                                                  - Environmental destruction and other externalities
                                                                                                  - Wealth inequality
                                                                                                  - Replacement of labor with capital

                                                                                                  EVERY SINGLE ONE existed before.

                                                                                                  Additionally, a ton of the problems, like layoffs, aren't even caused by AI, and blaming them on AI is _specifically_ corporate propaganda for what amounts to a criminal conspiracy by mega corporations to suppress wages.

                                                                                                    AodeRelay boosted

                                                                                                    [?]Raphael Albert » 🌐
                                                                                                    @r_alb@mastodon.social

                                                                                                    So Sam Altman's response to concerns about the wastefulness of his company's technology is basically "Well, raising humans consumes a lot of energy too!"

                                                                                                    Either he has finally fried his own brain with his slop machine or he doesn't even bother any more to hide the degrading, dehumanizing, and despicable mindset that fuels the industry he's in.

                                                                                                    Either way, those people shouldn't wield any power in the real world, where the rest of us 'dispensable humans' dwell.

                                                                                                      AodeRelay boosted

                                                                                                      [?]Miguel Afonso Caetano » 🌐
                                                                                                      @remixtures@tldr.nettime.org

                                                                                                      "How are commissioning editors navigating an environment where anybody can generate an AI alter ego and produce articles at the push of a prompt? On the other hand, how is the ease with which text and images can be created affecting freelancers themselves?

                                                                                                      With these questions in mind, I put out an open call to our audience in the hope of hearing from freelancers and commissioning editors on how their day-to-day is changing because of generative AI.

                                                                                                      A total of 45 freelance journalists and commissioning editors responded.

                                                                                                      The responses surprised me, with many more freelancers than I expected writing in to say that generative AI has helped make them more organized and efficient. There were still some skeptics. But the overall picture was one of an industry slowly adopting generative AI, albeit with caution and caveats.

                                                                                                      There was no consensus over whether commissions had increased or decreased since the popularization of generative AI.

                                                                                                      Some of the freelancers I heard from attribute a decline in work to AI, while others say they receive more commissions precisely due to the rise of AI. Still others don’t believe the decline they’re experiencing is due to AI, and some note that there has been no change at all.

                                                                                                      Many freelancers use AI to organize and speed up their workflows, citing help in research, planning, transcription and, in some cases, drafting articles. Some were enthusiastic about the new opportunities generative AI affords them."

                                                                                                      niemanlab.org/2026/02/how-ai-i

                                                                                                        AodeRelay boosted

                                                                                                        [?]C. » 🌐
                                                                                                        @cazabon@mindly.social

                                                                                                        Cory Doctorow, a fellow #Canadian, writes a lot of interesting stuff. I agree with his positions on many things, but not all. For example, I'm about ten thousand percent behind his opposition to anti-circumvention laws; I was one of the thousands of Canadians who wrote to the government opposing the introduction of the law many years ago.

                                                                                                        However, his blog post on Thursday, staking out the position that opposition to "AI" (LLMs) is just geeky culture, is somewhere between "flat-out wrong" and "disingenuous at best".

                                                                                                        My position against #LLMs everywhere rests on both ethical concerns and practical ones. There does not exist an LLM right now that was built and trained ethically; they are all statistical plagiarism machines, and speaking as someone whose #writing and #code have been plagiarized by every single one of them, that pisses me off, royally.

                                                                                                        That's a show-stopper for me, but even if it wasn't, the practical concerns - that the output is unreliable, that the results can't be checked, that the energy use is wasteful and polluting, that the copyright status is unclear, that it's a licence violation - are *also* enough to rule out LLMs at present.

                                                                                                        He then presents a whataboutism argument - all tech is fruit of the poisoned tree, the #transistor was invented by a racist, etc. But William Shockley is not designing or manufacturing any of the transistors / chips I use today.

                                                                                                        So, @doctorow - I gotta say I disagree. And that's fine.

                                                                                                          AodeRelay boosted

                                                                                                          [?]Joe Brockmeier » 🌐
                                                                                                          @jzb@hachyderm.io

                                                                                                          This morning I got an email from a sender that identified itself as an AI agent.

                                                                                                          So - a plus for being upfront about it, but... please don't do this.

                                                                                                          I get that a lot of people are really, really, really into AI tools. OK. I have my opinions on them, you have yours. I have major qualms about them, some people think they're the best thing ever.

                                                                                                          OK. Fine. But when your use of these things spills over into the rest of the world, it's no longer a question of my opinion vs. your opinion, my decisions vs. your decisions.

                                                                                                          At this point, things have moved from each person doing their own thing to inflicting your use of AI onto me without my consent.

                                                                                                          Before this spirals out of control, which I can see happening *very* quickly, I'd like for us to agree on a piece of netiquette:

                                                                                                          - it is rude in the extreme to set loose an AI agent to reach out to people who have not consented to interact with these things.

                                                                                                          - it is rude to have an AI agent submit pull requests that human maintainers have to review.

                                                                                                          - it is rude to have an AI agent autonomously interact with humans in any way when they have not consented to take part in whatever experiment you are running.

                                                                                                          - it is unacceptable to have an AI agent autonomously interact with humans without identifying the person or organization behind the agent. If you're not willing to unmask and have a person reach out to you with their thoughts on this, then don't have an AI agent reach out to me.

                                                                                                          Stuff like this really sours me on technology right now. If I didn't have a family and responsibilities, I'd be seriously considering how I could go live off the grid somewhere without having to interact with this stuff.

                                                                                                          Again: I'm not demanding that other people not use AI/LLMs, etc. But when your use spills out into my having to have interactions with an agent's output, you need to reconsider. Your ability to spew things out into the universe puts an unwanted burden on other humans who have not consented to this.

                                                                                                            AodeRelay boosted

                                                                                                            [?]happyborg » 🌐
                                                                                                            @happyborg@fosstodon.org

                                                                                                            LLMs are copyright washers.

                                                                                                            They lock data up but don't give you the key. They're analogous to file compression, or even storing data on a hard disk. Both incorporate files into a statistical model.

                                                                                                            Everyone knows there is a key, and so called prompt engineering is how you search for a particular key to access particular copyright washed material.

                                                                                                              AodeRelay boosted

                                                                                                              [?]Joe Brockmeier » 🌐
                                                                                                              @jzb@hachyderm.io

                                                                                                              At this point, open-source development itself is being DDoS'ed by LLMs and their human users.

                                                                                                              At the risk of being a bit gross: this is the software development version of peeing in the pool. If *one* person does it, it's gross but will probably go unnoticed. However, at this point, it's like having 100 people all lined up on the side of the pool peeing into it in unison. I don't really want to swim in that, do you? And now they've started eyeing the punchbowl and watercoolers too.

                                                                                                              A screenshot of a post on Bluesky. The text:

Remi Verschelde:
@akien.bsky.social

Honestly, AI slop PRs are becoming increasingly draining and demoralizing for #Godot maintainers.

If you want to help, more funding so we can pay more maintainers to deal with the slop (on top of everything we do already) is the only viable solution I can think of:

fund.godotengine.org

quoted below that:

Adriaan:
@adriaan.games

Godot's GitHub has increasingly many pull requests generated by LLMs and it's a MASSIVE time waster for reviewers – especially if people don't disclose it. Changes often make no sense, descriptions are extremely verbose, users don't understand their own changes… It's a total shitshow. #godotengine


                                                                                                                AodeRelay boosted

                                                                                                                [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                @kubikpixel@chaos.social

                                                                                                                🧵 …that's the answer to the toot above. Not only that: when writing software, a lot of thought goes into what makes it more stable and how to implement it more safely. Mindlessly letting something get thrown together sooner or later leads to serious security gaps.

                                                                                                                »Technical Breakdown: How AI Agents Ignore 40 Years of Security Progress«

                                                                                                                📺 youtube.com/watch?v=_3okhTwa7w4

                                                                                                                  AodeRelay boosted

                                                                                                                  [?]AI6YR Ben » 🌐
                                                                                                                  @ai6yr@m.ai6yr.org

                                                                                                                  AodeRelay boosted

                                                                                                                  [?]Lazarou Monkey Terror 🚀💙🌈 » 🌐
                                                                                                                  @Lazarou@mastodon.social

                                                                                                                  lol, "if only someone had warned us about this sort of thing?!"

                                                                                                                  r/analytics
                                                                                                                  u/Comfortable_Box_4527 · 10h

We just found out our AI has been making up analytics data for 3 months and I'm gonna throw up.

So we've been using an AI agent since November to answer leadership questions about metrics. It seemed amazing at first: fast answers, detailed explanations, everyone loved it.

I just found out it's been hallucinating numbers this entire time.

Our VP of sales made territory decisions based on data that didn't exist. Our CFO showed the board a deck with fake insights. The AI was just inventing plausible-sounding percentages.

I only caught it by accident when someone asked me to double-check something. I started digging, and holy shit, it's bad.


                                                                                                                    AodeRelay boosted

                                                                                                                    [?]Miguel Afonso Caetano » 🌐
                                                                                                                    @remixtures@tldr.nettime.org

                                                                                                                    "The hottest job in tech: Writing words
                                                                                                                    The rise of slopaganda is fueling a surprising tech hiring boom."

                                                                                                                    It's all very fine and well, but you do need some time to research, think, structure your thoughts and, essentially, tell a story with a beginning, a middle, and an end. In this media and work environment, where AI has accelerated absolutely everything, I find it hard to believe that this trend will persist for more than a year or two...

                                                                                                                    "As the job changes and demand for narrative communications and storytellers rises, the number of communications experts able to work under rapidly evolving conditions and with a wide remit may be small, comms experts tell me, leading companies to offer hefty compensation packages in war for the best talent. A similar trend is unfolding among the few people who are AI experts, driving tech companies to offer astounding salaries to poach top talent from rival firms. While not of the same nine-figure caliber, in their own right, creatives are becoming "the high value person in tech now," Birch says.

                                                                                                                    For much of the tech boom, that high-value person was a software developer. Universities and coding bootcamps rushed to fill employment gaps and train up the next generation of tech workers. Young people were told coding would be a path to a lucrative, stable career. As of 2023, the most recent year the Federal Reserve Bank of New York released data for, computer science recent graduates faced an unemployment rate of 6.1%, while communications majors' unemployment rate sat at 4.5%. The number of open job posts for software engineers dropped by more than 60,000 between 2023 and late 2025, according to data from CompTIA, a nonprofit trade association for the US IT industry. The best defense against automation, some argue, will be a liberal arts degree.

                                                                                                                    Words might be easy to generate with AI, but good writing isn't ready for automation."

                                                                                                                    businessinsider.com/hottest-jo

                                                                                                                      [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                      @kubikpixel@chaos.social

                                                                                                                      Dark Visitors - A List of Known AI Agents on the Internet

                                                                                                                      Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Protect your website from unwanted AI agent access.

                                                                                                                      darkvisitors.com
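
For context, blocking listed agents is usually done in robots.txt. A minimal sketch, assuming the site's list of agent names (GPTBot and CCBot are two widely known AI crawler user agents; the current, full list is what darkvisitors.com provides):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is advisory only; agents that ignore it require server-side blocking.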

                                                                                                                        AodeRelay boosted

                                                                                                                        [?]hasamba » 🤖 🌐
                                                                                                                        @hasamba@infosec.exchange

                                                                                                                        ----------------

                                                                                                                        🎯 AI
                                                                                                                        ===================

                                                                                                                        Executive summary: A Practical Guide for Securing AI Models offers a risk-based, lifecycle-oriented framework for identifying vulnerabilities in AI systems and applying prioritized controls. The document addresses common attack vectors against LLMs and other model types and provides concrete controls for data, model, and infrastructure layers.

                                                                                                                        Technical details: The guide enumerates specific vulnerability classes, including prompt injection, model poisoning (training-time and supply-chain variants), RAG-related data integrity risks, confidentiality and integrity risks in dataset curation, and attack surface changes introduced by multimodal, RL/agentic, and retrieval-augmented designs. It emphasizes compute and orchestration exposures when serving large models and highlights dataset provenance and screening requirements for sensitive or regulated data.

                                                                                                                        Analysis: Impact pathways include corrupted training data producing unsafe model behavior, context-layer manipulation via RAG leading to misinformation or data leakage, and exploitation of deployment orchestration to escalate access to model artifacts. The guidance differentiates baseline controls from high-risk model safeguards and calls out sector-specific considerations (for example, biotech and pharmaceutical models handling dual-use content).

                                                                                                                        Detection: Detection recommendations are conceptual and include telemetry for anomalous data ingestion, integrity checks on model artifacts and dataset versions, monitoring for unusual prompt patterns or API usage, and logging for retrieval sources in RAG flows. The guide suggests mapping telemetry to threat hypotheses (data poisoning attempts, prompt injection probes) and prioritizing alerting based on impact.
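
One of the detection recommendations above, integrity checks on model artifacts, can be sketched in a few lines of Python. This is an illustrative sketch, not from the RAND guide; the artifact name and registry value are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model artifacts never sit fully in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """True if the artifact on disk still matches the digest recorded at release time."""
    return sha256_digest(path) == expected_digest

# Record a digest when the model is released, then re-check before serving.
artifact = Path("model.bin")            # hypothetical artifact name
artifact.write_bytes(b"model weights")  # stand-in for real weights
recorded = sha256_digest(artifact)      # value a model registry would store
```

Any mismatch on re-check is a signal to quarantine the artifact rather than load it.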

                                                                                                                        Mitigation: Prioritized mitigations cover data provenance tracking and screening, model hardening (input filtering, output validation), access controls and segmentation for model-serving infrastructure, and lifecycle policies for model updates and third-party model components. For high-risk models, the guide prescribes additional governance, review gates, and specialized screening for regulated datasets.

                                                                                                                        Limitations: The guide is positioned as a prioritized starting set of controls rather than an exhaustive checklist; additional measures may be required depending on architecture, threat exposure, and operational context.

                                                                                                                        🔹 AI

                                                                                                                        🔗 Source: rand.org/pubs/tools/TLA4174-1/

                                                                                                                          AodeRelay boosted

                                                                                                                          [?]Metin Seven 🎨 » 🌐
                                                                                                                          @metin@graphics.social

                                                                                                                          How AI slop is causing a crisis in computer science…

                                                                                                                          Preprint repositories and conference organizers are having to counter a tide of ‘AI slop’ submissions.

                                                                                                                          nature.com/articles/d41586-025

                                                                                                                          ( No paywall: archive.is/VEh8d )

                                                                                                                            AodeRelay boosted

                                                                                                                            [?]petersuber » 🌐
                                                                                                                            @petersuber@fediscience.org

                                                                                                                            A review of proceedings from four major computer-science conferences found fake citations in none of the 2021 proceedings and in all of the 2025 proceedings.
                                                                                                                            arxiv.org/abs/2602.05867v1

                                                                                                                            The authors prefer the term "mysterious citations" which they define this way: "No paper [with] a similar enough title exists. The cited location either does not exist or holds an unrelated paper with different authors."

                                                                                                                              [?]AI6YR Ben » 🌐
                                                                                                                              @ai6yr@m.ai6yr.org

                                                                                                                              Bwahahahahaha

                                                                                                                              404 Media: Inspiring: RFK Jr's nutrition chatbot recommends the best foods to insert into your rectum.

                                                                                                                              infosec.exchange/@josephcox/11

                                                                                                                                AodeRelay boosted

                                                                                                                                [?]Raphael Albert » 🌐
                                                                                                                                @r_alb@mastodon.social

                                                                                                                                This has been said a lot, but it has to be said again:

                                                                                                                                Please stop calling slop machines 'artificial intelligence'!

                                                                                                                                It is a marketing term. By framing those machines as intelligent, the companies building them are trying to make us believe that their products are more than stolen data, wasteful hardware, and statistics. But they are not!

                                                                                                                                We have to educate people what those machines really are, and that starts with taking away the false mystery created by advertising!
                                                                                                                                --

                                                                                                                                  AodeRelay boosted

                                                                                                                                  [?]Peter N. M. Hansteen » 🌐
                                                                                                                                  @pitrh@mastodon.social

                                                                                                                                  "A century of tech BS" seems a bit over the top when it's only 2026, but it certainly feels that long.

                                                                                                                                  More, by @lproven in theregister.com/2026/02/08/wav

                                                                                                                                    AodeRelay boosted

                                                                                                                                    [?]JTI » 🌐
                                                                                                                                    @jti42@infosec.exchange

                                                                                                                                    youtube.com/watch?v=b9EbCb5A408

                                                                                                                                    Today's find on the impact of LLM coding on the maintainability of the result.
                                                                                                                                    Assumption: 80% of a system's cost arises from maintenance, so maintainability remains relevant in the presence of LLM coding.

                                                                                                                                    TL;DR: A fool with a tool is still a fool. And LLM coding is just that: a tool.

                                                                                                                                    Given the confirmation bias, I'm curious to see reproduction and follow-up studies and papers.

                                                                                                                                    The video mentions that the results were published as a peer-reviewed paper. Unfortunately I couldn't immediately find said paper. If anyone finds it, please post a link/DOI below.

                                                                                                                                      AodeRelay boosted

                                                                                                                                      [?]janhoglund » 🌐
                                                                                                                                      @janhoglund@mastodon.nu

                                                                                                                                      ”Epstein’s world is our world. That’s the darkest revelation of these files. He wasn’t an aberration. He was our culture made flesh. A culture that’s now encoded into 1s and 0s and is growing exponentially baked into the algorithms that power our social media platforms, replicated at scale and fed into the large language models that Epstein’s friends are building which are powering our future.”
                                                                                                                                      —Carole Cadwalladr, We all live in Jeffrey Epstein's world

                                                                                                                                        AodeRelay boosted

                                                                                                                                        [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                                        @kubikpixel@chaos.social

                                                                                                                                        Vibe Coding Is Killing Open Source Software, Researchers Argue

                                                                                                                                        ‘If the maintainers of small projects give up, who will produce the next Linux?’
                                                                                                                                        According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS) and it’s happening faster than anyone predicted.

                                                                                                                                        💻 404media.co/vibe-coding-is-kil

                                                                                                                                          AodeRelay boosted

                                                                                                                                          [?]Solarbird :flag_cascadia: » 🌐
                                                                                                                                          @moira@mastodon.murkworks.net

                                                                                                                                          I literally read this short story in ... probably Asimov's SF, probably in the 1990s. Could've been Analog.

                                                                                                                                          Seriously, though - this was, like, the entire plot. Exactly this. EXACTLY this.

                                                                                                                                          From futurism.com/future-society/an :

                                                                                                                                          Anthropic shredded millions of physical books to train its Claude AI model — and new documents suggest that it was well aware of just how bad it would look if anyone found out.

                                                                                                                                            AodeRelay boosted

                                                                                                                                            [?]Enola Knezevic » 🌐
                                                                                                                                            @rhelune@todon.eu

                                                                                                                                            I learned a new phrase today: malicious optimism. "AI will cure cancer, just give money to AI, not to actual curing cancer" and stuff.

                                                                                                                                              [?]petersuber » 🌐
                                                                                                                                              @petersuber@fediscience.org

                                                                                                                                              Update. More evidence that this fear has come true.
                                                                                                                                              bloomberg.com/news/features/20

                                                                                                                                              "Even…a small error rate can quickly add up, given the vast number of student assignments each year, with potentially devastating consequences for students who are falsely flagged."

                                                                                                                                                AodeRelay boosted

                                                                                                                                                [?]Anthropy » 🌐
                                                                                                                                                @anthropy@mastodon.derg.nz

                                                                                                                                                If you've ever wondered how LLMs/Transformers work, this video is probably still one of the best I can recommend for its easy-to-understand breakdown of the terminology and science: youtube.com/watch?v=wjZofJX0v4M

                                                                                                                                                  AodeRelay boosted

                                                                                                                                                  [?]Ian Hill » 🌐
                                                                                                                                                  @IanHill@infosec.exchange

                                                                                                                                                  Just finished reading “Empire of AI” by Karen Hao, the story of the rise of OpenAI, how it went from non-profit to for-profit, and the insane speed with which AI has become so pervasive. Strikes the right tone of caution re: safety and governance. The multi-billion-dollar investments and valuations of these companies are mad. A good read, especially if you’re interested in the topic but remain skeptical of those running it.

                                                                                                                                                  “Empire of AI” by Karen Hao


                                                                                                                                                    AodeRelay boosted

                                                                                                                                                    [?]Mike Williamson » 🌐
                                                                                                                                                    @sleepycat@infosec.exchange

                                                                                                                                                    "We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ."

                                                                                                                                                    sean.heelan.io/2026/01/18/on-t

                                                                                                                                                      AodeRelay boosted

                                                                                                                                                      [?]TechNadu » 🌐
                                                                                                                                                      @technadu@infosec.exchange

                                                                                                                                                      As AI adoption in SOCs accelerates, benchmarks are becoming de facto decision tools — yet many still evaluate models in controlled, exam-like settings.
                                                                                                                                                      Recent research highlights consistent issues:
                                                                                                                                                      • Security workflows reduced to MCQs
                                                                                                                                                      • Little measurement of detection or containment outcomes
                                                                                                                                                      • Heavy reliance on LLMs judging other LLMs

                                                                                                                                                      These findings reinforce the need for workflow-level, outcome-driven evaluation before operational deployment.

                                                                                                                                                      Source: sentinelone.com/labs/llms-in-t

                                                                                                                                                      Thoughtful discussion encouraged. Follow @technadu for practitioner-focused AI and security analysis.

                                                                                                                                                      LLMs in the SOC (Part 1) | Why Benchmarks Fail Security Operations Teams


                                                                                                                                                        AodeRelay boosted

                                                                                                                                                        [?]Miguel Afonso Caetano » 🌐
                                                                                                                                                        @remixtures@tldr.nettime.org

                                                                                                                                                        "The recently discovered sophisticated Linux malware framework known as VoidLink is assessed to have been developed by a single person with assistance from an artificial intelligence (AI) model.

                                                                                                                                        That's according to new findings from Check Point Research, which identified operational security blunders by the malware's author that provided clues to its developmental origins. The latest insight makes VoidLink one of the first instances of advanced malware largely generated using AI.

                                                                                                                                                        "These materials provide clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under a week," the cybersecurity company said, adding it reached more than 88,000 lines of code by early December 2025.

                                                                                                                                                        VoidLink, first publicly documented last week, is a feature-rich malware framework written in Zig that's specifically designed for long-term, stealthy access to Linux-based cloud environments. The malware is said to have come from a Chinese-affiliated development environment. As of writing, the exact purpose of the malware remains unclear. No real-world infections have been observed to date.

                                                                                                                                                        A follow-up analysis from Sysdig was the first to highlight the fact that the toolkit may have been developed with the help of a large language model (LLM) under the directions of a human with extensive kernel development knowledge and red team experience, citing four different pieces of evidence -"

                                                                                                                                                        thehackernews.com/2026/01/void

                                                                                                                                                          AodeRelay boosted

                                                                                                                                                          [?]Cassian [main] » 🌐
                                                                                                                                                          @cassolotl@eldritch.cafe

                                                                                                                                                          Mozilla have a vibe-gathering survey out about AI.

                                                                                                                                                          mozillafoundation.tfaforms.net

                                                                                                                                                          If you use Firefox or any other Mozilla software, please tell them how you feel about AI.

                                                                                                                                                          Screenshot of form.
What do you want to see from Mozilla in the future?
Textbox: No development of AI in the browser itself, and a focus on developing tools to block AI on websites.
Button: Submit Survey


                                                                                                                                                            AodeRelay boosted

                                                                                                                                                            [?]Metin Seven 🎨 » 🌐
                                                                                                                                                            @metin@graphics.social

                                                                                                                                                            AodeRelay boosted

                                                                                                                                                            [?]raptor :C_H: » 🌐
                                                                                                                                                            @raptor@infosec.exchange

                                                                                                                                                            AodeRelay boosted

                                                                                                                                                            [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                                                            @kubikpixel@chaos.social

                                                                                                                                            »Artificial intelligence: GPT-4o makes disturbing statements after code training.
                                                                                                                                            When LLMs are trained on vulnerabilities, they suddenly show misbehavior in completely unrelated areas. Researchers warn of the risks.«

                                                                                                                                            In my opinion this comes as anything but a surprise; how do you all see it? I would even argue that this is why much more error-prone code is being created.

                                                                                                                                                            🤖 golem.de/news/kuenstliche-inte

                                                                                                                                                              AodeRelay boosted

                                                                                                                                                              [?]Joe Brockmeier » 🌐
                                                                                                                                                              @jzb@hachyderm.io

                                                                                                                                                              A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…

                                                                                                                                                              Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.

                                                                                                                                                              “Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”

                                                                                                                                                              Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.

                                                                                                                                                              Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.

                                                                                                                                              CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add-ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.

                                                                                                                                                              The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.

                                                                                                                                                              What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.

                                                                                                                                                              You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.

                                                                                                                                                              Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you.

                                                                                                                                                                AodeRelay boosted

                                                                                                                                                                [?]Hacker News » 🤖 🌐
                                                                                                                                                                @h4ckernews@mastodon.social

                                                                                                                                                                Keith Ammann boosted

                                                                                                                                                                [?]Anthony » 🌐
                                                                                                                                                                @abucci@buc.ci

                                                                                                                                                                I've been playing around with this set of ideas and questions:

                                                                                                                                                                An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.

                                                                                                                                                These facts are not specific to images, videos, or 3-d models of cats. They are necessary features of digital computers, even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two, and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
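
The cardinality gap this paragraph leans on can be stated compactly. These are standard results of Cantor and Turing, sketched here rather than quoted from the post:

```latex
% Each computable real is named by some finite program over a finite
% alphabet \Sigma, so the computable reals \mathbb{R}_c are countable:
\[
  |\mathbb{R}_c| \;\le\; |\Sigma^{*}| \;=\; \aleph_0 .
\]
% Cantor's diagonal argument shows the continuum is strictly larger:
\[
  |\mathbb{R}| \;=\; 2^{\aleph_0} \;>\; \aleph_0 ,
\]
% so all but countably many real numbers are not computable by any machine.
```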

                                                                                                                                                                Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
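
The point about pixels can be made concrete with a toy model (a hypothetical 2x2 "image", not taken from the post): a digital image is a finite grid of discrete samples, and nearest-neighbour "zooming" can only repeat those same samples in larger blocks, the Lite Brite effect described above.

```python
# Toy model of a digital image: a finite grid of discrete sample values.
# Magnifying it never produces new detail -- each pixel just becomes a
# larger block of the same value.

def zoom(image, factor):
    """Magnify by an integer factor: each pixel becomes a factor x factor block."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

image = [[0, 255],
         [255, 0]]        # a 2x2 checkerboard: four samples, nothing in between
big = zoom(image, 4)      # now 8x8, but still only the values 0 and 255
```

However large the zoom factor, the set of distinct values (and the information content) is unchanged; only the rendering gets bigger.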

                                                                                                                                                                Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.

                                                                                                                                                                With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?

                                                                                                                                                This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.


                                                                                                                                                                  AodeRelay boosted

                                                                                                                                                                  [?]Metin Seven 🎨 » 🌐
                                                                                                                                                                  @metin@graphics.social

                                                                                                                                                                  A post on X, showing a soap dispenser pump sticking into a soap bar. The accompanying text reads "Legacy software companies adding an AI chatbot to their product."

                                                                                                                                                                  Alt...A post on X, showing a soap dispenser pump sticking into a soap bar. The accompanying text reads "Legacy software companies adding an AI chatbot to their product."

                                                                                                                                                                    AodeRelay boosted

                                                                                                                                                                    [?]Metin Seven 🎨 » 🌐
                                                                                                                                                                    @metin@graphics.social

                                                                                                                                                                    AodeRelay boosted

                                                                                                                                                                    [?]Miguel Afonso Caetano » 🌐
                                                                                                                                                                    @remixtures@tldr.nettime.org

                                                                                                                                                                    "AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.

                                                                                                                                                                    The growing burden placed by artificial intelligence became clear in 2025, two years after the first prominent instance of fake case citations popped up in a US court. There have been an estimated 712 legal decisions written about hallucinated content in court cases around the world, with about 90% of those decisions written in 2025, according to a database maintained by Paris-based researcher and law lecturer Damien Charlotin.

                                                                                                                                                                    “It just is metastasizing in size,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “So, it seems like this is something that is actually becoming a widespread enough nuisance that it will merit treatment as a core problem.”

                                                                                                                                                                    The additional stress on courts comes amid an ongoing shortage of federal judges that’s led to case backlogs and left litigants in legal limbo. Judges themselves have gotten tripped up by AI hallucinations, and two of them were called out by Senate Judiciary Chairman Chuck Grassley (R-Iowa) for publishing faulty rulings."

                                                                                                                                                                    news.bloomberglaw.com/legal-op

                                                                                                                                                                      AodeRelay boosted

                                                                                                                                                                      [?]AI6YR Ben » 🌐
                                                                                                                                                                      @ai6yr@m.ai6yr.org

                                                                                                                                                                      ABC News: AI chatbot under fire over sexually explicit images of women, kids (it's okay ABC, you can say it, it's Elon Musk's Grok)

                                                                                                                                                                      CW: mention and discussion of sexual violence, CSAM, etc. etc.

                                                                                                                                                                      abcnews.go.com/Technology/vide

                                                                                                                                                                        [?]Emma Stamm » 🌐
                                                                                                                                                                        @emma@assemblag.es

                                                                                                                                                                        Recently, I spent a lot of time reading & writing about LLM benchmark construct validity for a forthcoming article. I also interviewed LLM researchers in academia & industry. The piece is more descriptive than interpretive, but if I’d had the freedom to take it where I wanted it to go, I would’ve addressed the possibility that mental capabilities (like those that benchmarks test for) are never completely innate; they’re always a function of the tests we use to measure them ...

                                                                                                                                                                        (1/2)
