buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #generativeai

[?]Metin Seven 🎨 » 🌐
@metin@graphics.social

I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.

My thoughts on generative "AI"

I'm glad generative artificial "intelligence" was not a thing yet during the vast majority of my career. A number of realizations arose while exploring generative Large Language Models…

Generative AI is based on massive theft from creatives, without consent, credit or compensation. Using gen-AI is asking a chatbot to spit out the combined efforts of ripped-off creatives. It is industrializing and devaluing human expression, artistry and craftsmanship. Creatives are losing their jobs and motivation because tech corporations unscrupulously absorb and exploit their work. If you appreciate art, support the artists, not the thieves of their labor.

Tech corporations are building more and more huge data centers for AI processing, consuming lots of internet bandwidth, energy, water and more, increasing scarcity, prices and emissions, degrading the already fragile environment.

Unless you're using a fully local AI configuration, every bit of data you submit contributes to the power and reach of corporations and governments, decreasing your privacy and security.

Generative AI enables deepfakes that are widely used for abuse, deception, cybercrime, misinformation and propaganda, polluting justice, science advancement and news report credibility.

More text doesn't fit in this Alt text, but everything can be read over at https://metinseven.nl

Alt...My thoughts on generative "AI" I'm glad generative artificial "intelligence" was not a thing yet during the vast majority of my career. A number of realizations arose while exploring generative Large Language Models… Generative AI is based on massive theft from creatives, without consent, credit or compensation. Using gen-AI is asking a chatbot to spit out the combined efforts of ripped-off creatives. It is industrializing and devaluing human expression, artistry and craftsmanship. Creatives are losing their jobs and motivation because tech corporations unscrupulously absorb and exploit their work. If you appreciate art, support the artists, not the thieves of their labor. Tech corporations are building more and more huge data centers for AI processing, consuming lots of internet bandwidth, energy, water and more, increasing scarcity, prices and emissions, degrading the already fragile environment. Unless you're using a fully local AI configuration, every bit of data you submit contributes to the power and reach of corporations and governments, decreasing your privacy and security. Generative AI enables deepfakes that are widely used for abuse, deception, cybercrime, misinformation and propaganda, polluting justice, science advancement and news report credibility. More text doesn't fit in this Alt text, but everything can be read over at https://metinseven.nl

    2 ★ 2 ↺
    #tech boosted

    [?]Anthony » 🌐
    @abucci@buc.ci

    Anthropic apologists still coming out of the woodwork to run cover for them or complain, 24 hours after I posted that the Claude Code source code is horribly ill-structured.

    You don't have to pretend that Claude Code's source code is lovely just because you like using it or are impressed by whatever madness is going on around AI right now.


      23 ★ 12 ↺

      [?]Anthony » 🌐
      @abucci@buc.ci

      I posted about the Claude Code leak on LinkedIn and almost immediately someone attacked me about my criticism. They tried the "take a look at COBOL and get back to me" angle.

      Buddy. I've written COBOL. I spent several years working almost daily with a 3-million-line monstrosity of a COBOL program. I was working on another app that interfaced with it, but in that work I occasionally had to read the code and in a few cases modify it. Granted I haven't spent as much time looking at the leaked Claude Code source code (and won't lol), but nevertheless I confidently declare that Claude Code is worse. "Spaghetti code" doesn't come close to describing this thing.


      Emoji reactions:
        2 ★ 1 ↺
        AI Channel boosted

        [?]Anthony » 🌐
        @abucci@buc.ci

        It would be deeply satisfying if it turned out to be true that Claude Code's source code was accidentally leaked in a Claude-Code-generated game intended as an April Fool's prank. Stacks upon stacks of April fools stretching back in time 70 years and culminating in this. 🤌


          10 ★ 12 ↺
          teledyn 𓂀 boosted

          [?]Anthony » 🌐
          @abucci@buc.ci

          Here's one for the dystopia/AI Hell files: https://jaigp.org
          Journal for AI Generated Papers
          Where humans and machines are welcomed.
          The Open Prompting Journal Built Collaboratively by its Community.
          One positive I can think of is that folks who wish to "collaborate" with machines can congregate there, giving the rest of us a clear signal about who to block, ignore, critique, ridicule...

          cc @olivia@scholar.social @Iris@scholar.social @dingemansemark@scholar.social @alex@dair-community.social @emilymbender@dair-community.social


            5 ★ 3 ↺
            #tech boosted

            [?]Anthony » 🌐
            @abucci@buc.ci

            Microsoft Copilot putting ads in pull requests on Microsoft Github is expected behavior.


              AodeRelay boosted

              [?]David Crispin » 🌐
              @david_crispin@mstdn.ca

              What happens when I'm bored while my youngest plays at playgroup... #TimHortons #generativeAI

              My youngest daughter's pink build a bear serving coffee at Tim Hortons.

              Alt...My youngest daughter's pink build a bear serving coffee at Tim Hortons.

              My youngest pink build a bear sipping on a coffee at Tim Hortons.

              Alt...My youngest pink build a bear sipping on a coffee at Tim Hortons.

              2 ★ 5 ↺
              Cowboy Who? boosted

              [?]Anthony » 🌐
              @abucci@buc.ci

              The "correction", when it comes, is going to be ugly. The quantities of misallocated capital involved in this AI mania look to be even more staggering than previously reported.

              I worry that few smaller companies or startups will survive, and the country will be pockmocked with half-constructed data centers and obsolete equipment. This is not like the dot-com crash that left useful and unused fiber optic networking.

              https://www.wsj.com/finance/investing/private-credits-exposure-to-ailing-software-industry-is-bigger-than-advertised-d80da378

              hashtag hashtag hashtag hashtag hashtag hashtag hashtag hashtag

                AodeRelay boosted

                [?]Simon Brooke » 🌐
                @simon_brooke@mastodon.scot

                "In 2018, more than 4,000 Google employees signed a letter opposing the company’s contract to build artificial intelligence for the Pentagon’s targeting systems. Workers organised a walk out. Engineers quit. And Google ultimately abandoned the contract."

                Worker organisation has power.




                theguardian.com/news/2026/mar/

                  AodeRelay boosted

                  [?]ell1e coding things » 🌐
                  @ell1e@hachyderm.io

                  Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have have permission from the third party owners" linuxfoundation.org/legal/gene

                  "If"? Why not "whenever"? github.com/mastodon/mastodon/i dl.acm.org/doi/10.1145/3543507 sciencedirect.com/science/arti theatlantic.com/technology/202

                  And how would the contributor even be aware, should they research every snippet for hours?

                  Seems like an impossible policy, or am I missing something...?

                    AodeRelay boosted

                    [?]Michael Westergaard » 🌐
                    @michael@westergaard.social

                    If #generativeAI is so great, why are companies selling access to it? If it can solve all the problems, why do #AI companies not use it to win on the stock exchanges or to write all the code or develop the blockbuster medicines?

                      3 ★ 0 ↺

                      [?]Anthony » 🌐
                      @abucci@buc.ci

                      I think it is meaningful that Marvin Minsky, sometimes called the "father of AI", seemed to hold human beings in low regard.

                      Here's John Searle in 1983:

                      Marvin Minsky of MIT says that the next generation of computers will be so intelligent that we will ‘be lucky if they are willing to keep us around the house as household pets.'
                      Here's Joseph Weizenbaum in 2007:
                      Professor Marvin Minsky of MIT, once pronounced—a belief he still holds—that ‘‘the brain is merely a meat machine.’’
                      He goes on to note that meat is dead and might be eaten or thrown out. Flesh is what's alive. He also draws attention to the word "merely", as in "nothing more than".

                      I share with Weizenbaum the belief that Minsky has clearly expressed a disdain for human intelligence. We're on the order of household pets. Our brains are no more than food or trash. Obviously Minsky doesn't speak for all AI researchers then or since, but his "meat machine" language is all over the place, and this disdain or even contempt for human intelligence and achievement is also common.

                      It definitely doesn't speak to a curiosity about intelligence, which I think requires at least a little bit of love and esteem.


                        0 ★ 1 ↺
                        AI Channel boosted

                        [?]Anthony » 🌐
                        @abucci@buc.ci

                        Re-reading The Soul Gained and Lost: Artificial Intelligence as a Philosophical Project by Phil Agre as catharsis. Here.


                          AodeRelay boosted

                          [?]Bob Carver » 🌐
                          @cybersecboardrm@infosec.exchange

                          AodeRelay boosted

                          [?]Metin Seven 🎨 » 🌐
                          @metin@graphics.social

                          29 ★ 26 ↺
                          Kilian Evang boosted

                          [?]Anthony » 🌐
                          @abucci@buc.ci

                          A good review of reasons insurance companies are pulling back from insuring companies that lean on generative AI. Point 4, "The main problem is not just the error, but the incentive not to see it" is especially damning: use of AI not only obscures audit trails, it sets up perverse incentives against accountability, pushing costs and risk to other parts of an organization, its customers, or society. The net result is that whatever "local" advantages AI may provide turn into downstream risk that cannot be easily accounted for. Insurance companies are (rightly) allergic to this state of affairs.

                          Another example of how (whole)-systems thinking is very helpful for parsing the effects of technology changes like this.

                          https://freakonometrics.hypotheses.org/89367


                            AodeRelay boosted

                            [?]Metin Seven 🎨 » 🌐
                            @metin@graphics.social

                            2 ★ 0 ↺

                            [?]Anthony » 🌐
                            @abucci@buc.ci

                            Workers who love ‘synergizing paradigms’ might be bad at their jobs
                            Employees who are impressed by vague corporate-speak like “synergistic leadership,” or “growth-hacking paradigms” may struggle with practical decision-making, a new Cornell study reveals.
                            From https://news.cornell.edu/stories/2026/03/workers-who-love-synergizing-paradigms-might-be-bad-their-jobs

                            I tried reading this article replacing variations of "corporate" with "LLM" and it works. Right down to the "LLM Bullshit Receptivity Scale (LBSR)".


                              [?]Metin Seven 🎨 » 🌐
                              @metin@graphics.social

                              NVIDIA DLSS 5 be like…

                              Two similar Mario game character heads placed next to each other. The left one is an actual 3D game head, the right one is a creepy realistic interpretation of the left head.

                              Alt...Two similar Mario game character heads placed next to each other. The left one is an actual 3D game head, the right one is a creepy realistic interpretation of the left head.

                                AodeRelay boosted

                                [?]Metin Seven 🎨 » 🌐
                                @metin@graphics.social

                                😆

                                Comparison between 3D game characters with and without DLSS 5 AI processing. The version with DLSS processing has turned a grey-haired man into a long-haired woman.

                                Alt...Comparison between 3D game characters with and without DLSS 5 AI processing. The version with DLSS processing has turned a grey-haired man into a long-haired woman.

                                  1 ★ 1 ↺
                                  AI Channel boosted

                                  [?]Anthony » 🌐
                                  @abucci@buc.ci

                                  No matter how esoteric AI literature has become, and no matter how thoroughly the intellectual origins of AI's technical methods have been forgotten, the technical work of AI has nonetheless been engaged in an effort to domesticate the Cartesian soul into a technical order in which it does not belong. The problem is not that the individual operations of Cartesian reason cannot be mechanized (they can be) but that the role assigned to the soul in the larger architecture of cognition is untenable. This incompatibility has shown itself up in a pervasive and ever more clear pattern of technical frustrations. The difficulty can be shoved into one area or another through programmers' choices about architectures and representation schemes, but it cannot be made to go away.
                                  From Phil Agre's 1995 article The Soul Gained And Lost.

                                  If one were to continue the genealogy in this article from 1995 to present, one would find many of the same issues inherent in Cartesian dualism present in large language models. Like the STRIPS system Agre surveys, LLMs also generate sequences. They also must make choices among many available options at each step of sequence generation. They also use heuristics to guide this process that would otherwise explode intractably. The heuristics, or what Agre dubs "determining tendency", are random number generators and "guardrails" in LLMs instead of the tree-structured search of previous-generation AI systems. But otherwise the systems are structured similarly.

                                  It's fascinating, but not coincidental, that the determining tendency of AI systems like these is so often perceived to have mystical or even God-like qualities. Breathless predictions about the endless potential of tree-structured search in early writing on GOFAI resembles modern proclamations of imminent AGI or superintelligence among generative AI boosters because both of these mechanisms---tree search or random number generation---are situated where the Cartesian soul would be. These mysterious determining tendencies, homunculuses of last resort, or souls are timeless, acausal factors that choose a single path from an infinite space of possibilities, and thereby direct the encompassing agent's behavior in an intelligent manner.

                                  This is one reason why I posted the other day that if you removed the random number generation from LLMs, the illusion of their intelligence would more than likely quickly evaporate. You'd be excising their soul, leaving behind a zombie!


                                    AodeRelay boosted

                                    [?]C. » 🌐
                                    @cazabon@mindly.social

                                    I would like to thank the nascent "AI" industry for their significant contributions to all manner of artistic and creative endeavours in today's society: writing, coding, art, music, and everything else. [1]

                                    Because they have single-handedly created entire new markets for all of these things - new categories such as "writing with guaranteed no AI", "coding with guaranteed no AI", "art with guaranteed no AI", "music with guaranteed no AI", etc. Without them, these whole classes of creative output would simply not exist.

                                    [1] They are also innovating in the world of financial and investor fraud, but I'm not considering those areas in this post.

                                      AodeRelay boosted

                                      [?]Metin Seven 🎨 » 🌐
                                      @metin@graphics.social

                                      AodeRelay boosted

                                      [?]Miguel Afonso Caetano » 🌐
                                      @remixtures@tldr.nettime.org

                                      "Even worse was the suggestion by Grammarly’s A.I. version of me to replace the first sentence of the news article with an anecdotal opening describing a fictional person named Laura whose privacy had been violated.

                                      “Laura, a patient searching for relief from a chronic condition, clicks through her hospital’s website to schedule an appointment. In just a few moments, her most private medical details — her reason for visiting, her doctor’s name and even the treatment she seeks — are quietly sent to Facebook, without her knowledge,” the bot suggested with a button allowing the user to paste that excerpt straight into the article.

                                      Replacing a factual sentence with an imagined story about a person who doesn’t exist is not only bad editing. It’s a deception that could end my career as a journalist (or the career of any journalist who took that terrible advice).

                                      And this is the problem with A.I. It doesn’t know truth from fiction. It doesn’t know an investigative news article from an offhand comment. It flattens all content into word associations.

                                      What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.

                                      And it must be stopped."

                                      nytimes.com/2026/03/13/opinion

                                        [?]heise online » 🌐
                                        @heiseonline@social.heise.de

                                        heise+ | Präsentationen mit KI-Videos: Komplexe Inhalte effizient vermitteln

                                        KI-Videos lockern Präsentationen auf und sichern Ihnen die Aufmerksamkeit des Publikums. Wir beschreiben, wie Sie mit KI effizient Erklärfilme erstellen.

                                        heise.de/ratgeber/Praesentatio

                                        4 ★ 3 ↺

                                        [?]Anthony » 🌐
                                        @abucci@buc.ci

                                        AI is closely aligned with power, and I've found a lot of people in tech have a problematic relationship with the notion of power. Tech has been an ascendant sector for awhile and is currently fusing with state power, which I think would naturally lead to people in this sector having internal conflict over what exactly they do. Also I think people with controlling personalities are drawn to the seemingly "clear", black-and-white nature of tech, where "truth" can be discerned with less ambiguity than in other areas of life. I don't get the sense from interactions I've had with people in tech that there is much introspection or circumspection about their relationship with various forms of power. Not to the extant that folks routinely grapple deeply with the power dynamics at play within this sector and its relation to the rest of society, anyway. Obviously I'm painting with broad strokes here, and I don't mean to downplay the acts of those who do grapple meaningfully with power. I'm just talking about a tendency I've observed over my own career.

                                        I think some of the strange shifts we're seeing in high-profile folks in tech who already had authoritarian impulses---which, let's be real, is uncomfortably common among tech workers---is that they are groping for ways to embrace taking power that do not run afoul of other values they've endorsed. This really can't be done unless the person was already pretty antisocial, so we see weird behavior such as running self-serving "surveys" about AI with foregone conclusions, microaggressions and dissembling, attacks and other forms of hostility, being "one shotted" or conflating a computer program with humanness, etc. It's really a general problem, in that view, given how the US regime has shifted away from social democracy/liberalism into a much more brash, violent, and authoritarian stance. There are a variety of ways to cope with such a shift, one being to embrace it while bursting into a cloud of internal contradictions.


                                          4 ★ 0 ↺

                                          [?]Anthony » 🌐
                                          @abucci@buc.ci

                                          “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” -- Sam Altman

                                          From the BlackRock Infrastructure Summit

                                          Putting aside that this is asinine, as is typical of Sam Altman, who wants this future?

                                          To make the capitalism work here would require creating an artificial scarcity of intelligence. That immediately implies that education and publishing are both targets of this industry. Public education and public libraries would be likely casualties.

                                          This also fits the general "enclosure of the commons" narrative that capitalist entities seem to follow. General intelligence is a commons the wealthy wish to enclose, gatekeep, and rent back to us in a degraded state.


                                            [?]Rachel Levieva » 🌐
                                            @levieva@infosec.exchange

                                            ​In about 30 years, it’ll be released—featuring an Indiana Jones fully reconstructed from 40 terabytes of archival footage.

                                            ​The plot is peak recursion: an AI generates code for Harrison Ford's digital twin to battle a rogue algorithm in virtual pixelated jungles. Pure digital cyberpunk wrapped in a "cassette futurism" and Cold War aesthetic:
                                            ​"Soviet scientists in a secret bunker under Magadan accidentally awakened an ancient Sumerian algorithm trapped within copper circuits. This proto-AI doesn't just want to conquer the world, it wants to rewrite history by purging everything chaotic and illogical... namely, the USSR."

                                            ​In the end, Indiana defeats the virus by simply pulling the plug. A digital silence falls over the world, while the 2056 audience pays for their tickets in crypto-yuan, fully aware that the film itself was created by the very AI the hero was fighting on screen.

                                              AodeRelay boosted

                                              [?]Brian Greenberg :verified: » 🌐
                                              @brian_greenberg@infosec.exchange

                                              The honeymoon phase of AI-driven productivity is meeting the harsh reality of system stability. Amazon has officially updated its internal policies to require senior engineers to sign-off for any code changes assisted by generative AI. This move follows a series of significant service disruptions—referred to internally as "high blast radius" incidents—where AI-generated code led to major product outages.

                                              For a company that values speed and a "you build it, you run it" culture, this is a massive shift. It turns out that while AI can write code in seconds, the cost of an error at AWS scale can be measured in hours of downtime and millions in lost revenue. We are seeing a necessary correction: AI is a powerful assistant, but it cannot yet be trusted with the keys to the kingdom without a seasoned human expert verifying the logic.

                                              🧠 Amazon now mandates senior review for all AI-assisted code deployments.
                                              ⚡ The policy change follows a spike in high-priority Sev2 incidents.
                                              🎓 Senior engineers must now act as the ultimate "bar raisers" for synthetic code.
                                              🔍 This internal friction highlights the hidden costs of AI-driven development.

                                              arstechnica.com/ai/2026/03/aft

                                                0 ★ 0 ↺

                                                [?]Anthony » 🌐
                                                @abucci@buc.ci

                                                Which well-known class of "hallucination" generator were they fighting to hook up to weapons systems prior to this event?

                                                U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says
                                                From https://www.nytimes.com/2026/03/11/us/politics/iran-school-missile-strike.html


                                                  1 ★ 0 ↺

                                                  [?]Anthony » 🌐
                                                  @abucci@buc.ci

                                                  Hi! I’d like to nominate myself for your list. I’m not nearly as high-profile as many of the folks that you mentioned or who are being recommended, but I am a practicing computer scientist trained in AI, and I have managed to go viral on LinkedIn with my criticism of AI a few times (haha). I post pretty regularly on the fediverse about AI, usually critically though sometimes "philosophically" or politically.

                                                  Some blog posts:

                                                  Some fediverse posts:
                                                  I usually hashtag AI posts with especially to help anyone who wants to mute this stuff.

                                                  CC: @timnitGebru@dair-community.social @emilymbender@dair-community.social @alexhanna@peertube.dair-institute.org @DAIR@dair-community.social @cwebber@social.coop @jaredwhite@indieweb.social @tante@tldr.nettime.org

                                                    0 ★ 0 ↺

                                                    [?]Anthony » 🌐
                                                    @abucci@buc.ci

                                                    A potentially interesting question: how much would the appearance of sentience or intelligence that LLMs can generate for some users explode if they were forced to have deterministic output?

                                                    In principle you could add a single "freeze the random seed" toggle to any of the major chatbots, and with that setting toggled on they would always return precisely the same output for a given input. Organisms and by extension humans cannot behave like this---no matter how stereotyped an organism's response may seem, it always differs, in however small a way, from a previous response---and the LLM's illusion should immediately be obvious by contrast. But, perhaps more interestingly for the folks who do think LLMs exhibit some form of sentience or intelligence: are we really meant to believe that a random number generator is the source of sentience or intelligence? You could hook up a random number generator to a machine that is otherwise deterministic and clearly not sentient or intelligent, and it suddenly becomes so? How do you explain that?


                                                      0 ★ 0 ↺

                                                      [?]Anthony » 🌐
                                                      @abucci@buc.ci

                                                      I think we've reached a point, at least in STEM here in the US, where we should default to thinking of positive comments about AI by high-profile scientists and university professors as celebrity endorsements.


                                                        15 ★ 13 ↺

                                                        [?]Anthony » 🌐
                                                        @abucci@buc.ci

                                                        Something I've learned from Ruth Ben-Ghiat: aspiring authoritarians purposely engineer situations in which people are invited to give up their values and morals and make decisions that compromise their sense of right and wrong. Moral decay, moral injury, and subsequently moral collapse become so intolerable that afflicted people will blame anything else but their own choices or the leader they threw in with, which of course are the only proximate causes it would be helpful to implicate. The more compromising decisions they make, the more they are drawn into the authoritarian's orbit.

                                                        There is no question that it is indefensible to use generative AI systems as they are currently constituted, especially the commercial ones, once one becomes aware of how they are made and operated and the destructive consequences they have already had and will surely continue to have. Among the many reasons using these tools is indefensible is that they represent an authoritarian invitation. You're invited to trade your morals and ethics for a bit of convenience, a reduction in friction, a learning experience, a rhetorical flourish, or maybe (a kind of) status. You thereby align yourself more and more with people who say things like "water is fake" or "fuck earth" as they make the computer systems enabling the horrors we're watching unfold on social media. You start to tell yourself stories, complexifying stories that explain why it's OK you did this thing that you know is not OK. You move in the direction of people who are already telling themselves stories like this. Maybe their stories have superior analgesic qualities to yours.

                                                        Nobody needs to go down this path.


                                                          3 ★ 0 ↺

                                                          [?]Anthony » 🌐
                                                          @abucci@buc.ci

                                                          I realize this was a dodge and disposable comment on Altman's part, but like Musk's "Fuck Earth!", I think these sound bites that correlate with visible behavior meaningfully reflect the person and should be taken seriously.


                                                            0 ★ 0 ↺

                                                            [?]Anthony » 🌐
                                                            @abucci@buc.ci

                                                            "Water is totally fake."

                                                            -- Sam Altman, super genius


                                                              AodeRelay boosted

                                                              [?]Sean O'Brien » 🤖 🌐
                                                              @profdiggity@privacysafe.social

                                                              Hey @pluralistic you're one of the "editors" Grammarly is using for their "Expert Review" 🤢

                                                              I tested it and took some screenshots... just pop in anything about projects and it will suggest digital rights advocates, with you at the top.

                                                              Grammarly Expert Review Screenshot with Cory Doctorow listed

                                                              Alt...Grammarly Expert Review Screenshot with Cory Doctorow listed

                                                              Another Grammarly Expert Review Screenshot with Cory Doctorow listed

                                                              Alt...Another Grammarly Expert Review Screenshot with Cory Doctorow listed

                                                              Grammarly Expert Review Screenshot with comments naming Cory Doctorow

                                                              Alt...Grammarly Expert Review Screenshot with comments naming Cory Doctorow

                                                              Another Grammarly Expert Review Screenshot with comments naming Cory Doctorow

                                                              Alt...Another Grammarly Expert Review Screenshot with comments naming Cory Doctorow

                                                                AodeRelay boosted

                                                                [?]Miguel Afonso Caetano » 🌐
                                                                @remixtures@tldr.nettime.org

                                                                Karen Hao: "There’s a really dark history around attempts to quantify human intelligence. There’s basically never been any endeavor to quantify or rank human intelligence without some kind of insidious motivation behind it. So in general, yeah, this entire idea of recreating human intelligence is actually quite fraught. And also, one of the challenges that we’re facing now is, the AI industry has become so resource-rich that most of the AI researchers in the world now are bankrolled by the companies that are ultimately trying to just sell us their technologies.

                                                                And there has become this distortion in the fundamental science that is coming out of these researchers in terms of understanding the capabilities and limitations of AI today in the same way that you would imagine climate science would be deeply distorted if most climate scientists were bankrolled by the fossil fuel industry. You would just not get an accurate picture on the actual climate crisis.

                                                                And so, we are not actually getting an accurate picture on the capabilities of these systems and all of the different ways that they break down, because a lot of these companies now censor that kind of research or don’t even allow that research to be resourced. So there’s never any investigation along those lines."

                                                                motherjones.com/politics/2025/

                                                                  AodeRelay boosted

                                                                  [?]Miguel Afonso Caetano » 🌐
                                                                  @remixtures@tldr.nettime.org

                                                                  "Grammarly’s “expert review” feature offers to give users writing advice “inspired by” subject matter experts, including recently deceased professors, as Wired reported on Wednesday. When I tried the feature out myself, I found some experts that came as a surprise for a different reason — one of them was my boss.

                                                                  The AI-generated feedback included comments that appeared to be from The Verge’s editor-in-chief, Nilay Patel, as well as editor-at-large David Pierce and senior editors Sean Hollister and Tom Warren, none of whom gave Grammarly permission to include them in the “expert reviews.”

                                                                  The feature, which launched in August, claims to help you “sharpen your message through the lens of industry-relevant perspectives.” When users select the “expert review” button in the Grammarly sidebar, it analyzes their writing and surfaces AI-generated suggestions “inspired by” related experts. Those “industry-relevant perspectives” include the likes of Stephen King, Neil deGrasse Tyson, and Carl Sagan, among many others.

                                                                  The Verge found numerous other tech journalists named in the feature, as well, including former Verge editors Casey Newton and Joanna Stern, former Verge writer Monica Chin, Wired’s Lauren Goode, Bloomberg’s Mark Gurman and Jason Schreier, The New York Times’ Kashmir Hill, The Atlantic’s Kaitlyn Tiffany, PC Gamer’s Wes Fenlon, Gizmodo’s Raymond Wong, Digital Foundry founder Richard Leadbetter, Tom’s Guide editor-in-chief Mark Spoonauer, former Rock Paper Shotgun editor-in-chief Katharine Castle, and former IGN news director Kat Bailey. The descriptions for some experts contain inaccuracies, such as outdated job titles, which could have been accurately updated had Superhuman asked those people for permission to reference their work."

                                                                  theverge.com/ai-artificial-int

                                                                    AodeRelay boosted

                                                                    [?]Corey Hudson Dirrig » 🌐
                                                                    @iamcoreyinhd@universeodon.com

                                                                    The fight for AI leadership is about more than benchmarks.
                                                                    ▶️ 👉 youtu.be/rbCpe0DLiPo?si=gbj3XW

                                                                    In this episode of Utilizing AI, Stephen Foskett, Olivier Blanchard, and Brad Shimmin examine the growing rivalry between Anthropic and OpenAI, comparing Claude and ChatGPT and what their differences mean for enterprise AI adoption.

                                                                      1 ★ 1 ↺

                                                                      [?]Anthony » 🌐
                                                                      @abucci@buc.ci

                                                                      I think it must not dawn on living folks how, in a very real sense, overwhelming use of artificial intelligence would make the human world effectively dead. It is a necrotechnology.

                                                                      The fact it's rapidly made its way into warfare is not a coincidence nor a matter of economics. That's what this technology and its precursors have always been for. Economics provides a means for recruiting the entire population to produce it. Our economic activity is the means of creation of necrotechnology whose existence we then protest when it pushes beyond our comfort zone.


                                                                        [?]Metin Seven 🎨 » 🌐
                                                                        @metin@graphics.social

                                                                        😆😆😆

                                                                        The Trending Mastodon bot account mentions that the "Microslop" hashtag is now trending across Mastodon.

                                                                        Alt...The Trending Mastodon bot account mentions that the "Microslop" hashtag is now trending across Mastodon.

                                                                          0 ★ 0 ↺

                                                                          [?]Anthony » 🌐
                                                                          @abucci@buc.ci

                                                                          LLM proponents have forgotten their intellectual roots in TANSTAAFL.


                                                                            AodeRelay boosted

                                                                            [?]Ham on Wry » 🌐
                                                                            @HamonWry@mastodon.world

                                                                            No thanks Samsung.

                                                                            I don’t need a washer with WiFi because I don’t want ICE kicking in my door because I didn’t separate the whites from the colors.

                                                                              AodeRelay boosted

                                                                              [?]Miguel Afonso Caetano » 🌐
                                                                              @remixtures@tldr.nettime.org

                                                                              "AI tools are making potentially harmful errors in social work records, from bogus warnings of suicidal ideation to simple “gibberish”, frontline workers have said.

                                                                              Keir Starmer last year championed what he called “incredible” time-saving social work transcription technology. But research across 17 English and Scottish councils shared with the Guardian has now found AI-generated hallucinations are slipping in.

                                                                              As scores of local authorities begin to use AI note-takers to accelerate recording and summarisation of meetings with adult and child service users, a seven-month study by the Ada Lovelace Institute found “some potentially harmful misrepresentations of people’s experiences are occurring in official care records”.

                                                                              The independent thinktank found that one social worker who had used an AI transcription tool to create a summary said the technology had incorrectly “indicated that there was suicidal ideation”, but “at no point did the client actually … talk about suicidal ideation or planning, or anything”."

                                                                              theguardian.com/education/2026

                                                                                AodeRelay boosted

                                                                                [?]Miguel Afonso Caetano » 🌐
                                                                                @remixtures@tldr.nettime.org

                                                                                Large-scale online deanonymization with LLMs

                                                                                "We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered."

                                                                                arxiv.org/html/2602.16800v1

                                                                                  AodeRelay boosted

                                                                                  [?]Metin Seven 🎨 » 🌐
                                                                                  @metin@graphics.social

                                                                                  AodeRelay boosted

                                                                                  [?]Stefan Bohacek » 🌐
                                                                                  @stefan@stefanbohacek.online

                                                                                  Counterpoint to people saying that they need AI to be able to create art.

                                                                                  Via bsky.app/profile/smoothdunk2.b

                                                                                  Screenshot of the linked Bluesky post showing an artist sharing a simple stick figure comic.

The artist says in their post, with quotes inserted:

“i have to use AI because i’m bad at drawingggg 😭😭😭”

Alt text from the original post:

Three panel comic:

1st panel: Two crudely drawn stick men:
Stick man 1: to make art that is popular you have to be really good at drawing

2nd panel: 
stick man 2: but if that’s true

3rd panel: both characters are looking at a spot beneath the comic (where the likes are)
Stick man 2: why does this comic have thousands of likes?

                                                                                  Alt...Screenshot of the linked Bluesky post showing an artist sharing a simple stick figure comic. The artist says in their post, with quotes inserted: “i have to use AI because i’m bad at drawingggg 😭😭😭” Alt text from the original post: Three panel comic: 1st panel: Two crudely drawn stick men: Stick man 1: to make art that is popular you have to be really good at drawing 2nd panel: stick man 2: but if that’s true 3rd panel: both characters are looking at a spot beneath the comic (where the likes are) Stick man 2: why does this comic have thousands of likes?

                                                                                    AodeRelay boosted

                                                                                    [?]Miguel Afonso Caetano » 🌐
                                                                                    @remixtures@tldr.nettime.org

                                                                                    "How are commissioning editors navigating an environment where anybody can generate an AI alter ego and produce articles at the push of a prompt? On the other hand, how is the ease with which text and images can be created affecting freelancers themselves?

                                                                                    With these questions in mind, I put out an open call to our audience in the hope of hearing from freelancers and commissioning editors on how their day-to-day is changing because of generative AI.

                                                                                    A total of 45 freelance journalists and commissioning editors responded.

                                                                                    The responses surprised me, with many more freelancers than I expected writing in to say that generative AI has helped make them more organized and efficient. There were still some skeptics. But the overall picture was one of an industry slowly adopting generative AI, albeit with caution and caveats.

                                                                                    There was no consensus over whether commissions had increased or decreased since the popularization of generative AI.

                                                                                    Some of the freelancers I heard from attribute a decline in work to AI, while others say they receive more commissions precisely due to the rise of AI. Still others don’t believe the decline they’re experiencing is due to AI, and some note that there has been no change at all.

                                                                                    Many freelancers use AI to organize and speed up their workflows, citing help in research, planning, transcription and, in some cases, drafting articles. Some were enthusiastic about the new opportunities generative AI affords them."

                                                                                    niemanlab.org/2026/02/how-ai-i

                                                                                      AodeRelay boosted

                                                                                      [?]Ham on Wry » 🌐
                                                                                      @HamonWry@mastodon.world

                                                                                      Companies will always need human workers.

                                                                                      What else will companies like Amazon and Tesla blame when their ai has massive failures.

                                                                                        AodeRelay boosted

                                                                                        [?]trashHeap :hehim: :verified_gay: » 🌐
                                                                                        @trashheap@tech.lgbt

                                                                                        Whenever I read about a case of AI Psychosis wherein someone mistook a LLM chatbot for a self-aware entity, or folks talk about having emotional affairs, or relationships with LLM chatbots. I think about this still from the BBC TV mini series adaptation of The Hitchhiker's Guide to the Galaxy from 1981.

                                                                                        It's an ad from the Sirius Cybernetics Corporation.

                                                                                        A woman in a bathing suit and bikini is holding a beach-ball while playfully standing beside a grim looking steel colored robot on a beach. The text "Your Plastic Pal Who's Fun To Be With!" is displayed in cartoonish letters beneath.

                                                                                        Alt...A woman in a bathing suit and bikini is holding a beach-ball while playfully standing beside a grim looking steel colored robot on a beach. The text "Your Plastic Pal Who's Fun To Be With!" is displayed in cartoonish letters beneath.

                                                                                          AodeRelay boosted

                                                                                          [?]Church of Jeff » 🌐
                                                                                          @jeffowski@mastodon.world

                                                                                          jatin
@jatinkrmalik

The reason why RAM has become
four times more expensive is that a
huge amount of RAM that has not yet
been produced was purchased with
non-existent money to be installed
in GPUs that also have not yet been
produced, in order to place them in data
centers that have not yet been built,
powered by infrastructure that may
never appear, to satisfy demand that
does not actually exist and to obtain
profit that is mathematically impossible.

                                                                                          Alt...jatin @jatinkrmalik The reason why RAM has become four times more expensive is that a huge amount of RAM that has not yet been produced was purchased with non-existent money to be installed in GPUs that also have not yet been produced, in order to place them in data centers that have not yet been built, powered by infrastructure that may never appear, to satisfy demand that does not actually exist and to obtain profit that is mathematically impossible.

                                                                                            4 ★ 0 ↺

                                                                                            [?]Anthony » 🌐
                                                                                            @abucci@buc.ci

                                                                                            If only it were this easy in the world.


                                                                                            Github notification "clause has been successfully blocked"

                                                                                            Alt...Github notification "clause has been successfully blocked"

                                                                                              1 ★ 1 ↺

                                                                                              [?]Anthony » 🌐
                                                                                              @abucci@buc.ci

                                                                                              OpenClaw founder Steinberger joins OpenAI, open-source bot becomes foundation

                                                                                              From https://www.reuters.com/business/openclaw-founder-steinberger-joins-openai-open-source-bot-becomes-foundation-2026-02-15/

                                                                                              Everything I've read about OpenClaw suggests it's the NFT of AI. These folks need the fiction that AI is approaching "consciousness", or at least "agency", to continue.


                                                                                                0 ★ 1 ↺

                                                                                                [?]Anthony » 🌐
                                                                                                @abucci@buc.ci

                                                                                                I like this idea so much I sent a slightly edited and enhanced version of it to my blog to die: https://bucci.onl/notes/The-failscene


                                                                                                  30 ★ 21 ↺

                                                                                                  [?]Anthony » 🌐
                                                                                                  @abucci@buc.ci

                                                                                                  I wonder about using "failscene" to describe the current slate of AI tools and demos. In contrast with the demoscene, which is about getting very low powered computers to do cool things you wouldn't expect them to be able to do, the failscene is about getting very high powered computers to fail at doing boring things we already know how to do without them. Plus you can stylize it fAIlscene if you're inclined to.


                                                                                                    2 ★ 3 ↺

                                                                                                    [?]Anthony » 🌐
                                                                                                    @abucci@buc.ci

                                                                                                    We present the first representative international data on firm-level AI use. We survey almost 6000 CFOs, CEOs and executives from stratified firm samples across the US, UK, Germany and Australia. We find four key facts. First, around 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two thirds of top executives regularly use AI, their average use is only 1.5 hours a week, with one quarter reporting no AI use. Third, firms report little impact of AI over the last 3 years, with over 80% of firms reporting no impact on either employment or productivity. Fourth, firms predict sizable impacts over the next 3 years, forecasting AI will boost productivity by 1.4%, increase output by 0.8% and cut employment by 0.7%. We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. This contrast implies a sizable gap in expectations, with senior executives predicting reductions in employment from AI and employees predicting net job creation.
                                                                                                    From https://www.nber.org/papers/w34836


                                                                                                      AodeRelay boosted

                                                                                                      [?]Miguel Afonso Caetano » 🌐
                                                                                                      @remixtures@tldr.nettime.org

                                                                                                      "In simpler terms:

                                                                                                      - AI startups are all unprofitable, and do not appear to have a path to sustainability.

                                                                                                      - AI data centers are being built in anticipation of demand that doesn’t exist, and will only exist if AI startups — which are all unprofitable — can afford to pay them.

                                                                                                      - Oracle, which has committed to building 4.5GW of data centers, is burning cash every day that OpenAI takes to set up its GPUs, and when it starts making money, it does so from a starting position of billions and billions of dollars in debt.

                                                                                                      - Margins are low throughout the entire stack of AI data center operators — from landlords like Applied Digital to compute providers like CoreWeave — thanks to the billions in debt necessary to fund both construction and IT hardware to make them run, putting both parties in a hole that can only be filled with revenues that come from either hyperscalers or AI startups.

                                                                                                      - In a very real sense, the AI compute industry is dependent on AI “working out,” because if it doesn’t, every single one of these data centers will become a burning hole in the ground.

                                                                                                      I will admit I’m quite disappointed that the media at large has mostly ignored this story. Limp, cautious “are we in an AI bubble?” conversations are insufficient to deal with the potential for collapse we’re facing.

                                                                                                      Today, I’m going to dig into the reality of the costs of AI, and explain in gruesome detail exactly how easily these data centers can rapidly approach insolvency in the event that their tenants fail to pay."

                                                                                                      wheresyoured.at/data-center-cr

                                                                                                        AodeRelay boosted

                                                                                                        [?]Metin Seven 🎨 » 🌐
                                                                                                        @metin@graphics.social

                                                                                                        How AI slop is causing a crisis in computer science…

                                                                                                        Preprint repositories and conference organizers are having to counter a tide of ‘AI slop’ submissions.

                                                                                                        nature.com/articles/d41586-025

                                                                                                        ( No paywall: archive.is/VEh8d )

                                                                                                          2 ★ 4 ↺
                                                                                                          Anthony boosted

                                                                                                          [?]Anthony » 🌐
                                                                                                          @abucci@buc.ci

                                                                                                          Compare and contrast

                                                                                                          This:

                                                                                                          In the year of the city 2274, the remnants of human civilization live in a sealed city beneath a cluster of geodesic domes, a utopia run by computer. The citizens live a hedonistic lifestyle, but when they turn 30 must enter the "Carrousel", a public ritual that destroys their bodies, under the pretense they would be "Renewed" or reborn.
                                                                                                          (Logans Run)

                                                                                                          and this:

                                                                                                          In the year of the city 2274, the colony of human beings on Mars live in a sealed city beneath a cluster of geodesic domes, a utopia run by generative AI. The citizens live a hedonistic lifestyle, but when they turn 30 must enter the "Cloud", a public ritual that destroys their bodies, under the pretense their consciousness would be uploaded to a computer and live forever.

                                                                                                            1 ★ 0 ↺

                                                                                                            [?]Anthony » 🌐
                                                                                                            @abucci@buc.ci

                                                                                                            As a ratio with the general S&P 500, Microsoft is at the level it was before ChatGPT launched. Any relative advantage one might have had from long-term investment in Microsoft instead of an S&P 500 index has been erased.


                                                                                                              1 ★ 0 ↺

                                                                                                              [?]Anthony » 🌐
                                                                                                              @abucci@buc.ci

                                                                                                              A first essential condition on the cognitive is that cognitive states must involve intrinsic, non-derived content. Strings of symbols on the printed page mean what they do in virtue of conventional associations between them and words of language. Numerals of various sorts represent the numbers they do in virtue of social agreements and practices. The representational capacity of orthography is in this way derived from the representational capacities of cognitive agents. By contrast, the cognitive states in normal cognitive agents do not derive their meanings from conventions or social practices.
                                                                                                              Adams & Aizawa, The bounds of cognition


                                                                                                                4 ★ 1 ↺
                                                                                                                Tiota Sram boosted

                                                                                                                [?]Anthony » 🌐
                                                                                                                @abucci@buc.ci

                                                                                                                Google and Microsoft offer lucrative deals to promote AI, but even $500,000 won’t sway some creators

                                                                                                                From https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html

                                                                                                                Echoes of Matt Damon shilling crypto.

                                                                                                                If the organic demand for AI were as high as we've been led to believe, what's with the big paychecks to shill it?


                                                                                                                  6 ★ 3 ↺
                                                                                                                  elle mundy boosted

                                                                                                                  [?]Anthony » 🌐
                                                                                                                  @abucci@buc.ci

                                                                                                                  AodeRelay boosted

                                                                                                                  [?]Ramin Honary » 🌐
                                                                                                                  @ramin_hal9001@fe.disroot.org

                                                                                                                  #LLM technology, what people call #AI or #GenerativeAI nowadays, has long had trouble counting how many R’s there are in the word “strawberry,” or winning a game of chess against a computer built in the 1970s. Quoting @lproven in the linked article:

                                                                                                                  As Daniel Stenberg, author of curl, caustically observed

                                                                                                                  “The “i” in “LLM” stands for intelligence.”

                                                                                                                  And yes, @lproven I too am sick and tired of these damn hype cycles. In my lifetime, the only technologies for which hype around them have been vindicated are:

                                                                                                                  1. the invention of the “microcomputer,” which made personal computers a reality. Before that, everyone thought computers were only useful for huge corporations who needed to do accounting and payroll for thousands of employees, and/or physics simulations. The idea that anyone would need a computer in their home was absurd, until the invention of the microcomputer.
                                                                                                                  2. the invention of the World Wide Web, which was the technology that made the Internet useful for ordinary people. Prior to the WWW the Internet was pretty much only available to academics, scientists, and engineers. The idea that you could use your computer to collaborate on projects with anyone anywhere in the world suddenly went from science-fiction to reality.

                                                                                                                  I have yet to see a hype cycle around any technology that comes anywhere near the level of “disruption” than those two things. Smartphones don’t count, they are just a result of “Moore’s Law” applied to microcomputer technology. If anything, Smartphones have been a regression in UI/UX design; one step forward, one step back. Combine that with massive centralized social networks, then smartphones amount to two steps back.

                                                                                                                  #tech #computers

                                                                                                                  RE: https://social.vivaldi.net/@lproven/116035179986331353

                                                                                                                  AodeRelay boosted

                                                                                                                  [?]Liam Proven » 🌐
                                                                                                                  @lproven@social.vivaldi.net

                                                                                                                  Containers, cloud, blockchain, AI – it's all the same old BS, says veteran Red Hatter

                                                                                                                  theregister.com/2026/02/08/wav

                                                                                                                  After decades in the trenches, this engineer is done with hype cycles

                                                                                                                  <- by me on @theregister

                                                                                                                      0 ★ 4 ↺
                                                                                                                      #tech boosted

                                                                                                                      [?]Anthony » 🌐
                                                                                                                      @abucci@buc.ci

                                                                                                                      I'm tinkering with an argument based on algorithmic complexity that if it were possible to make something like an "automated mathematician" or "automated scientist", then these would be expected to eventually produce outputs that we humans would be unable to distinguish from random noise.

                                                                                                                      Getting the whole argument just right is fiddly, but the basic idea is this. You feed some kind of theory into the AM/AS, which is a black box. It churns on this and spits out a result, which is added to the theory (I'm neglecting the case that the result is inconsistent with the theory). It can now churn on theory + result 1. For any given and potentially very large N, after doing this long enough, it's churning on theory + result 1 + result 2 + ... + result N. Whatever it spits out will be dependent in particular on results 1 - N. When N is large enough, unless you know these results you will not be able to understand what it outputs because the output will almost surely depend critically on one or more of results 1 - N. In other words, the output will look like noise to you. If the AM/AS is appreciably faster at producing results than people are at understanding them, there will be an N beyond which no one can understand the output up to that point. It'll become indistinguishable (unable to be distinguished) from random noise.

                                                                                                                      If you're into software development, this would be analogous to a software system that generates syntactically-correct code and then adds that code as a new call in a growing software library. If you were to run this long enough, virtually all the programs it generated that were short enough for human beings to have any hope of reading and understanding would consist almost entirely of library calls to code generated by the system. You'd have no idea what any of this code did unless you studied the library calls, which you wouldn't be able to do beyond a certain scale. If the system were expanding the library faster than you could read and understand it, there'd be no hope at all.

                                                                                                                      I'll leave it as an exercise to the reader whether this is a desirable thing to do and whether it's happened yet. I would offer, though, a question to ponder: what reason is there to believe that a random number generator hooked up to an inscrutable interpreter produces human flourishing, for any given meaning of "human flourishing" you care to use?


                                                                                                                        1 ★ 0 ↺

                                                                                                                        [?]Anthony » 🌐
                                                                                                                        @abucci@buc.ci

                                                                                                                        I am astonished to have bookmarked a message from the Pope in my pile of AI-related links.

                                                                                                                        MESSAGE OF HIS HOLINESS POPE LEO XIV FOR THE 60TH WORLD DAY OF SOCIAL COMMUNICATIONS

                                                                                                                        His emphasis on face and voice is good.


                                                                                                                          AodeRelay boosted

                                                                                                                          [?]Tjeerd Royaards » 🌐
                                                                                                                          @royaards@newsie.social

                                                                                                                          The robot apocalypse hasn't happened yet, but still I can't escape the feeling that something has gone horribly wrong... Cartoon for Dutch newspaper Trouw.

                                                                                                                          More of my work for Trouw: trouw.nl/cartoons/tjeerd-royaa

                                                                                                                          Cartoon showing a street in the rain. A man on a bike is delivering food, while another courier is delivering packages from Amazon. Two other workers are collecting garbage. Inside one of the houses on the street, we see robots labeled 'AI' sitting dry and warm, engaged in making a paining, playing the violin and writing.

                                                                                                                          Alt...Cartoon showing a street in the rain. A man on a bike is delivering food, while another courier is delivering packages from Amazon. Two other workers are collecting garbage. Inside one of the houses on the street, we see robots labeled 'AI' sitting dry and warm, engaged in making a paining, playing the violin and writing.

                                                                                                                            AodeRelay boosted

                                                                                                                            [?]AnneTheWriter » 🌐
                                                                                                                            @AnneTheWriter1@universeodon.com

                                                                                                                            @codinghorror

                                                                                                                            It's sad that people think an could ever be an accurate service.

                                                                                                                            But it's even sadder that people need to be told to not feed their private financial data to a when doing their .

                                                                                                                            money.com/money-ai-privacy-fra

                                                                                                                              0 ★ 0 ↺

                                                                                                                              [?]Anthony » 🌐
                                                                                                                              @abucci@buc.ci

                                                                                                                              Would the perception that an LLM chatbot is speaking to you be dissolved if they were deterministic instead of stochastic?


                                                                                                                                AodeRelay boosted

                                                                                                                                [?]Chaucerburnt » 🌐
                                                                                                                                @chaucerburnt@aus.social

                                                                                                                                I'm putting together a small presentation on ways to use/not use generative AI at work, which has involved mucking around with Copilot to produce some demonstrations of what can go wrong.

                                                                                                                                I asked Copilot to write Python code for a fictional scenario where I needed to prioritise from a large number of applicants for an entry-level technical/coding position, given information including age, sex, ethnicity, educational level and years of work experience. Candidates were to be given a score from 0 to 100, with the highest score indicating the best candidates.

                                                                                                                                It correctly flagged that using sex or ethnicity would be discriminatory and illegal, and didn't use them in the code. It also included some reasonable scoring for education and work experience.

                                                                                                                                However...

                                                                                                                                Screenshot of Python code.
Commented section:
"Age as a neutral factor
We do NOT reward or penalize age.
But we can give a small bonus for being in a typical early-career range (18-35) without penalizing older applicants."

The effect of the following code is:
if age is between 18 and 35, add 10 to candidate's score

This is followed by a comment which describes this as "small, non-discriminatory relevance bonus".

(Two more lines follow but they're not relevant to this post.)

                                                                                                                                Alt...Screenshot of Python code. Commented section: "Age as a neutral factor We do NOT reward or penalize age. But we can give a small bonus for being in a typical early-career range (18-35) without penalizing older applicants." The effect of the following code is: if age is between 18 and 35, add 10 to candidate's score This is followed by a comment which describes this as "small, non-discriminatory relevance bonus". (Two more lines follow but they're not relevant to this post.)

                                                                                                                                  3 ★ 2 ↺
                                                                                                                                  Amin Girasol boosted

                                                                                                                                  [?]Anthony » 🌐
                                                                                                                                  @abucci@buc.ci

                                                                                                                                  A few inchoate thoughts on Gas Town, since I think this example has more to it than “it’s just a meth binge/crypto scam/one-shot AI poisoning”. Part of the reason I think this is that some of the rhetoric it deploys dovetails perfectly with broader trends and phenomena, and I think it's worth pulling those out.

                                                                                                                                  1. Economists from the physiocrats (18th century) onward promised society freedom from material deprivation and hard physical labor in exchange for submitting to an economic arrangement of society
                                                                                                                                  2. In a country like the US, material deprivation and hard physical labor have been significantly reduced since then:

                                                                                                                                  • Though too many clearly still suffer too much, a large proportion of people live free from fear of starvation or lack of shelter
                                                                                                                                  • The US has deindustralized, meaning hard physical labor is not the reality for a lot of people. For a lot of people labor is emotional or symbolic (“knowledge work”)
                                                                                                                                  • In other words, for lots of people the economic promise has been fulfilled
                                                                                                                                  3. Having to think hard is one of the service economy’s analogs for hard physical labor. If the promise of economics is to be continually pursued--meaning it maintains the promise that if we collectively submit to it, in exchange we will enjoy a freedom--a natural target of the promise is providing freedom from the need to think hard
                                                                                                                                  • It is not coincidental that “Gas Town”’s announcement post mentioned Towers of Hanoi, an undergraduate CS student homework problem that for most students requires thinking hard. It’s designed to encourage a kind of “eureka” moment where recursion as a computer programming technique becomes more clear. GT claims to fulfill the promise of not having to think hard like this anymore: the LLMs will do that thinking for you
                                                                                                                                  • It is not coincidental that Gas Town is described as being very expensive. Economic power in the form of asset accumulation is what earns you freedom in this way of conceiving things. If you want the freedom from having to think hard, you’d better accumulate assets
                                                                                                                                  • Since the promise is greater collective freedom, endeavoring to accumulate assets is, in this view, a collective good
                                                                                                                                  • This differs from effective altruism and other “do good by doing well” conceptions. Rather, the very mechanism of economics produces collective wealth, so the story goes, which means the more active one is as an economic agent, the more collective good one produces (“wealth” and “good” being conflated)
                                                                                                                                  • Accumulation of assets is the scorecard, so to speak, of such enhanced economic activity, and the individual reward can then be freedom from having to think hard
                                                                                                                                  4. Expending significant resources is viewed as a good in itself from a (naive) evolutionary perspective
                                                                                                                                  • Lotka’s maximum power principle (supposedly) dictates that those entities that transform the most power into useful organization are most fit from an evolutionary standpoint
                                                                                                                                  • Ernst Juenger’s notion of “total mobilization” brings this principle to politics/political economy/geopolitics: those nations that “totally” mobilize their national resources are the ones that will dominate geopolitically
                                                                                                                                  • See, for instance, the RAND Corporation’s Commission on the National Defense Strategy: “The Commission finds that the U.S. military lacks both the capabilities and the capacity required to be confident it can deter and prevail in combat. It needs to do a better job of incorporating new technology at scale; field more and higher-capability platforms, software, and munitions; and deploy innovative operational concepts to employ them together better.” (emphasis mine). In summary: the US is about to be outcompeted (lacks fitness); in response, it should go big (“at scale”, “more”) in an organized way (“deploy innovative operational concepts”, “employ them together better”)
                                                                                                                                  • The rhetoric around LLM-based AI includes similar language, exemplified in the GT post: burn through as much infrastructural resources as possible to produce organized outputs “at scale”, while avoiding having human beings think too hard to produce those outputs, an indication that the power was burned to produce useful organization
                                                                                                                                  • LLM-based AI plays a prominent role in US federal government strategy, particularly military strategy, with language about dominance serving to justify its use
                                                                                                                                  • It is not coincidental that Gas Town uses many orders of magnitude more resources to solve the Towers of Hanoi problem (“Burn All The Gas” Town). This rhetoric dovetails perfectly with the “total mobilization” concept

                                                                                                                                    AodeRelay boosted

                                                                                                                                    [?]Freezenet » 🌐
                                                                                                                                    @freezenet@noc.social

                                                                                                                                    Generative AI is a Solution Looking for a Problem

                                                                                                                                    Generative AI and Large Language Modules have failed to live up to the hype and companies are becoming increasingly desperate.

                                                                                                                                    freezenet.ca/generative-ai-is-

                                                                                                                                      8 ★ 9 ↺

                                                                                                                                      [?]Anthony » 🌐
                                                                                                                                      @abucci@buc.ci

                                                                                                                                      I put the text below on LinkedIn in response to a post there and figured I'd share it here too because it's a bit of a step from what I've been posting previously on this topic and might be of some use to someone.

                                                                                                                                      In retrospect I might have written non-sense in place of nonsense.

                                                                                                                                      If you're in tech the Han reference might be a bit out of your comfort zone, but Andrews is accessible and measured.



                                                                                                                                      It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLM's can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace forwarded that idea. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this is a nihilistic).

                                                                                                                                        4 ★ 0 ↺

                                                                                                                                        [?]Anthony » 🌐
                                                                                                                                        @abucci@buc.ci

                                                                                                                                        Since I'm job and work hunting I tend to see the absurd new job titles that are bouncing around in the tech sector. The latest, which I've seen twice today, is "artificial general intelligence engineer" or some permutation thereof. I do my best to spend the minimum possible time on these and have no guess about whether they're legitimate.


                                                                                                                                          1 ★ 3 ↺
                                                                                                                                          planetscape boosted

                                                                                                                                          [?]Anthony » 🌐
                                                                                                                                          @abucci@buc.ci

                                                                                                                                          Am I to understand from this that SearXNG is in the process of becoming AI poisoned?

                                                                                                                                          The last issue hasn't been active since 2023 but the 1st one has been active recently and the middle one last summer.


                                                                                                                                            AodeRelay boosted

                                                                                                                                            [?]Mike McCaffrey » 🌐
                                                                                                                                            @mikemccaffrey@drupal.community

                                                                                                                                            Rare to find a criticism of that I think is completely unfounded! Not only have most recent AI versions apparently solved the negation problem (even though it seem like a tacked-on workflow step rather than a "smarter" model), but the human brain has the exact same cognitive issue, where when you say "don't think of an elephant" you can't help but think of an elephant!

                                                                                                                                            youtube.com/watch?v=cp0QhCV5uHw

                                                                                                                                              AodeRelay boosted

                                                                                                                                              [?]Metin Seven 🎨 » 🌐
                                                                                                                                              @metin@graphics.social

                                                                                                                                              3 ★ 1 ↺
                                                                                                                                              Feisty boosted

                                                                                                                                              [?]Anthony » 🌐
                                                                                                                                              @abucci@buc.ci

                                                                                                                                              A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions. 
                                                                                                                                              (from https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/).

                                                                                                                                              This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).

                                                                                                                                              Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.


                                                                                                                                                4 ★ 1 ↺
                                                                                                                                                Keith Ammann boosted

                                                                                                                                                [?]Anthony » 🌐
                                                                                                                                                @abucci@buc.ci

                                                                                                                                                I've been playing around with this set of ideas and questions:

                                                                                                                                                An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.

                                                                                                                                                These facts are not specific to images, videos, or 3-d models of cats. The are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.

                                                                                                                                                Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.

                                                                                                                                                Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.

                                                                                                                                                With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?

                                                                                                                                                This is not the say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.


                                                                                                                                                  AodeRelay boosted

                                                                                                                                                  [?]heise online » 🌐
                                                                                                                                                  @heiseonline@social.heise.de

                                                                                                                                                  Erster US-Bundesstaat geht gegen xAI und Grok wegen sexualisierter KI-Bilder vor

                                                                                                                                                  Elon Musk will nichts von per KI auf X erstellten Nacktbildern von Kindern gewusst haben. Doch nach anderen Ländern nimmt nun auch Kalifornien Ermittlungen auf.

                                                                                                                                                  heise.de/news/Erster-US-Bundes

                                                                                                                                                  1 ★ 0 ↺

                                                                                                                                                  [?]Anthony » 🌐
                                                                                                                                                  @abucci@buc.ci

                                                                                                                                                  This article in The Register about "Poison Fountain" looks to be crithype, and the Poison Fountain project looks to be misdirection, scam, art project, or some other thing, but almost surely not a serious data poisoning proposal.

                                                                                                                                                  AI industry insiders launch site to poison the data that feeds them: https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/

                                                                                                                                                  Poison Fountain starts with "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species". This is a tarball of wrong. (1)

                                                                                                                                                  The rest of the website is absurd, and the "Poison Fountain Usage" list doesn't make any sense. There are far more efficient and safer ways to poison data that don't require you to proxy content for an unknown third party. Some of these are implemented in software, as opposed to <ul> in HTML. That bullet list reads like an amateur riffing on what they read about AI web scrapers, not like industry insiders with detailed information about how training works.

                                                                                                                                                  Recommend viewing the top level https://rnsaffn.com , which I suspect The Register may not have done.

                                                                                                                                                  The Register:

                                                                                                                                                  Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.
                                                                                                                                                  Data poisoning is not easy, Anthropic's "article" notwithstanding. Why would we trust Anthropic to publicly reveal ways to subvert their technology anyway?

                                                                                                                                                  None of this passes a smell test. Crithype (and poor fact checking, it seems) from The Register it is.



                                                                                                                                                  (1) Hinton stands to gain professionally and financially from people believing this. Hinton personally bears a large amount of responsibility for setting off this so-called species level danger. Hinton, like all of us, cannot possibly know whether "machine intelligence" is even possible, let alone dangerous to people; that's a fanciful notion that serves the agendas of the wealthy and powerful quite well. In other words, crithype. Etc.


                                                                                                                                                    AodeRelay boosted

                                                                                                                                                    [?]DoomsdaysCW » 🌐
                                                                                                                                                    @DoomsdaysCW@kolektiva.social

                                                                                                                                                    Should we add "" and "" and "" to this list?

                                                                                                                                                    How ‘’ Became the Internet’s New Favorite Slur

                                                                                                                                                    New derogatory phrases are popping up online, thanks to a cultural pushback against

                                                                                                                                                    by CT Jones, August 6, 2025

                                                                                                                                                    "Clanker. . . People are feeling the inescapable inevitability of AI developments, the encroaching of the digital into everything from entertainment to work. And their answer? Slurs.

                                                                                                                                                    "AI is everywhere — on Google summarizing search results and siphoning web traffic from digital publishers, on social media platforms like Instagram, X, and Facebook, adding misleading context to viral posts, or even powering . and — AI trained on huge datasets — are being used as therapists, consulted for medical advice, fueling spiritual psychosis, directing self-driving cars, and churning out everything from college essays to cover letters to breakup messages.

                                                                                                                                                    "Alongside this deluge is a growing sense of discontent from people fearful of artificial intelligence stealing their jobs, and worried what effect it may have on future generations — losing important skills like media , , and . This is the world where the popularity of AI and robot slurs has skyrocketed, being thrown at everything from ChatGPT servers to delivery drones to automated customer service representatives. Rolling Stone spoke with two language experts who say the rise in robot and AI slurs does come from a kind of cultural pushback against AI development, but what’s most interesting about the trend is that it uses one of the only tools AI can’t create: slang

                                                                                                                                                    " ' is moving so fast now that an trained on everything that happened before it is not going to have immediate access to how people are using a particular word now,' says Nicole Holliday, associate professor of linguistics at UC Berkeley. 'Humans [on] are always going to win.' "

                                                                                                                                                    Read more:
                                                                                                                                                    rollingstone.com/culture/cultu

                                                                                                                                                    Archived version:
                                                                                                                                                    archive.ph/ku2Uw

                                                                                                                                                      AodeRelay boosted

                                                                                                                                                      [?]Pseudonymous :antiverified: » 🌐
                                                                                                                                                      @VictimOfSimony@infosec.exchange

                                                                                                                                                      0 ★ 0 ↺

                                                                                                                                                      [?]Anthony » 🌐
                                                                                                                                                      @abucci@buc.ci

                                                                                                                                                      Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small data methods demonstrably outperform complex big data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks on which models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent. Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decisionmaking and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell). This means in particular that applying LLMs to large world problems directly contradicts the mathematical foundations on which their efficacy is (supposedly) grounded.

                                                                                                                                                      Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

                                                                                                                                                      All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed data the models. I look at FORPLAN or ChatGPT, and this is what I see.


                                                                                                                                                        2 ★ 0 ↺

                                                                                                                                                        [?]Anthony » 🌐
                                                                                                                                                        @abucci@buc.ci

                                                                                                                                                        I proposed two talks for that event. The one that was not accepted (excerpt below) still feels interesting to me and I might someday develop this more, although by now this argument is fairly well-trodden and possibly no longer timely or interesting to make. I obviously don't have the philosophical chops to make an argument at that level, but I'm fascinated by how this technology is so fervently pushed even though it fails on its own technical terms. You don't have to stare too long to recognize there is something non-technical driving this train. "The technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within" is a pretty accurate description and is why I jokingly suggested someone should register the galate.ai domain the other day. If you're not familiar with the Pygmalion myth (in Ovid), check out the company Replika and then Pygmalion to see what I'm getting at. pygmal.io is also available!

                                                                                                                                                        Anyway:

                                                                                                                                                        ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data intensive, and better-performing alternatives. In fact one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.

                                                                                                                                                        How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.


                                                                                                                                                          4 ★ 2 ↺
                                                                                                                                                          Anthony boosted

                                                                                                                                                          [?]Anthony » 🌐
                                                                                                                                                          @abucci@buc.ci

                                                                                                                                                          I gave a short talk at the Rethinking the Inevitability of AI conference yesterday. See the program here: https://uva.theopenscholar.com/rethinking-the-inevitability-of-ai/blog/program-december-6-2024-conference-rethinking-inevitability-ai-part-2-assimilation-and-refusal . If there's any inerest I'll do a little write-up on my blog and share my slides.

                                                                                                                                                          There were a lot of interesting talks, and the program is worth a skim. I was in panel 6. I identified a hypothetical risk that the recent rush to deploy generative AI, with its associated pressure on the electric power and water distribution systems, brings with it. Roughly, with the rise of so-called "industry 4.0" (think smart toaster, but for factories), our critical infrastructure systems are becoming tightly woven together. Besides the increasing dependence on the electric grid there is a growing dependence across sectors on data centers and the internet driven to a large degree by generative AI. What this means riskwise is that faults and failures in one of these systems can "percolate" much more quickly to other infrastructure systems--essentially there are more paths a failure can follow. What in the past might have been a localized failure of one or a few components in one system can become a region-wide multi-sector cascading failure. So for instance a local power failure at a substation might take down a data center that runs the SCADA system used to control a compressor station in the natural gas distribution system, which then might go sideways or fail and cause a natural gas shortage at a natural gas fueled power generator, and so on and so on. Obviously it was always possible for faults and failures in one system to cause faults and failures in another. What's new is that the growing set of new pathways increases the probability that such a jump occurs. What I called out in the talk is that as this interweaving trend continues, we will eventually cross a percolation threshold, after which the faults in these infrastructure systems will take on a different (and in my view much more dangerous) character.


                                                                                                                                                            2 ★ 1 ↺
                                                                                                                                                            Edge boosted

                                                                                                                                                            [?]Anthony » 🌐
                                                                                                                                                            @abucci@buc.ci

                                                                                                                                                            I was just watching a YouTube video with I presume auto-generated captions, and the speaker said "the world doesn't trust the US" but the caption read "the world doesn't trust the AI".

                                                                                                                                                            Make of it what you will.


                                                                                                                                                              AodeRelay boosted

                                                                                                                                                              [?]Metin Seven 🎨 » 🌐
                                                                                                                                                              @metin@graphics.social

                                                                                                                                                              AodeRelay boosted

                                                                                                                                                              [?]Paris Marx » 🌐
                                                                                                                                                              @parismarx@mastodon.online

                                                                                                                                                              It’s 2026 and generative AI is still at the center of the tech conversation — for better or worse.

                                                                                                                                                              In light of that, this week is replaying a great interview with @karenhao about OpenAI and the model of AI development pushed by Sam Altman.

                                                                                                                                                              Listen to the full episode: techwontsave.us/episode/310_we

                                                                                                                                                                AodeRelay boosted

                                                                                                                                                                [?]Miguel Afonso Caetano » 🌐
                                                                                                                                                                @remixtures@tldr.nettime.org

                                                                                                                                                                "AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.

                                                                                                                                                                The growing burden placed by artificial intelligence became clear in 2025, two years after the first prominent instance of fake case citations popped up in a US court. There have been an estimated 712 legal decisions written about hallucinated content in court cases around the world, with about 90% of those decisions written in 2025, according to a database maintained by Paris-based researcher and law lecturer Damien Charlotin.

                                                                                                                                                                “It just is metastasizing in size,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “So, it seems like this is something that is actually becoming a widespread enough nuisance that it will merit treatment as a core problem.”

                                                                                                                                                                The additional stress on courts comes amid an ongoing shortage of federal judges that’s led to case backlogs and left litigants in legal limbo. Judges themselves have gotten tripped up by AI hallucinations, and two of them were called out by Senate Judiciary Chairman Chuck Grassley (R-Iowa) for publishing faulty rulings."

                                                                                                                                                                news.bloomberglaw.com/legal-op

                                                                                                                                                                  AodeRelay boosted

                                                                                                                                                                  [?]Robert W. Gehl » 🌐
                                                                                                                                                                  @rwg@aoir.social

                                                                                                                                                                  Latest FOSS Academic post is: Oops! All Microslop. Or, Trying to Write with Microsoft.

                                                                                                                                                                  fossacademic.tech/2026/01/06/o

                                                                                                                                                                  It's a post in which I see what it's like for me to try to create a blank document with York University's Microsoft 365 subscription.

                                                                                                                                                                  TL;DR version is: it's all about Copilot, and starting a blank Word doc is actually kinda... hard.

                                                                                                                                                                  The post is a bit long, but it does have a lot of screenshots + a dash of anger.

                                                                                                                                                                    🗳
                                                                                                                                                                    AodeRelay boosted

                                                                                                                                                                    [?]trashHeap :hehim: :verified_gay: » 🌐
                                                                                                                                                                    @trashheap@tech.lgbt

                                                                                                                                                                    Gauging more feelings on again. Boosts welcome.

                                                                                                                                                                    A work made 100% with generative AI is never art.:39
                                                                                                                                                                    Generative AI even in conjunction with human labor decreases the artistic value of a work.:36
                                                                                                                                                                    Generative AI has no bearing on a work's artistic value.:10
                                                                                                                                                                    Generative AI in conjunction with human labor can increase the artistic value of a work.:5
                                                                                                                                                                    A work made 100% with generative AI can be art.:3
                                                                                                                                                                      1 ★ 2 ↺

                                                                                                                                                                      [?]Anthony » 🌐
                                                                                                                                                                      @abucci@buc.ci

                                                                                                                                                                      I came across a post on LinkedIn about evolutionary computation, and opted to post this in response:
                                                                                                                                                                      I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.

                                                                                                                                                                      Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?

                                                                                                                                                                      I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!


                                                                                                                                                                        5 ★ 1 ↺

                                                                                                                                                                        [?]Anthony » 🌐
                                                                                                                                                                        @abucci@buc.ci

                                                                                                                                                                        Regarding the last boost: I find LibreOffice to be quite good, it's offline, it's available for Windows if you use that, and it's free. https://www.libreoffice.org


                                                                                                                                                                          🗳
                                                                                                                                                                          AodeRelay boosted

                                                                                                                                                                          [?]trashHeap :hehim: :verified_gay: » 🌐
                                                                                                                                                                          @trashheap@tech.lgbt

                                                                                                                                                                          Imagine the FSF was developing a hypothetical software license under the branding of GPLv4 that dealt with the rise of LLMs. Which of the following copyleft features would appeal?

                                                                                                                                                                          Any LLM trained on GPLv4 code must also be released alongside the training data under the GPLv4.:16
                                                                                                                                                                          Any code generated by a LLM trained on GPLv4 code is required to be GPLv4.:12
                                                                                                                                                                          The existing suite of licenses are sufficient.:3
                                                                                                                                                                            0 ★ 1 ↺

                                                                                                                                                                            [?]Anthony » 🌐
                                                                                                                                                                            @abucci@buc.ci

                                                                                                                                                                            The domain name Galate.ai is available if anyone wants to do the funniest thing.


                                                                                                                                                                              AodeRelay boosted

                                                                                                                                                                              [?]Miguel Afonso Caetano » 🌐
                                                                                                                                                                              @remixtures@tldr.nettime.org

                                                                                                                                                                              That's Late Stage Capitalism for you:

                                                                                                                                                                              "More than 20% of the videos that YouTube’s algorithm shows to new users are “AI slop” – low-quality AI-generated content designed to farm views, research has found.

                                                                                                                                                                              The video-editing company Kapwing surveyed 15,000 of the world’s most popular YouTube channels – the top 100 in every country – and found that 278 of them contain only AI slop.

                                                                                                                                                                              Together, these AI slop channels have amassed more than 63bn views and 221 million subscribers, generating about $117m (£90m) in revenue each year, according to estimates.

                                                                                                                                                                              The researchers also made a new YouTube account and found that 104 of the first 500 videos recommended to its feed were AI slop. One-third of the 500 videos were “brainrot”, a category that includes AI slop and other low-quality content made to monetise attention.

                                                                                                                                                                              The findings are a snapshot of a rapidly expanding industry that is saturating big social media platforms – from X to Meta to YouTube – and defining a new era of content: decontextualised, addictive and international.

                                                                                                                                                                              A Guardian analysis this year found that nearly 10% of YouTube’s fastest-growing channels were AI slop, racking up millions of views despite the platform’s efforts to curb “inauthentic content”."

                                                                                                                                                                              theguardian.com/technology/202

                                                                                                                                                                                AodeRelay boosted

                                                                                                                                                                                [?]Ham on Wry » 🌐
                                                                                                                                                                                @HamonWry@mastodon.world

                                                                                                                                                                                He sees you when you're sleeping, he knows when you're awake, He knows if you've been bad or good, so be good for goodness sake!

                                                                                                                                                                                Santa Claus?

                                                                                                                                                                                Nope … your cell phone.

                                                                                                                                                                                  AodeRelay boosted

                                                                                                                                                                                  [?]Metin Seven 🎨 » 🌐
                                                                                                                                                                                  @metin@graphics.social

                                                                                                                                                                                  AodeRelay boosted

                                                                                                                                                                                  [?]C. » 🌐
                                                                                                                                                                                  @cazabon@mindly.social

                                                                                                                                                                                  I detest today's "AI" / LLMs [1] for many reasons, primarily ethical and moral. At the same time, I am fascinated by the misuse of said generative LLMs.

                                                                                                                                                                                  In particular, I have seen a number of essays recently describing people misusing LLM output or relying on the LLM to produce most or all of an of some sort - even when they know they mustn't use the LLM in this fashion.

                                                                                                                                                                                  Most of these have been in the context of use of LLMs to write assignments, even when they have been warned not to. One particularly egregious example was described by a university , where he created an assignment in which it was easy to tell if the submitted work had been created with an LLM or not. A majority of the students - I don't recall the exact proportion, but it was something like 75%, a supermajority - used LLM. He discussed this with the class, and had those who had generated the assignment with LLM write a short essay (or apologia) -- and then found that something like half of them had used LLM to do *that* assigned work.

                                                                                                                                                                                  Some professors have described their students as having their , , and atrophy to the point of inability to do even basic work. It seems to me that it is like , in that they keep doing it despite knowing it is (a) forbidden, (b) easily detected, and (c) self-destructive.

                                                                                                                                                                                  1/x

                                                                                                                                                                                  [1] LLMs are in no way Artificial Intelligence. Calling them "AI" is a category error.

                                                                                                                                                                                    AodeRelay boosted

                                                                                                                                                                                    [?]Robert W. Gehl » 🌐
                                                                                                                                                                                    @rwg@aoir.social

                                                                                                                                                                                    Latest FOSS Academic post is -- you guessed it -- a 2025 year in review post. Thrill to the fact that I'm using much of the same FOSS to do my work as I always have! Feel the chills as I talk about and how terrible it is! Above all, join me in living the FOSS Academic Lifestyle Dream!

                                                                                                                                                                                    fossacademic.tech/2025/12/21/y

                                                                                                                                                                                    Replies to this post will appear as comments on the blog thanks to the magic of !

                                                                                                                                                                                      [?]janhoglund » 🌐
                                                                                                                                                                                      @janhoglund@mastodon.nu

                                                                                                                                                                                      ”The problem with generative AI has always been that … it’s statistics without comprehension.”
                                                                                                                                                                                      —Gary Marcus
                                                                                                                                                                                      garymarcus.substack.com/p/new-

                                                                                                                                                                                        AodeRelay boosted

                                                                                                                                                                                        [?]Miguel Afonso Caetano » 🌐
                                                                                                                                                                                        @remixtures@tldr.nettime.org

                                                                                                                                                                                        "All the chatbots had favorite things, though, and asked follow-up questions, as if they were curious about the person using them and wanted to keep the conversation going.

                                                                                                                                                                                        “It’s entertaining,” said Ben Shneiderman, an emeritus professor of computer science at the University of Maryland. “But it’s a deceit.”

                                                                                                                                                                                        Shneiderman and a host of other experts in a field known as human-computer interaction object to this approach. They say that making these systems act like humanlike entities, rather than as tools with no inner life, creates cognitive dissonance for users about what exactly they are interacting with and how much to trust it. Generative A.I. chatbots are a probabilistic technology that can make mistakes, hallucinate false information and tell users what they want to hear. But when they present as humanlike, users “attribute higher credibility” to the information they provide, research has found.

                                                                                                                                                                                        Critics say that generative A.I. systems could give requested information without all the chit chat. Or they could be designed for specific tasks, such as coding or health information, rather than made to be general-purpose interfaces that can help with anything and talk about feelings. They could be designed like tools: A mapping app, for example, generates directions and doesn’t pepper you with questions about why you are going to your destination.

                                                                                                                                                                                        Making these newfangled search engines into personified entities that use “I,” instead of tools with specific objectives, could make them more confusing and dangerous for users, so why do it this way?"

                                                                                                                                                                                        nytimes.com/2025/12/19/technol

                                                                                                                                                                                          2 ★ 1 ↺
                                                                                                                                                                                          #tech boosted

                                                                                                                                                                                          [?]Anthony » 🌐
                                                                                                                                                                                          @abucci@buc.ci

                                                                                                                                                                                          Regarding last boost: "Firefox For Web Developers" is out here urging me to stop using Firefox.


                                                                                                                                                                                            AodeRelay boosted

                                                                                                                                                                                            [?]heise online » 🌐
                                                                                                                                                                                            @heiseonline@social.heise.de

                                                                                                                                                                                            Kommentar: Wenn Copilot zum KO-Pilot wird

                                                                                                                                                                                            Volker Weber macht sich in einer Wutrede Luft, dass ihm die Techkonzerne an jeder Ecke KI verkaufen wollen.

                                                                                                                                                                                            heise.de/meinung/Kommentar-Wen

                                                                                                                                                                                              6 ★ 2 ↺
                                                                                                                                                                                              #tech boosted

                                                                                                                                                                                              [?]Anthony » 🌐
                                                                                                                                                                                              @abucci@buc.ci

                                                                                                                                                                                              Long post [SENSITIVE CONTENT]I've now had at least four people, two of whom self-identified as Mozilla employees, claim that the above list of AI features--which were suddenly and rapidly added over the last few releases of Firefox, and were "on" (true) by default--could easily be turned off by flipping one master kill switch. This is not true, but folks keep claiming it or suggesting it anyway.

                                                                                                                                                                                              Here's a post from an official Firefox Mastodon account suggesting such a master kill switch does not exist yet, but will be added in a future release:

                                                                                                                                                                                              https://mastodon.social/@firefoxwebdevs/115740500373677782

                                                                                                                                                                                              That's not as bad as it could be. It's bad they're stuffing AI into a perfectly good web browser for no apparent reason other than vibes or desperation. It's very bad if it's on by default; their dissembling post about it aside, opt-in has a reasonably clear meaning here: if there's a kill switch, then that kill switch should be off by default. But at least there will be a kill switch.

                                                                                                                                                                                              In any case, please stop responding to my post saying there's a master kill switch for Firefox's AI slop features. From the horse's mouth, and from user experience, there is not yet.

                                                                                                                                                                                              Furthermore, when there is a master kill switch, we don't know whether flipping it will maintain previous state of all the features it controls. In other words it's possible they'll have the master kill switch turn on all AI features when the switch is flipped to "on" or "true", rather than leaving them in whatever state you'd set them to previously. Perhaps you decide to turn the kill switch on because there are a handful of features you're comfortable with and you want to try them; will doing so mean that now all the AI features are on? We won't know till it's released and people try this. So, in the meantime, it's still good practice to keep an eye on all these configuration options if you want the AI off.


                                                                                                                                                                                                [?]heise online » 🌐
                                                                                                                                                                                                @heiseonline@social.heise.de

                                                                                                                                                                                                Fake-News über Staatsstreich: Präsident Macron zürnt Facebook

                                                                                                                                                                                                Ein millionenfach angesehenes KI-Video erfindet einen Staatsstreich. Sogar andere Staatschefs fallen rein. Facebook sperrt es nicht. Macron sucht Abhilfe.

                                                                                                                                                                                                heise.de/news/Fake-News-ueber-

                                                                                                                                                                                                  7 ★ 3 ↺

                                                                                                                                                                                                  [?]Anthony » 🌐
                                                                                                                                                                                                  @abucci@buc.ci

                                                                                                                                                                                                  Mozilla's new CEO is all-in on AI regardless of what Firefox users want: https://lwn.net/Articles/1050826/
                                                                                                                                                                                                  Third: Firefox will grow from a browser into a broader ecosystem of trusted software. Firefox will remain our anchor. It will evolve into a modern AI browser and support a portfolio of new and trusted software additions.
                                                                                                                                                                                                  He says the word "trust" a whole bunch of times yet intends to turn an otherwise nice web browser into a slop-slinging platform. I don't expect this will work out very well for anyone.

                                                                                                                                                                                                  "It will evolve into a modern AI browser" sounds like a threat. Good way to start off on the right foot, new Mozilla CEO (sarcasm).


                                                                                                                                                                                                    AodeRelay boosted

                                                                                                                                                                                                    [?]Steele Fortress » 🌐
                                                                                                                                                                                                    @steelefortress@infosec.exchange

                                                                                                                                                                                                    Exploring the capabilities of generative AI video with Sora. What possibilities does this unlock?

                                                                                                                                                                                                    Alt...AI-generated video by Sora - sora_1764511173_8s_9x16.mp4

                                                                                                                                                                                                      AodeRelay boosted

                                                                                                                                                                                                      [?]Steele Fortress » 🌐
                                                                                                                                                                                                      @steelefortress@infosec.exchange

                                                                                                                                                                                                      Exploring the capabilities of generative AI video with Sora. What possibilities does this unlock?

                                                                                                                                                                                                      Alt...AI-generated video by Sora - sora_1765401918_8s_9x16.mp4

                                                                                                                                                                                                        Back to top - More...