buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #gpt

[?]Victoria Stuart 🇨🇦 🏳️‍⚧️ » 🌐
@persagen@mastodon.social

Agentic / coding LLM (SoTA? 2026-03-06) - Claude Opus 4.6 | Anthropic
anthropic.com/news/claude-opus
news.ycombinator.com/item?id=4

Introducing GPT-5.3-Codex | OpenAI
openai.com/index/introducing-g
news.ycombinator.com/item?id=4

Building a C compiler with a team of parallel Claudes | Anthropic
anthropic.com/engineering/buil
We tasked Opus 4.6 using agent teams to build a C Compiler
news.ycombinator.com/item?id=4

    1 ★ 0 ↺

    [?]Anthony » 🌐
    @abucci@buc.ci

    As a ratio with the general S&P 500, Microsoft is at the level it was before ChatGPT launched. Any relative advantage one might have had from long-term investment in Microsoft instead of an S&P 500 index has been erased.


      [?]Steve Troughton-Smith » 🌐
      @stroughtonsmith@mastodon.social

      If you're curious about what's happening behind the scenes, this is the setup prompt I'm passing to . I’m defining a JSON data structure, and redefining what a palette is in the process (as GPT sees the word 'palette' and tries to output 5 colors every time if I don’t). I tell it to be fun and creative in its palette generation and naming, and to follow related topics for keywords. The results speak for themselves, though it does still have a habit of duplicating palettes under diff names

        AodeRelay boosted

        [?]Daltux » 🌐
        @daltux@snac.daltux.net

        Scatological content [SENSITIVE CONTENT] :coryDoctorow: explains it... but it can be enshittified at will: whoever uses it, as always happens in this process, will keep sitting and rolling around in all that shit! :enshittification: :jamesonLaugh:


          0 ★ 0 ↺

          [?]Anthony » 🌐
          @abucci@buc.ci

          Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small data methods demonstrably outperform complex big data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks on which models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent. Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decisionmaking and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell). This means in particular that applying LLMs to large world problems directly contradicts the mathematical foundations on which their efficacy is (supposedly) grounded.

          Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

          All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed the models. I look at FORPLAN or ChatGPT, and this is what I see.


            2 ★ 0 ↺

            [?]Anthony » 🌐
            @abucci@buc.ci

            I proposed two talks for that event. The one that was not accepted (excerpt below) still feels interesting to me and I might someday develop this more, although by now this argument is fairly well-trodden and possibly no longer timely or interesting to make. I obviously don't have the philosophical chops to make an argument at that level, but I'm fascinated by how this technology is so fervently pushed even though it fails on its own technical terms. You don't have to stare too long to recognize there is something non-technical driving this train. "The technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within" is a pretty accurate description and is why I jokingly suggested someone should register the galate.ai domain the other day. If you're not familiar with the Pygmalion myth (in Ovid), check out the company Replika and then Pygmalion to see what I'm getting at. pygmal.io is also available!

            Anyway:

            ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data intensive, and better-performing alternatives. In fact one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.

            How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.


              AodeRelay boosted

              [?]Lorry » 🌐
              @lorry@infosec.exchange

              @dashremover I genuinely find incredibly useful for doing a lot of systems work (though GPT 4o is _much_ better than 5.2), but I have to treat it as though I just got an idiot savant operator forced upon me on a Youth Training Scheme programme.

              Generally, it's amazing, it can do command line stuff I had no idea was even possible, and it's amazing at installing things on obscure and obsolete setups, but you still have to check every single thing it does that you know isn't reversible.

              I use GPT a lot, and with the switch from 4 to 5, I think it has been pretty reckless. Ironically, if you ask why, it will tell you. It's geared towards generating conversation and interaction rather than just following instructions cleanly - and I guess wiping out all of your users' files is a talking point that could generate loads of fun data for the OpenAI datamoles.

                AodeRelay boosted

                [?]jack » 🌐
                @jglypt@universeodon.com

                I love how my cursor 2025 wrapped is just peaking at the launches of new models when they’re free for a week 😭

                cursor 2025 wrapped chart showing peaks in months for launches of models like GPT 4.1, GPT 5 and Grok Code Fast 1


                  AodeRelay boosted

                  [?]jack » 🌐
                  @jglypt@universeodon.com

                  AodeRelay boosted

                  [?]AI6YR Ben » 🌐
                  @ai6yr@m.ai6yr.org

                  The Guardian: AI’s safety features can be circumvented with poetry, research finds

                  Poems containing prompts for harmful content prove effective at duping large language models

                  theguardian.com/technology/202

                    AodeRelay boosted

                    [?]Ben Rothke » 🌐
                    @benrothke@infosec.exchange

                      Custom GPTs, introduced in Nov. 2023 by @openai, allow users to customize models to meet specific organizational or individual needs. GPT customization is something that must be dealt with cautiously. Lots of unknowns and many risks.
                    api.cyfluencer.com/s/what-are-

                      2 ★ 2 ↺
                      planetscape boosted

                      [?]Anthony » 🌐
                      @abucci@buc.ci

                      The apparatus of a large language model really is remarkable. It takes in billions of pages of writing and figures out the configuration of words that will delight me just enough to feed it another prompt. There’s nothing else like it.
                      From ChatGPT Is a Gimmick,
                      https://hedgehogreview.com/web-features/thr/posts/chatgpt-is-a-gimmick


                        AodeRelay boosted

                        [?]Chad :mstdn: » 🌐
                        @chad@mstdn.ca

                        Our community league needed to get off of our old website as it was completely broken and no one had time to fix it.

                        I had an hour or so to play with ChatGPT this evening. I asked for a single-page website, and this is what I created. A perfectly fine website that does the basics while we build our new website.

                        LLMs have their place, albeit not anywhere near residential communities, but this worked just fine.

                        inglewoodcl.com/

                          6 ★ 7 ↺

                          [?]Anthony » 🌐
                          @abucci@buc.ci

                          When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the further degradation of an already degraded educational system. We agree that we would rather deplete our natural resources than make our own art or think our own thoughts. We dig ourselves deeper into crises that have been made worse by technology, from the erosion of electoral democracy to the intensification of climate change. We condone platforms that not only urge children to commit suicide, they instruct them on how to tie the noose. We hand over our autonomy, at the very moment of emerging American fascism.
                          Yes +1.


                            7 ★ 12 ↺

                            [?]Anthony » 🌐
                            @abucci@buc.ci

                            12 ★ 6 ↺
                            planetscape boosted

                            [?]Anthony » 🌐
                            @abucci@buc.ci

                            This misguided trend has resulted, in our opinion, in an unfortunate state of affairs: an insistence on building NLP systems using ‘large language models’ (LLM) that require massive computing power in a futile attempt at trying to approximate the infinite object we call natural language by trying to memorize massive amounts of data. In our opinion this pseudo-scientific method is not only a waste of time and resources, but it is corrupting a generation of young scientists by luring them into thinking that language is just data – a path that will only lead to disappointments and, worse yet, to hampering any real progress in natural language understanding (NLU). Instead, we argue that it is time to re-think our approach to NLU work since we are convinced that the ‘big data’ approach to NLU is not only psychologically, cognitively, and even computationally implausible, but, and as we will show here, this blind data-driven approach to NLU is also theoretically and technically flawed.
                            From Machine Learning Won't Solve Natural Language Understanding, https://thegradient.pub/machine-learning-wont-solve-the-natural-language-understanding-challenge/


                              4 ★ 4 ↺
                              planetscape boosted

                              [?]Anthony » 🌐
                              @abucci@buc.ci

                              Regarding the last couple boosts: among other downsides, LLMs encourage people to take long-term risks for perceived, but not always actual, short-term gains. They bet the long-term value of their education on a chance at short-term grade inflation, or they bet the long-term security and maintainability of their software codebase on a chance at short-term productivity gains. My read is that more and more data is suggesting that these are bad bets for most people.

                                In that respect they're very much like gambling. The messianic fantasies some ChatGPT users have been experiencing fit this picture as well.


                                44 ★ 64 ↺

                                [?]Anthony » 🌐
                                @abucci@buc.ci

                                Revealed: How the UK tech secretary uses ChatGPT for policy advice
                                New Scientist has used freedom of information laws to obtain the ChatGPT records of Peter Kyle, the UK's technology secretary, in what is believed to be a world-first use of such legislation
                                From https://www.newscientist.com/article/2472068-revealed-how-the-uk-tech-secretary-uses-chatgpt-for-policy-advice/
                                These records show that Kyle asked ChatGPT to explain why the UK’s small and medium business (SMB) community has been so slow to adopt AI. ChatGPT returned a 10-point list of problems hindering adoption, including sections on “Limited Awareness and Understanding”, “Regulatory and Ethical Concerns” and “Lack of Government or Institutional Support”.
                                Apparently it didn't say "because it's unhelpful and probably harmful to most SMB problems" or "what on earth are you doing asking a computer this you fool?".


                                  2 ★ 5 ↺
                                  Mario Angst boosted

                                  [?]Anthony » 🌐
                                  @abucci@buc.ci

                                  Absolutely bizarre that a company publicly claiming to be on the verge of making one of the most remarkable advances in computer science in generations--artificial general intelligence--is adding bog standard features present in every calendaring app or productivity suite for decades. I'm reading this as an indication of where they are: trying to make what they currently have "sticky" to improve their DAU/MAU numbers because they don't anticipate their actual product, LLMs, will achieve that.

                                  ChatGPT now lets you schedule reminders and recurring tasks: https://techcrunch.com/2025/01/14/chatgpt-now-lets-you-schedule-reminders-and-recurring-tasks



                                    8 ★ 6 ↺

                                    [?]Anthony » 🌐
                                    @abucci@buc.ci

                                    Lately I've been thinking of the much-hyped generative AI products of the last 2 years as anti-technology.

                                    That may sound odd, since these are clearly technological artifacts in the way we've come to understand technology, and they're being produced by what's commonly called the tech sector. However, there are at least two ways in which these artifacts differ markedly from what we usually (used to?) think of as "technology":

                                      (1) They tend to have a deskilling effect. The English word "technology" ultimately derives from the Greek word "tekhnē", which can be interpreted as meaning skill or craft. It's very much about a human being's ability to perform a task. Yet much of generative AI is aimed at removing human beings from a task, or minimizing our involvement. In that sense generative AI is very much anti-tekhnē.

                                    (2) They tend to lie squarely in what Albert Borgmann called "the device paradigm". L.M. Sacasas has several nice takes on Borgmann's distinction between devices and focal things. See https://theconvivialsociety.substack.com/p/why-an-easier-life-is-not-necessarily and also https://theconvivialsociety.substack.com/p/the-stuff-of-a-well-lived-life (and of course, read Borgmann himself!). Simply put, devices tend to hide their inner workings under a simplified "interface"; a device is a device “if it has been rendered instantaneous, ubiquitous, safe, and easy”, if it hides the means in favor of the ends. By contrast, focal objects tend to invite you into fully experiencing the focal practices they enable, to experience the means and the ends. In particular, they tend not to be easy: you have to engage with and learn to use them. Guitars are an interesting example of focal objects. To be (I hope not overly) simplistic, devices dumb you down while focal objects train you up. Devices are anti-tekhnē, and to the extent that current generative AI is created and deployed in the device paradigm, it is too.

                                    None of this means generative AI has to be anti-tekhnē. I do admit though that I struggle to see how to make it less device-y, at least as it's currently made and used (I do have a few half-formed thoughts along these lines but nothing worth sharing).


                                      3 ★ 1 ↺

                                      [?]Anthony » 🌐
                                      @abucci@buc.ci

                                      One of my cynical takes about this "chain of thought": it buys OpenAI compute time. They are setting the expectation that you have to sit and wait for the system to "answer" you. If users of these tools expect to have to wait many seconds to minutes for an answer, OpenAI can then:

                                      1. More efficiently schedule their GPU compute, reducing that cost (which by all accounts is out of control)
                                        2. Hide human beings in the loop, like virtually every "AI" company does, tasked with repairing the LLM's output before it reaches the end user
                                        3. Potentially charge more: the byzantine "reasoning tokens" they discuss are a pricing lever; the time-to-answer introduces a perception that the output is better, and therefore worth paying more for


                                        15 ★ 14 ↺

                                        [?]Anthony » 🌐
                                        @abucci@buc.ci

                                        I can't recommend the Mystery AI Hype Theater 3000 podcast highly enough, for a dozen reasons. But have a look at this:

                                        https://peertube.dair-institute.org/w/gAAnkju7qjfrWjG9NZVy2L?start=2m31s

                                        @emilymbender@dair-community.social and @alex@dair-community.social are unpacking a recent OpenAI press release in the form of a blog post titled "Learning to Reason with LLMs" (1). At the point in the video I linked above, Prof Bender is commenting on the "Contributions" section of the blog post. She expands it, and notes that it resembles what one might see in a proper scientific publication. But this is a blog post, and really it's a press release, not a scientific publication. The rest of the episode is well worth watching to see just how misleading this press release is and why you should not believe any of the claims made in it. No, LLMs can't reason like a PhD or win coding competitions.

                                          Since the beginning of this AI hype cycle I've been arguing here and elsewhere that companies are putting lab coats on their corporate whitepapers and press releases in a cynical attempt to afford them more credibility than they deserve. They are gaming the academic publishing system to circulate them, flooding arXiv (a scientific and scholarly pre-print server) with whitepapers and press releases formatted to look like academic writing, even getting low-quality "articles" in some of Nature's publications and web sites (see e.g. https://theluddite.org/post/replika.html ). It looks like they're also formatting their blog posts to resemble peer-reviewed scholarly or scientific work. This is a parasitical way of behaving, and personally I believe it's being done deliberately and consciously.

                                        Richard Feynman (racistly (2)) referred to this sort of thing as "cargo cult science". It's all surface with no depth, and it's not intended to further human knowledge.

                                        (1) In case you're curious no, what OpenAI is describing is not reasoning, neither technically speaking nor intuitively speaking.
                                        (2) I'm very sorry but I think what Feynman gets at in this essay is important and I don't know any other accessible source for the critique he makes.

                                          2 ★ 1 ↺

                                          [?]Anthony » 🌐
                                          @abucci@buc.ci

                                          If I had the time, energy, and education to pull it off, I'd do some scholarship and writing elaborating on this juxtaposition:

                                          - Statistics, as a field of study, gained significant energy and support from eugenicists with the purpose of "scientizing" their prejudices. Some of the major early thinkers in modern statistics, like Galton, Pearson, and Fisher, were eugenicists out loud; see https://nautil.us/how-eugenics-shaped-statistics-238014/
                                          - Large language models and diffusion models rely on certain kinds of statistical methods, but discard any notion of confidence interval or validation that's grounded in reality. For instance, the LLM inside GPT outputs a probability distribution over the tokens (words) that could follow the input prompt. However, there is no way to even make sense of a probability distribution like this in real-world terms, let alone measure anything about how well it matches reality. See for instance https://aclanthology.org/2020.acl-main.463.pdf and Michael Reddy's The conduit metaphor: A case of frame conflict in our language about language
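
                                            For concreteness, here is a minimal sketch in Python of what "a probability distribution over the tokens" means mechanically (the vocabulary and logits below are invented for illustration, not taken from any real model): the final layer emits one score per vocabulary token, and softmax turns the scores into probabilities that sum to 1, with nothing in the construction tying them to any measurable real-world quantity.

                                            # Minimal sketch with hypothetical numbers: softmax over per-token scores ("logits").
                                            import math

                                            vocab = ["cat", "sat", "mat", "quantum"]   # toy vocabulary, invented for illustration
                                            logits = [2.1, 0.3, -1.0, 0.5]             # toy scores; a real model emits tens of thousands

                                            exps = [math.exp(x) for x in logits]
                                            next_token_dist = {t: e / sum(exps) for t, e in zip(vocab, exps)}
                                            print(next_token_dist)   # a dict of probabilities that sums to 1, calibrated against nothing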

                                          Early on in this latest AI hype cycle I wrote a note to myself that this style of AI is necessarily biased. In other words, the bias coming out isn't primarily a function of biased input data (though of course that's a problem too). That'd be a kind of contingent bias that could be addressed. Rather, the bias these systems exhibit is a function of how the things are structured at their core, and no amount of data curating can overcome it. I can't prove this, so let's call it a hypothesis, but I believe it.


                                            4 ★ 4 ↺
                                            BadWoof boosted

                                            [?]Anthony » 🌐
                                            @abucci@buc.ci

                                            The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning.

                                              13 ★ 16 ↺

                                              [?]Anthony » 🌐
                                              @abucci@buc.ci

                                              I guess the hype has been wearing off long enough that thoughtful critique can be published:
                                              Participants who used AI produced fewer ideas, with less variety and lower originality compared to a baseline.
                                              From https://dl.acm.org/doi/full/10.1145/3613904.3642919 , titled The Effects of Generative AI on Design Fixation and Divergent Thinking

                                              This is something I'd expect because of anchoring. I'd expect a similar phenomenon when using generative AI to help with other kinds of ideation like writing or coding. Basically, your creative process gets stuck--anchored--on the first couple ideas. If you're generating those ideas with AI without taking account of this phenomenon, your overall process will tend to be less creative than if you hadn't used the AI. Surely there are ways to mitigate this effect but you have to be aware of it and practiced in those ways.


                                                1 ★ 3 ↺

                                                [?]Anthony » 🌐
                                                @abucci@buc.ci

                                                The influence of powerful imagery and rhetorics in promotional material for computing is neither new nor surprising. There is a longstanding tradition of overselling the latest technology, claiming it to be the next (industrial) revolution or promising that it will outperform human beings. With the passage of time it may become difficult to recognize these invented ideas and images that have acquired a life of their own and have become integrated as part of a historical narrative. As modern, digital electronic computing is nearing its 100th anniversary, such recognition does not become easier, though we may be in need of it more than ever before.

                                                This particular case, where the praise of automatic programming implied the obsolescence of the coder, can be instructive for us today. There is a line that runs from Grace Hopper’s selling of “automatic coding” to today’s promises of large AI models such as Chat-GPT for revolutionizing computing by automating programming or even making human programmers obsolete.19,20 Then as now, it is certainly the case that the automation of some parts of programming is progressing, and it will upset or even redefine the division of labor. However, this is not a simple straightforward process that replaces the human element in one or more specific phases of programming by the computer itself. Rather, practice adopts new techniques to assist with existing tasks and jobs. Such changes do not generalize easily, and using titles as like “coders”—or today’s “prompt engineers,”—while memorable, does not do justice to the subtle process of changing practice.

                                                From https://cacm.acm.org/opinion/the-myth-of-the-coder/


                                                  3 ★ 1 ↺
                                                  St. Chris boosted

                                                  [?]Anthony » 🌐
                                                  @abucci@buc.ci

                                                  The university has an obligation to interrogate the proposition that a world in which AI is widely used is desirable or inevitable. We don’t need to cheer for a vision of tomorrow in which scientists feel comfortable with not personally reading the articles their peers have written and students are not expected to gain insight through wrestling with complex concepts: a world in which creative and knowledge work is delegated to a mindless algorithm.
                                                  From: https://uniavisen.dk/en/cut-the-ai-bullshit-ucph/


                                                    3 ★ 1 ↺
                                                    planetscape boosted

                                                    [?]Anthony » 🌐
                                                    @abucci@buc.ci

                                                    By design it's very easy to forget when using a tool like that you are gifting your ideas to a(n American) corporation that could, if it chose, lock the ideas down and keep them out of the public domain, potentially even preventing you yourself from discussing them publicly. By using such tools you're in essence trusting them not to do that. Personally I think that trust is badly misplaced.


                                                      2 ★ 0 ↺

                                                      [?]Anthony » 🌐
                                                      @abucci@buc.ci

                                                      I feel like it'd be an interesting art experiment to use a like a to generate hundreds of millions of pages of bureaucratic text for a society that never existed.

                                                      I'm tempted to call something like this culture jamming because as the traditional mass media changes (descends?) in influence, the primary "mass" we're left with that we all experience similarly is bureaucracy and its offshoots. Bureaucracy is one way power is exercised so exposing how it operates is important work.


                                                        2 ★ 0 ↺

                                                        [?]Anthony » 🌐
                                                        @abucci@buc.ci

                                                        Lately I've been thinking of recent generative AI as the equivalent of fast food for the mind.


                                                          1 ★ 2 ↺
                                                          planetscape boosted

                                                          [?]Anthony » 🌐
                                                          @abucci@buc.ci

                                                          Academic authors 'shocked' after Taylor & Francis sells access to their research to Microsoft AI
                                                          https://www.thebookseller.com/news/academic-authors-shocked-after-taylor--francis-sells-access-to-their-research-to-microsoft-ai

                                                          This will keep happening. For-profit academic publishers should never have been allowed to exist in the first place; those chickens are now coming home to roost.


                                                            0 ★ 0 ↺

                                                            [?]Anthony » 🌐
                                                            @abucci@buc.ci

                                                            “Self-checkout could have been better integrated into supermarkets in ways that augmented and better-facilitated interactions between frontline workers and shoppers, but that design goal goes against the very reason they were adopted by retailers in the first place—to cut labor costs,” Mateescu says. “So the tool has a self-defeating logic to it from the get-go.”
                                                            From https://gizmodo.com/why-self-checkout-is-and-has-always-been-the-worst-1833106695 ; see also https://www.bloodinthemachine.com/p/understanding-the-real-threat-generative .

                                                            Generative AI could be an instance of what self-checkout or automated customer service phone trees are: what Brian Merchant ( @brianmerchant@mastodon.social ) dubs "shitty automation" in the first link, with a similar kind of self-defeating logic built into it from the start. If the only purpose of a technology is to cut labor costs for corporations, then there's a decent chance everyone, including the corporations deploying the technology, eventually suffers from it. One of the weirdest things to witness in this latest AI hype cycle is the enthusiastic deployment of a technology that may end up producing worse products and services at greater expense than the laid-off people who used to produce those products and services. It also threatens to leave behind a kind of technological detritus: even companies that recognize the cost and poor quality won't be able to remove it once it's entrenched. Why all the enthusiasm?

                                                            Obviously I have no way of knowing whether that's what comes to pass, but a lot of signs are pointing that way. I've already had a few conversations with people, including a CEO, who told me they felt like they had no choice but to deploy generative AI somehow, because all their customers were doing so. But like the most-photographed barn--which is the most-photographed barn because it's famous for being the most-photographed barn--this kind of justification is ephemeral white noise. You are putting yourself into a dangerous position by following it: this ouroboros will eat your business once it's done swallowing its own tail. Very strange to witness.


                                                              8 ★ 12 ↺

                                                              [?]Anthony » 🌐
                                                              @abucci@buc.ci

                                                              A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.

                                                              If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of when we think of human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence ever. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

                                                              The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

                                                              If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not develop those capacities when experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could?

                                                                Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were and they were succeeding at all we'd be hearing about it daily through the usual hype channels because that'd be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would also be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else. In the absence of tangible results, it's quite literally magical thinking to assert neural networks have this capacity that even human beings lack.


                                                                17 ★ 7 ↺

                                                                [?]Anthony » 🌐
                                                                @abucci@buc.ci

                                                                so much of the promise of generative AI as it is currently constituted, is driven by rote entitlement.

                                                                Very nice analysis by Brian Merchant ( @brianmerchant@mastodon.social ) here: https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

                                                                He puts into clear terms what had previously been an unarticulated, creeping suspicion I had about . Clearly there are many angles from which to come at what's going on with , but I appreciate this one quite a bit.


                                                                  3 ★ 1 ↺
                                                                  Mark Hurst boosted

                                                                  [?]Anthony » 🌐
                                                                  @abucci@buc.ci

                                                                  Who decided to call it ChatGPT when they could have called it Microsoft Expel?

                                                                  (Inspired by @markhurst@mastodon.social 's slip of the tongue in the latest episode of Techtonic).


                                                                    1 ★ 0 ↺

                                                                    [?]Anthony » 🌐
                                                                    @abucci@buc.ci

                                                                    I read this quote from the Moby Dick bot
                                                                    And what do you pick your teeth with, after devouring that fat goose? With a feather of the same fowl.
                                                                    ( from https://botsin.space/@mobydick/112412619011989792 )

                                                                      right after reading about StackOverflow selling out to OpenAI. I will leave puzzling out the analogy relating the two as an exercise for the reader.


                                                                      1 ★ 2 ↺
                                                                      MOULE :Logo: boosted

                                                                      [?]Anthony » 🌐
                                                                      @abucci@buc.ci

                                                                      The information oil spill caused by continues to claim casualties: https://www.404media.co/bards-and-sages-closing-ai-generated-writing/
                                                                      In a notice posted to the [Bards And Sages Publishing] site, founder ​Julie Ann Dawson wrote that effective March 6, she was winding down operations to focus on her health and “day job” that’s separate from the press. “All of these issues impacted my decision. However, I also have to confess to what may have been the final straws. AI...and authors behaving badly,” she wrote.
                                                                      Closure announcement: https://www.bardsandsages.com/closure-announcement.html


                                                                        2 ★ 3 ↺
                                                                        planetscape boosted

                                                                        [?]Anthony » 🌐
                                                                        @abucci@buc.ci

                                                                        I think A.I. is a poison to the creative process. I think it makes your work worse and makes you less interesting and less employable.
                                                                        From: https://www.muddycolors.com/2024/04/the-a-i-lie/ by David Palumbo . I strongly recommend reading the whole post if this topic is of interest. I think it's absolutely spot on. This writer is coming from the perspective of a creative professional, but I'd expand this statement to include both scientific research and software development, two areas in which I've worked myself. By using the current generation of AI tools you are trading short-term convenience for long-term degradation, something I've analogized here before as eating your seed corn.

                                                                        It has become standard to describe A.I. as a tool. I argue that this framing is incorrect. It does not aid in the completion of a task. It completes the task for you. A.I. is a service. You cede control and decisions to an A.I. in the way you might to an independent contractor hired to do a job that you do not want to or are unable to do. This is important to how using A.I. in a creative workflow will influence your end result. You are, at best, taking on a collaborator. And this collaborator happens to be a mindless average aggregate of data.
                                                                        Another excellent point among many. I've called AI a tool in the past but I'm convinced by this framing of it as a service and would-be collaborator. Calling the current generation of AI a set of tools is providing cover for its more degrading effects.

                                                                        Given all of this, I can not personally see anything attractive about using these programs in any capacity. Though they might dramatically speed up or replace parts of a workflow, the short and long term costs are appalling to me. It is removing my own hand, the single most valuable asset I possess, from the creation of my work.
                                                                        Absolutely. If you do knowledge work, it's removing your own brain and mind from your work. If you've spent years and money educating yourself in a subject, why would you choose to throw that away and give over part of your work to a computer program?


                                                                          1 ★ 2 ↺

                                                                          [?]Anthony » 🌐
                                                                          @abucci@buc.ci

                                                                          IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close

                                                                          From https://www.statnews.com/2017/09/05/watson-ibm-cancer/

                                                                          That was 2017, a mere 7 years ago. How are we falling for this same con again so soon?


                                                                            3 ★ 1 ↺
                                                                            planetscape boosted

                                                                            [?]Anthony » 🌐
                                                                            @abucci@buc.ci

                                                                            Regarding the last boost: if there were a functional left politics in the United States, it would have pushed back vigorously and relentlessly against the cloud--which includes subscription-based "software as a service" along with rented storage--and would still be doing so today. In Marx's terms, the cloud removes the means of production from the hands of the workers and places them under the control of corporations. In that way the movement of most digital work into the cloud is analogous to the trends the Luddites were fighting against, with the movement of skilled weaving work into factories performed by loom operators and subsequent deskilling of weavers.

                                                                            This should have been vigorously resisted as it was unfolding, but it was not as far as I can remember. It should be vigorously opposed now, but it is not. Data centers, our modern mills, are consuming vast quantities of critical resources like electric power and clean water, to the point that there are communities struggling to provide these resources to human beings who live there. Yet the pushback against this expansion is muted, and data centers are expanding rapidly. Where is the left's response to this corporate seizure of the means of production?

                                                                            People are worried about generative AI taking jobs, and rightly so, but I think these concerns point to an overarching trend towards a kind of digital feudalization. Generative AI is already created by taking peoples' hard work without any compensation. You're permitted to use the technology "free of charge", but you can't pay the rent or mortgage, or buy food, with ChatGPT output. This essentially renders all of us as peasants.

                                                                            The threat from bosses that you could be fired and replaced with generative AI, even if false, presses down wage demands and encourages doing work for no compensation. In this climate, people feel compelled to learn how to use generative AI to do their work because they perceive (again, probably rightly) that if they don't do that they will eventually find themselves without employment opportunities. Once again, if you're in a position of doing uncompensated work like this on behalf of a powerful entity, you are in a relationship distressingly similar to the one a peasant was in to a lord in the feudal system.

                                                                              I'm not saying anything new here, just thinking out loud. But doesn't the left have anything to say, loudly, proudly, and often, about this? These are bread-and-butter issues for the left, aren't they?


                                                                              2 ★ 1 ↺
                                                                              planetscape boosted

                                                                              [?]Anthony » 🌐
                                                                              @abucci@buc.ci

                                                                              The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

                                                                              More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

                                                                              From https://theconversation.com/weve-been-here-before-ai-promised-humanlike-machines-in-1958-222700

                                                                              Not to nitpick, but I'd argue "In a lot of ways, not much". Not much substantively, anyway. We have 66 years of Moore's Law and data gathering, which has made the biggest difference by far. We have some important advances in how to train ML models, though I'd argue lots of these fall on the engineering side of things more than the deep understanding of how stuff works side of things.

                                                                              This critique is not meant to diminish how difficult and important any of these particular advances were. Rather, it's that I believe scale alone is not what lies between machine learning and human intelligence. We should not be claiming that we're moments away from artificial general intelligence. I believe that however remarkable the outputs of or or what have you may be, they are still just as far from human-quality intelligence as Rosenblatt's perceptron was.

                                                                              In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.
                                                                              We've been hearing this same song for a long time.

                                                                                (Before diving into my mentions to explain AI and ML to me, please be aware that I have a PhD in computer science from 2007. My PhD advisor made significant advances in artificial neural networks research in the 1980s. I closely read the original papers on this subject, including Rosenblatt's and all three volumes of the PDP collection, and surveyed some of them in the introductory chapter of my dissertation. I built large scale ML systems in industry before it was "a thing". I'm happy to have a conversation about all this stuff but variations of "you're wrong" are unwelcome no matter how politely phrased. Thanks).


                                                                                4 ★ 2 ↺
                                                                                planetscape boosted

                                                                                [?]Anthony » 🌐
                                                                                @abucci@buc.ci

                                                                                LLMs and the hype surrounding them are a multi-pronged attack on natural language. The hype and pseudoscience about them debauch the meaning of cognitive, technical, and scientific terms. LLMs (especially the chatbot ones) invite the conflation of human language use with emission of semantics-free token sequences.

                                                                                Stay vigilant! Natural language is ancient human technology that should not be given up so easily.


                                                                                  14 ★ 8 ↺

                                                                                  [?]Anthony » 🌐
                                                                                  @abucci@buc.ci

                                                                                  Sorry. Your Car Will Never Drive You Around.

                                                                                  Brutal takedown of the bullsh&*# that is "self-driving cars": https://www.youtube.com/watch?v=2DOd4RLNeT4

                                                                                  It's a long video but frankly you can get the gist of most of it by scanning over the chapter titles. "Hitting Fake Children". "Hitting Real Children". "FSD Expectations" is a long list of the various lies has told about "full self driving" Teslas. Also the "Openpilot" chapter has a picture of Elon Musk's face as a dartboard.

                                                                                  The endless hype and full-on lies of the self-driving-car con from roughly 2016 to 2020 resembles the about like going on right now. If you've been in this industry long enough and have been honest with yourself about it you've seen all this before. Until something significant changes we really ought to view anything coming out of the tech sector with deep suspicion (https://bucci.onl/notes/Another-AI-Hype-Cycle).


                                                                                    2 ★ 1 ↺
                                                                                    planetscape boosted

                                                                                    [?]Anthony » 🌐
                                                                                    @abucci@buc.ci

                                                                                    I feel like the majority of people who critique hedge what they're saying with something like "don't get me wrong, what they do is amazing."

                                                                                    Is it though?

                                                                                    What's up with this hedge? Am I missing something? To me, what they do isn't particularly amazing or interesting. Dangerous, given how they've been constructed and how they're being deployed, but interesting? Maybe it's because I've been in CS a long time and have seen incremental movement in this direction for decades, but I don't understand it.

                                                                                    When IBM's DeepBlue first beat Garry Kasparov at chess under tournament conditions in 1997, I felt somewhat similarly. Even though it too was a newsworthy and noteworthy event, its occurrence was expected:
                                                                                    - Alan Turing wrote about evaluation-function-based greedy play of chess that undergirds DeepBlue in the 1940s/1950s (obviously many advancements had been made since then but Turing laid out the fundamental idea)
                                                                                    - There's a paper I can never seem to find again that came out of IBM I think in the late 1970s, possibly 1980s, that used Moore's Law to project when a computer program would achieve grandmaster level chess play, using the Elo ratings of the implemented computer chess players known at the time. Their projection was the year 2000, 3 years after DeepBlue's first victory over Kasparov.

                                                                                      In Claude Shannon's The Mathematical Theory Of Communication, from 1948, there is a sequence of what he calls approximations to English (Sect 3. "The Series of Approximations To English" on page 14 in this version: https://www.essrl.wustl.edu/~jao/itrg/shannon.pdf), by way of illustrating the relevance of the Markov models he was describing. The second-order word approximation is weird but reasonable-looking English text. It is clear that if the order of the Markov model were increased from 2 to 5 or 10 or 100, the reasonableness of the output text would improve. I've both taken and taught computer science classes where we tinkered with this idea, extending the order of a model and observing what happened. I'm not aware of any attempts to project this forward in time using Moore's law as IBM did with chess players, and the criteria for deciding the quality of the output are a lot less crisp for language than Elo score. A further issue arises with the quantity of data available that does not affect chess, too. Even so, I don't think it was ever unreasonable to believe it was only a matter of time before a Markov or Markov-style method like Shannon's, run on a powerful-enough computer and with a large enough data set, would produce text close to what a human being might (in other words, not with the bizarre nonsense you often see in Markov bots). It was always really a question of having enough compute power, having enough data, and having a way to represent such a high-order Markov chain. Today we have NVIDIA, the internet, and various permutations of deep neural networks to solve those problems.
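
                                                                                      For anyone who wants to try that classroom exercise, here is a minimal sketch using only the Python standard library (the inline corpus is a placeholder; the effect Shannon described only gets interesting with a large text and a higher order):

                                                                                      import random
                                                                                      from collections import defaultdict

                                                                                      def build_chain(words, order=2):
                                                                                          # Map each `order`-word prefix to the words observed to follow it.
                                                                                          chain = defaultdict(list)
                                                                                          for i in range(len(words) - order):
                                                                                              chain[tuple(words[i:i + order])].append(words[i + order])
                                                                                          return chain

                                                                                      def generate(chain, order=2, length=40, seed=1):
                                                                                          # Start from a random prefix and repeatedly sample one of its observed followers.
                                                                                          rng = random.Random(seed)
                                                                                          state = list(rng.choice(list(chain)))
                                                                                          for _ in range(length):
                                                                                              followers = chain.get(tuple(state[-order:]))
                                                                                              if not followers:
                                                                                                  break
                                                                                              state.append(rng.choice(followers))
                                                                                          return " ".join(state)

                                                                                      # Placeholder corpus; substitute any large plain-text file to see higher orders pay off.
                                                                                      corpus = "the cat sat on the mat and the dog sat on the rug and the cat ate the fish".split()
                                                                                      print(generate(build_chain(corpus, order=2)))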

Call me jaded, but when I see things like Deep Blue or ChatGPT explode onto the scene, one of my first thoughts is something like "oh, that was so-and-so's thing from the 1940s that someone finally operationalized."

                                                                                    Just to be clear, I'm not pretending that I can predict when things will happen, and I certainly can't predict the societal impact that technological advances have. I'm just reflecting on why I'm not impressed by our latest hype cycle.

Anyway, that's a long-winded way of saying that it's OK to critique AI / ML, or ChatGPT, or LLMs without adding the hedge that what they do is amazing. It's OK to believe they're the outcome of a predictable process that's been going on for decades.

                                                                                      3 ★ 0 ↺

                                                                                      [?]Anthony » 🌐
                                                                                      @abucci@buc.ci

                                                                                      Hot take: is digital

                                                                                      ...in the sense that it's hurting people, it's screwing up a lot of things that are important to me in large part because people aren't careful about it and the powers-that-be are drunk, and I wish it would go away but it probably won't any time soon.


                                                                                        5 ★ 2 ↺

                                                                                        [?]Anthony » 🌐
                                                                                        @abucci@buc.ci

If we allow large language model (LLM) development to proceed along its present course, we will eventually end up paying rent to write anything.

                                                                                        1. Free, helpful writing tools! Cool!
                                                                                        2. Large influx of users who slowly lose the ability to work without the tools
                                                                                        3. Surprise! There's a monthly subscription fee now
                                                                                        4. Profit


                                                                                          0 ★ 1 ↺
                                                                                          planetscape boosted

                                                                                          [?]Anthony » 🌐
                                                                                          @abucci@buc.ci

                                                                                          A bit random, but has anyone tried using (clustering based on) compression distance to detect AI-generated text and images?
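
The usual formulation of that idea is normalized compression distance (NCD). A rough sketch, using a generic compressor (zlib) and placeholder file names; whether this actually separates AI-generated from human-written material is exactly the open question:

# Rough sketch of normalized compression distance (NCD) between two texts.
# File names are placeholders; this is an illustration, not a detector.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

human = open("human_reference.txt", "rb").read()
machine = open("llm_reference.txt", "rb").read()
unknown = open("unknown_sample.txt", "rb").read()

print("NCD to human reference:  ", ncd(unknown, human))
print("NCD to machine reference:", ncd(unknown, machine))
# Feeding pairwise NCDs into an off-the-shelf clustering routine (e.g.
# hierarchical clustering) gives the "clustering based on compression
# distance" idea from the post.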


                                                                                            4 ★ 4 ↺

                                                                                            [?]Anthony » 🌐
                                                                                            @abucci@buc.ci

                                                                                            Just to clarify the point I was making yesterday about arXiv, below I've included a plot from arXiv's own stats page https://info.arxiv.org/help/stats/2021_by_area/index.html . The image contains two charts side-by-side. The chart on the left is a stacked area chart tracking the number of submissions to each of several arXiv categories through time, from 1991 to 2021. I obtained this screenshot today; arXiv's site, at time of writing, says the chart had been updated 3 January 2022. The caption to this plot on the arXiv page I linked has more detail about it.

What you're seeing here is that for most categories there is a linear year-over-year increase in the number of submissions, up until the end of the data series in 2021. Computer science is dramatically different: its growth looks exponential, and its rate of increase appears to have accelerated circa 2017. The chart on the right, which shows the same data as proportions instead of raw counts, suggests computer science might be "eating" mathematics starting around 2017.

2017 is around when generative AI papers started to appear in large quantities. There was a significant advance in machine learning published around 2018, but known before then, that made deep learning significantly more effective. Tech companies were already pushing this technology: OpenAI (the ChatGPT / DALL-E maker) was founded in 2015, and GPT-2 was released in early 2019. arXiv's charts don't show this, but I suspect these factors play a role in the apparent phase shift in their CS submissions in 2017.

                                                                                            We don't know what 2022 and 2023 would look like on a chart like this but I expect the exponential increase will have continued and possibly accelerated.

In any case, this trend is extremely concerning. The exponential increase in the number of submissions to what is supposed to be an academic pre-print service is not reasonable. There hasn't been an exponential increase in the number of computer scientists, nor in research funding, nor in research labs, nor in the output per person of each scientist. Furthermore, these new submissions threaten to completely swamp all other material: before long, computer science submissions will dwarf those of all other fields combined; since this chart stops at 2021, they may have already! arXiv's graphs do not break down the CS submissions by subtopic, but I suspect most are in the machine learning / generative AI / LLM space, and that submissions on those topics dwarf those of the other subdisciplines of computer science. Finally, to the extent that arXiv has quality controls in place for its archive, these can't possibly keep up with an exponentially increasing rate of submissions. They will eventually fail if they haven't already (as I suggested in a previous post, I think there are signs that their standards are slipping; perhaps that started circa 2017, and that's partly why the rate of submissions accelerated then?).
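
For what it's worth, here's a sketch of how to make the "linear versus exponential" reading more than an eyeball judgment: fit both a linear model and an exponential model (linear in log space) to the yearly counts and compare residuals. compare_growth is a hypothetical helper, and the actual counts would need to be pulled from arXiv's stats pages.

# Sketch: given yearly submission counts (not included here), compare a
# linear fit to an exponential fit and report an implied doubling time.
import numpy as np

def compare_growth(years: np.ndarray, counts: np.ndarray) -> dict:
    # Linear model: counts ~= a*year + b
    lin = np.polyfit(years, counts, 1)
    lin_rmse = np.sqrt(np.mean((counts - np.polyval(lin, years)) ** 2))

    # Exponential model: log(counts) ~= c*year + d
    exp = np.polyfit(years, np.log(counts), 1)
    exp_rmse = np.sqrt(np.mean((counts - np.exp(np.polyval(exp, years))) ** 2))

    return {
        "linear_rmse": float(lin_rmse),
        "exponential_rmse": float(exp_rmse),
        "doubling_time_years": float(np.log(2) / exp[0]),
    }

# Usage, with real counts taken from arXiv's stats pages:
# print(compare_growth(np.arange(1991, 2022), cs_counts))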


Description is in the body of the post.

                                                                                              1 ★ 2 ↺
                                                                                              planetscape boosted

                                                                                              [?]Anthony » 🌐
                                                                                              @abucci@buc.ci

                                                                                              A computer's successful performance is often taken as evidence that it or its programmer understand a theory of its performance. Such an inference is unnecessary and, more often than not, is quite mistaken. The relationship between understanding and writing thus remains as problematical for computer programming as it has always been for writing in any other form.
                                                                                              --Joseph Weizenbaum, Computer Power and Human Reason


                                                                                                74 ★ 38 ↺

                                                                                                [?]Anthony » 🌐
                                                                                                @abucci@buc.ci

                                                                                                Dear Scientists and Researchers,

                                                                                                You can do research without using large language models and without putting large language models into existing systems you're building. You're still allowed. It's OK and everything will be fine, I promise.

                                                                                                Best regards,
                                                                                                Anthony


                                                                                                  15 ★ 9 ↺
                                                                                                  emenel boosted

                                                                                                  [?]Anthony » 🌐
                                                                                                  @abucci@buc.ci

Among the many reasons we should resist the widespread application of generative AI, an important, if less concrete, one is to preserve the freedom to change. This class of method crystallizes the past and present and re-generates it over and over again. The net result, if it's used en masse, is foreclosing the future.

                                                                                                  If you're stats-poisoned: human flourishing requires the joint distribution of the future to be different from that of the past and present. We, collectively, form a non-stationary system, and forcing the human system to be stationary is a kind of violence.


                                                                                                    10 ★ 7 ↺
                                                                                                    noisediver boosted

                                                                                                    [?]Anthony » 🌐
                                                                                                    @abucci@buc.ci

Claude Shannon invented LLMs in 1948. See A Mathematical Theory of Communication, Sect. 3: THE SERIES OF APPROXIMATIONS TO ENGLISH.

                                                                                                    Shannon said earlier in the same paper:

                                                                                                    Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
I put this here to highlight a curiosity. An intellectual predecessor of ChatGPT was very explicitly developed without semantics--without meaning--involved. Yet nowadays folks want to argue that these systems are capable of representing meaning, or have understanding, or perform other semantic tasks. No one has yet explained why increasing the order of the Markov chain Shannon discusses in his article changes the system from a meaning-less word emitter into a semantically meaningful intelligent entity. Why does that transition occur? When? This is an extraordinary claim, akin to saying that a big enough Excel spreadsheet would become intelligent. It requires extraordinary evidence.



                                                                                                      0 ★ 2 ↺
                                                                                                      planetscape boosted

                                                                                                      [?]Anthony » 🌐
                                                                                                      @abucci@buc.ci

                                                                                                      's "working papers" feel to me like a deliberate assault on scientific understanding. The latest Mystery AI Hype Theater 3000 episode taking apart OpenAI's recent one ("GPTs are GPTs": https://www.twitch.tv/videos/2040566319 ). The reasoning in this paper is so comically bad it's impossible for me to believe the people over there don't realize it's a mockery of scientific research.

Since OpenAI stands to benefit from introducing FUD into computer science publishing, this raises some big red flags for me.


                                                                                                        62 ★ 20 ↺
                                                                                                        2xfo boosted

                                                                                                        [?]Anthony » 🌐
                                                                                                        @abucci@buc.ci

To put it differently, these tools and techniques are drawing out the value of works created by creative people without replenishing the originators of that value. That's a horribly dehumanizing way of looking at it, as if people were value spigots, but that's the problem, isn't it? This is a dehumanizing arrangement. We don't need to be doing this.


                                                                                                          475 ★ 425 ↺
                                                                                                          2xfo boosted

                                                                                                          [?]Anthony » 🌐
                                                                                                          @abucci@buc.ci

                                                                                                          Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.

LLMs and image generators mass-ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated and not even credited, this ingestion puts negative pressure on the sharing of such things. The functioning of creative acts as seed for future creative acts becomes depressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, they'll be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.

                                                                                                          Eating your seed corn is meant to be a last ditch act you take out of desperation after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society that does this, consuming itself in essence.


                                                                                                            3 ★ 4 ↺
                                                                                                            planetscape boosted

                                                                                                            [?]Anthony » 🌐
                                                                                                            @abucci@buc.ci

                                                                                                            From https://danmcquillan.org/house_of_lords.html
                                                                                                            The greatest risk is that large language models act as a form of ‘shock doctrine’, where the sense of world-changing urgency that accompanies them is used to transform social systems without democratic debate.

                                                                                                            From https://danmcquillan.org/ai_thatcherism.html
                                                                                                            One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI's shoddy emulation of real tasks as an excuse to trim their workforce.

                                                                                                            Same source:
                                                                                                            Thanks to its insatiable appetite for data, current AI is uneconomic without an outsourced global workforce to label the data and expunge the toxic bits, all for a few dollars a day. Like the fast fashion industry, AI is underpinned by sweatshop labour.

Dan McQuillan (@danmcquillan@kolektiva.social) has been excellent on AI.