buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc., all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #bias

Jan :rust: :ferris: » 🌐
@janriemer@floss.social

@olivia

> And remember! These people and companies in AI started destroying academia and work and oversight well before the release of ChatGPT.

This! People have warned about the harmful effects of algorithms for _literally_ decades now:

RubyConf 2015 - Keynote: Consequences of an Insightful Algorithm by Carina C. Zona
youtube.com/watch?v=Vpr-xDmA2G4

Biased bots: Human prejudices sneak into AI systems (April 2017):

bath.ac.uk/announcements/biase

1/2

    AodeRelay boosted

    Proto Himbo European » 🌐
    @guyjantic@infosec.exchange

    Just scrolled through some physician-focused subreddit discussions of GLP-1 drugs (e.g., Ozempic). If you want to see anti-science biases driven by subculture norms...

    Science is very clear about the effectiveness of behavioral weight loss treatments: It's absolutely awful. So low nobody should invest in any of them. If those success rates were applied to other medical issues, we'd be almost as angry about weight loss programs as we are about "pray the gay away" camps (not quite that angry... but close). There's also (IIRC) at least a little research on the standard MD "intervention" of just telling patients they need to eat less and exercise more (spoiler: even less effective than Weight Watchers).

    The only treatments for being overweight with more than inconsistent and minimal success are surgery and drugs. That's it.

    Now look at what MDs say to each other. They discuss how "unethical" it is to prescribe GLP-1 drugs for people who haven't shown behavioral evidence of "commitment" or "seriousness" about weight loss by following a strict diet/exercise regimen for a specific time period (usually a year or more, from what I've seen). So much patting each other on the back about the highly responsible action of refusing GLP-1 meds to people who either aren't overweight enough (i.e., they experience many health and other consequences, but the MD has a BMI line in their head the patient hasn't crossed, yet) or haven't done enough exercise or diet to convince the specific MD that they deserve the medications.

    Please think about how ridiculous this is: thousands (or millions?) of medical professionals refusing to give tens or hundreds of millions of people a treatment that works until those people grind away at a treatment that doesn't work for a certain amount of time.

    In case someone is going to show up and tell me "it's calories in/calories out!" Yes, of course it is. If it's so simple, why are literally billions of people struggling with that equation every day? You might as well tell people suffering from depression "it's just getting regular exercise, social interaction, and satisfying experiences every day" or someone with ADHD "It's just a matter of focusing more." Medical doctors might as well refuse to provide statins etc. to people with high cholesterol unless they show evidence of strict diet and exercise adherence for a year, first. Actually, that's not far from what they are saying with GLP-1s, and of course there are even some MDs who refuse to provide treatment for depression or ADHD until the people with those conditions "prove" they can beat the condition without any treatment.

    The behavior is the problem: motivation is in your brain, which gets hijacked by fat cells and, basically, a million years of evolution. Sure, if you ignore behavior it's easy to solve behavioral issues. Everyone seems to recognize this until the behavioral issue gets too close to some programming from their childhood that touches issues of morality, responsibility, deservingness, goodness, etc. Then the science and rationality go out the window, except as a thin fig leaf for personal biases.

      AodeRelay boosted

      Wen » 🌐
      @Wen@mastodon.scot


      Anthony » 🌐
      @abucci@buc.ci

      Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small-data methods demonstrably outperform complex big-data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks on which models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent.

      Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decision-making and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell).

      This means in particular that applying LLMs to large-world problems directly contradicts the mathematical foundations on which their efficacy is (supposedly) grounded.
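The "less is more" claim can be made concrete with a toy simulation. Everything below is invented for illustration and is not from the post: the data (a noisy line), the "simple" model (a zero-parameter unit-weight heuristic), and the "complex" model (a polynomial interpolating the training sample exactly) are all assumptions of this sketch. Out of sample, the rule that estimates nothing tends to beat the model that fits the training data perfectly, because the flexible model also fits the noise.

```python
import random

def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial interpolating the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def mse(pred, truth):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

rng = random.Random(0)

# The true process is simply y = x plus Gaussian noise.
train_x = [float(i) for i in range(10)]
train_y = [x + rng.gauss(0.0, 1.0) for x in train_x]
test_x = [i + 0.5 for i in range(9)]   # unseen points between the training x's
test_y = [x + rng.gauss(0.0, 1.0) for x in test_x]

# Simple model: the unit-weight heuristic y_hat = x. Nothing is estimated.
simple_pred = list(test_x)

# Complex model: the degree-9 polynomial that reproduces the training data exactly.
complex_pred = [lagrange_eval(train_x, train_y, x) for x in test_x]

print("simple heuristic, test MSE :", round(mse(simple_pred, test_y), 3))
print("interpolating fit, test MSE:", round(mse(complex_pred, test_y), 3))
```

Under these assumptions the zero-parameter rule generalizes better even though the polynomial's training error is exactly zero; which side wins depends on noise level and sample size, which is precisely the "when to use which" conversation described as missing.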

      Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

      All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" for the tendency, when applying a model to a problem, to respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. In such cases the bias bias can lead to even worse outcomes, triggering the tendency again and creating a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire the data to feed them. I look at FORPLAN or ChatGPT, and this is what I see.
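The feedback loop can be sketched numerically. This is a hedged illustration, not anything from the post: the data, the polynomial models, the `fit_poly`/`experiment` helpers, and all parameters are invented. A modeller who answers a disappointing fit by adding polynomial terms drives the training error toward zero, but re-running the experiment on many fresh datasets shows the added complexity rarely pays off out of sample.

```python
import random

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting (pure stdlib)."""
    n = degree + 1
    X = [[x ** d for d in range(n)] for x in xs]
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(n)]
    M = [row + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def predict(w, xs):
    return [sum(wi * x ** i for i, wi in enumerate(w)) for x in xs]

def mse(pred, truth):
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

def experiment(seed):
    """Return {degree: (train_mse, test_mse)} for a simple and a complex fit."""
    rng = random.Random(seed)
    train_x = [i / 7 for i in range(8)]
    test_x = [(i + 0.5) / 7 for i in range(7)]
    train_y = [2 * x + rng.gauss(0.0, 0.3) for x in train_x]
    test_y = [2 * x + rng.gauss(0.0, 0.3) for x in test_x]
    out = {}
    for degree in (1, 7):          # the true model is degree 1; degree 7 interpolates
        w = fit_poly(train_x, train_y, degree)
        out[degree] = (mse(predict(w, train_x), train_y),
                       mse(predict(w, test_x), test_y))
    return out

one = experiment(0)
for d in (1, 7):
    print(f"degree {d}: train MSE {one[d][0]:.6f}  test MSE {one[d][1]:.6f}")

# Tally over 200 fresh datasets: how often does added complexity pay off?
wins = 0
for s in range(200):
    r = experiment(s)
    wins += r[7][1] < r[1][1]
print(f"degree-7 beats degree-1 out of sample in {wins}/200 runs")
```

Judging "adequacy" on training error alone is what keeps the loop spinning: every added term makes the fit look better on the data already collected, which is the bias bias in miniature.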


        petersuber » 🌐
        @petersuber@fediscience.org

        Early in the pandemic (April 2020) I started what became a long thread on academic Twitter.
        twitter.com/petersuber/status/

        Starting today, I'm stopping it on Twitter and continuing it here.

        Here's a rollup of the complete Twitter thread.
        resee.it/tweet/125298113985535

        Here's a nearly complete archived version in the @waybackmachine.
        web.archive.org/web/2022090813

        Watch this space for updates.


        @academicchatter

        🧵

          petersuber » 🌐
          @petersuber@fediscience.org

          Update. New study assessing referee reports: "Female first authors received less polite reviews than their male peers… In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author."
          elifesciences.org/articles/902

            AodeRelay boosted

            Edwin G. :mapleleafroundel: » 🌐
            @EdwinG@mstdn.moimeme.ca

            The 4 biases of health influencers

            // Article in French //
            - - -
            Les 4 biais des influenceurs•euses en santé

            sciencepresse.qc.ca/actualites

            AodeRelay boosted

            Wen » 🌐
            @Wen@mastodon.scot

            It is remarkable what sort of news the BBC do not wish to cover. I don’t watch their television so must take that on trust, but looking at the BBC news site, this is not even touched upon. Hunger strikes in NI, and more recent ones, yes, but this no. I do wonder who the corporation represents - it certainly is not most of the UK.

            archive.today/2025.12.08-20160

            Alt...The report linked: a picture of the BBC HQ. The title suggests a protest, but only one person is there; the text reads "BBC slammed for not covering Palestine prisoners' hunger strike".

            The result of a search for hunger strike on the BBC news site - not a word of this


              ᴮᵉⁿ ᴿᵒʸᶜᵉVOTE IN THE PRIMARIES » 🌐
              @benroyce@mastodon.social

              Yup, that's the New York Times 😒 😒 😒

              Alt...A group of people present fictional, preposterous but unfortunately credible New York Times OpEd titles that betray a disgusting bias https://www.instagram.com/xiandivyne