buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
> And remember! These people and companies in AI started destroying academia and #ethical work and oversight well before the release of ChatGPT.
This! People have warned about the harmful effects of #AI algorithms on our #society for _literally_ decades now:
RubyConf 2015 - Keynote: Consequences of an Insightful Algorithm by Carina C. Zona
https://www.youtube.com/watch?v=Vpr-xDmA2G4
Biased bots: Human prejudices sneak into AI systems (April 2017):
https://www.bath.ac.uk/announcements/biased-bots-human-prejudices-sneak-into-ai-systems/
1/2
Just scrolled through some physician-focused subreddit discussions of GLP-1 drugs (e.g., Ozempic). If you want to see anti-science biases driven by subculture norms...
Science is very clear about the effectiveness of behavioral weight loss treatments: it's absolutely awful. Success rates are so low that nobody should invest in any of them. If those success rates were applied to other medical issues, we'd be almost as angry about weight loss programs as we are about "pray the gay away" camps (not quite that angry... but close). There's also (IIRC) at least a little research on the standard MD "intervention" of just telling patients they need to eat less and exercise more (spoiler: even less effective than Weight Watchers).
The only treatments for being overweight with more than inconsistent and minimal success are surgery and drugs. That's it.
Now look at what MDs say to each other. They discuss how "unethical" it is to prescribe GLP-1 drugs for people who haven't shown behavioral evidence of "commitment" or "seriousness" about weight loss by following a strict diet/exercise regimen for a specific time period (usually a year or more, from what I've seen). So much patting each other on the back about the highly responsible action of refusing GLP-1 meds to people who either aren't overweight enough (i.e., they experience many health and other consequences, but the MD has a BMI line in their head the patient hasn't crossed, yet) or haven't done enough exercise or diet to convince the specific MD that they deserve the medications.
Please think about how ridiculous this is: thousands (or millions?) of medical professionals refusing to give tens or hundreds of millions of people a treatment that works until those people grind away at a treatment that doesn't work for a certain amount of time.
In case someone is going to show up and tell me "it's calories in/calories out!" Yes, of course it is. If it's so simple, why are literally billions of people struggling with that equation every day? You might as well tell people suffering from depression "it's just getting regular exercise, social interaction, and satisfying experiences every day" or someone with ADHD "it's just a matter of focusing more." Medical doctors might as well refuse to provide statins etc. to people with high cholesterol unless they show evidence of strict diet and exercise adherence for a year, first. Actually, that's not far from what they are saying with GLP-1s, and of course there are even some MDs who refuse to provide treatment for depression or ADHD until the people with those conditions "prove" they can beat the condition without any treatment.
The behavior is the problem: motivation lives in your brain, which gets hijacked by fat cells and, basically, a million years of evolution. Sure, if you ignore behavior, it's easy to solve behavioral issues. Everyone seems to recognize this until the behavioral issue gets too close to some programming from their childhood that touches issues of morality, responsibility, deservingness, goodness, etc. Then the science and rationality go out the window, except as a thin fig leaf for personal biases.
Amusing: when a mistake is made, the Polis double down…
I hope he gets significant damages.
Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation has been about, neither in the public sphere nor in the technical one.
All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational ones. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire the data needed to feed them. I look at FORPLAN or ChatGPT, and this is what I see.
#AI #GenAI #GenerativeAI #LLM #GPT #ChatGPT #LatentDiffusion #BigData #EcologicalRationality #LessIsMore #Bias #BiasBias
Early in the pandemic (April 2020) I started what became a long #Twitter thread on #gender #bias in academic #publishing.
https://twitter.com/petersuber/status/1252981139855355904
Starting today, I'm stopping it on Twitter and continuing it on #Mastodon.
Here's a rollup of the complete Twitter thread.
https://resee.it/tweet/1252981139855355904
Here's a nearly complete archived version in the @waybackmachine.
https://web.archive.org/web/20220908134128/https://twitter.com/petersuber/status/1252981139855355904
Watch this space for updates.
🧵
Update. New study using #ChatGPT to assess referee reports: "Female first authors received less polite reviews than their male peers… In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author."
https://elifesciences.org/articles/90230
The 4 biases of health influencers
// Article in French //
- - -
Les 4 biais des influenceurs•euses en santé
https://www.sciencepresse.qc.ca/actualites-scientifiques/2025/12/09/4-biais-influenceurs-sante
It is remarkable what sort of news the BBC do not wish to cover. I don't watch their television, so I must take that on trust, but looking at the BBC news site, this is not even touched upon. Hunger strikes in NI, and more recent ones, yes, but this, no. I do wonder who the corporation represents - it certainly is not most of the UK.