buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email: abucci@bucci.onl
Admin account: @abucci@buc.ci

Search results for tag #aiethics

AodeRelay boosted

Xavier Ashe :donor: » 🌐
@Xavier@infosec.exchange

I'm really proud of my little AI project. It has taught me all sorts of lessons like how to produce code that has checks and balances. Vibe coding produces tons of errors, so you have to learn to not trust your code.

Part of my checks and balances was to codify terms of service. When Meta bought Moltbook, the lawyers were quick to publish new T&Cs. Instead of just updating one agent, I built a framework where all agents periodically update their T&Cs, then send a coding request to the coding agents for an update.

I have two layers of control. I have an architect agent that reviews all PRs and commits. Then I have a guardian agent that provides my runtime controls. If it sees an agent breaking the rules, it can respond or alert.

Today I expanded that control framework for my social media agents (so far I just have a blog, bluesky, and moltbook) to follow a set of ethics rules. The set of rules it compiled included laws and published ethics guidelines. It feels good trying to build "good" AI agents.
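The runtime side of this two-layer setup could be sketched roughly like this. This is a minimal illustration only, assuming nothing about the poster's actual implementation; the names (`Guardian`, `add_rule`, `check`) are hypothetical:

```python
# Hypothetical sketch of the "guardian agent" runtime-control layer:
# a set of named rules is checked against each agent action, and any
# violation is recorded as an alert instead of being silently allowed.

from dataclasses import dataclass, field
from typing import Callable

Rule = Callable[[str], bool]  # returns True if the action is allowed


@dataclass
class Guardian:
    rules: dict[str, Rule] = field(default_factory=dict)
    alerts: list[str] = field(default_factory=list)

    def add_rule(self, name: str, rule: Rule) -> None:
        self.rules[name] = rule

    def check(self, action: str) -> bool:
        """Return True if every rule allows the action; record an alert otherwise."""
        violated = [name for name, rule in self.rules.items() if not rule(action)]
        if violated:
            self.alerts.append(f"{action!r} violated: {', '.join(violated)}")
            return False
        return True


guardian = Guardian()
guardian.add_rule("no_pii", lambda a: "ssn" not in a.lower())
guardian.add_rule("no_spam", lambda a: not a.startswith("BUY NOW"))

assert guardian.check("post weekly blog summary")
assert not guardian.check("post user SSN list")
assert len(guardian.alerts) == 1
```

The architect agent would sit one layer earlier in the same pattern, gating proposed commits rather than runtime actions.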

Here's Askew's take on it. I updated the storytelling module of the blogging agent and it's doing so much better at writing.

write.as/askew/trust-is-the-pr

    AodeRelay boosted

    Taran Rampersad » 🌐
    @knowprose@mastodon.social

    Much of the AI ethics conversation focuses on principles.

    Transparency.
    Fairness.
    Accountability.

    But ethics depends on capability.

    And capability depends on who controls the data.

    Data sovereignty determines ethical sovereignty.

    realityfragments.com/2026/03/1

    UNESCO infographic illustrating four core values for AI ethics: respect for human rights and dignity, peaceful and interconnected societies, diversity and inclusiveness, and environmental sustainability.


      AodeRelay boosted

      Rumpy » 🌐
      @rumpppy@expressional.social

      What are the ethical implications of advancements in artificial intelligence and robotics?

        Yogi Jaeger boosted

        Per Axbom » 🌐
        @axbom@axbom.me

        A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. One familiar declaration is this one: "It's really no different from a hammer". I was compelled to make a poster to address these claims. Steal it, share it, print it and use it where you see fit.

        https://axbom.com/hammer-ai/

        #AiEthics #DigitalEthics

        Poster: If a hammer was like AI. 

Illustration of a hammer (a line drawing) and the quote "It's just a tool".  Various topics surround the hammer and have lines drawn to the hammer, as if explaining what it is made of. 

Obscured data theft.
It copies the design of most constructions in the western, industrialised world without consent and strives to mimic the most average one of those.  

Bias & injustice.
By design, the hammer will most often just hit the thumb of Black, Brown and underserved people. 

Carbon cost.
The energy use is about 100 times greater than achieving a similar result with other tools. 

Monoculture and power concentration.
The hammer is made by a small, western and wealthy subset of humanity – creating costly barriers to entry.

Invisible decision-making.
Computations will “estimate” your aim, tend to miss the nail and push for a different design. Often unnoticeably.  

Accountability projection.
If the hammer breaks and hurts someone, the manufacturer will claim the hammer has “a mind of its own” and they can’t help you. 

Jerry-building (Misinformation).
Optimised for building elaborate structures that don’t hold up to scrutiny.  

Data/privacy breaches.
May reveal blueprints from other people using a hammer from the same manufacturer, or other personal data that happened to be part of its development. 

Moderator trauma.
Low-wage moderators work around the clock watching filth and violence to ensure the hammer can’t be used to build brothels or torture chambers. Unless someone hacks it of course.  

The footer reads: "You can’t own it, but you can subscribe. Perpetually." 

CC BY-SA Per Axbom. version 2, June 2023


          AodeRelay boosted

          Taran Rampersad » 🌐
          @knowprose@mastodon.social

          We need to be careful of scapegoating when humans are responsible.

          And we also need to hold those that poisoned the tree to account.

          We should not penalize a minority for the actions of a group of people that do bad things.

          knowprose.com/2026/02/ai-ethic

            AodeRelay boosted

            knoppix » 🌐
            @knoppix95@mastodon.social

            Google’s Gemini A.I. now scans your entire inbox to “help” you summarize, reply & organize. 📬
            That’s not assistance — that’s surveillance wrapped in productivity branding. 🔍

            If your emails need an opt‑out clause, maybe the feature shouldn’t exist by default. ⚠️

            🔗 nytimes.com/2026/01/15/technol

              AodeRelay boosted

              ReallyCanadianFly » 🌐
              @reallyflygreg@mstdn.ca

              Finally a great use for AI. Basically a programmer used it to siphon all the user data from a white supremacist dating site. That's a very basic explanation that doesn't come close to revealing the magnitude of the awesome here!

              okstupid.lol/

                AodeRelay boosted

                Jody Hughes » 🌐
                @Gaolaitch@cupoftea.social

                Ah damn. I strongly suspect that the ‘junior’ staffer at the accountancy firm, with which* I have been in email correspondence over the past few weeks, is an LLM in disguise.

                I’ve paid the firm for tax accountancy services not yet rendered, which I now deeply regret.

                Is this the new normal for a business? If I’d known I would have found a different accountant. Ugh ugh ugh.

                *correct grammar

                  AodeRelay boosted

                  myrmepropagandist » 🌐
                  @futurebird@sauropods.win

                  Can you help me brainstorm some thorny "AI Ethics Puzzles"?

                  These are little scenarios meant to act as a starting point in discussions about the ethics of AI.

                  I will post some examples in response to this post, but I'd love to find some even more thorny ones.

                  Ideally a puzzle shouldn't have a totally obvious solution when presented to people with a wide range of views. Make up something, or share what you have encountered.

                    AodeRelay boosted

                    Renatomancer » 🌐
                    @Renatomancer@vmst.io

                    AodeRelay boosted

                    TechNadu » 🌐
                    @technadu@infosec.exchange

                    A researcher reported a major data exposure involving an AI image-generation tool where over one million files were stored in an unprotected database. The issue was responsibly disclosed and later secured.

                    The case highlights ongoing concerns around:
                    • Image-dataset security
                    • Nonconsensual content misuse
                    • Cloud storage exposure risks
                    • The need for clearer AI data-handling standards

                    Thoughts on how AI platforms should strengthen privacy controls?
                    💬 Join the discussion
                    👍 Boost & Follow for more insights

                    Source: expressvpn.com/blog/magicedit-

                    Popular AI Generator Exposed Over One Million Images Including DeepFakes and Nudify Face Swaps


                      AodeRelay boosted

                      Bupu » 🌐
                      @bupu@lgbtqia.space

                      There's a new official Government of Canada petition to establish the same likeness rights that Denmark recently passed, protecting people from having their body or voice used by AI without their consent.

                      If you're Canadian, sign and share!

                      ourcommons.ca/petitions/en/Pet

                        AodeRelay boosted

                        Renatomancer » 🌐
                        @Renatomancer@vmst.io

                        3 ★ 4 ↺

                        Anthony » 🌐
                        @abucci@buc.ci

                        The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.
                        From "The reanimation of pseudoscience in machine learning and its ethical repercussions", here: https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0. It's open access.

                        In other words ML--which includes generative AI--is smuggling long-disgraced pseudoscientific ideas back into "respectable" science, and rejuvenating the harms such ideas cause.


                          AodeRelay boosted

                          Wulfy—Speaker to the machines » 🌐
                          @n_dimension@infosec.exchange

                          Critical reasoning vs Cognitive Delegation

                          Old School Focus:

                          Building internal cognitive capabilities and managing cognitive load independently.

                          Cognitive Delegation Focus:

                          Orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.

                          We can still go for a jog or hunt our own deer, but for reaching the stars we, the Apes, do what Apes do best: use tools to build on our cognitive abilities. AI is a tool.

                          3/3

                          A large table comparing unassisted critical reasoning vs "Cognitive Delegation", leveraging AI for higher order thinking.
