buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #bigdata

AodeRelay boosted

[?]didleth 🇵🇱 🌈 🇺🇦 🇪🇺 ⚡ » 🌐
@didleth@mastodon.social

The USA wants not only a 5-year social-media history from incoming travelers, but also access to European biometric databases from the EU. I invite you to read my article at oko.press:

oko.press/usa-chca-naszych-dan

[?]AnarchoNinaAnalyzes » 🌐
@AnarchoNinaAnalyzes@social.treehouse.systems

If you want to understand the horrifying future these tech billionaire nazis have in store for humanity, it helps to understand what parts of our "glorious past" as a species they're looking to recreate.

This interview with University of Illinois Urbana-Champaign information sciences and media studies professor Anita Say Chan talks a bit about the history of data and eugenics, what it means that guys like Elon Musk and Peter Thiel (along with most of Silicon Valley's movers and shakers) are devout believers in eugenicist philosophies, and how that's reflected in their efforts to reshape society in horrifying, and objectively fascist ways.

In terms of our discussion about the uber rich fascists who heavily influence the Trump regime they bought and paid for, I'm mostly including this here to point out that these guys have gone on record in support of a lot of openly fascist shit and for the most part, your media pretends we don't already know these guys are nazis; it's not a secret, it's just not "worth mentioning" when media orgs are fawning over technofascist billionaires destroying our planet and our lives.

motherjones.com/politics/2025/

Eugenics Isn’t Dead—It’s Thriving in Tech

"Big Tech successors like Musk and PayPal billionaire-turned-arms dealer Peter Thiel have overtly promoted fraudulent race science, with Musk amplifying users on X who argue that people of European descent are biologically superior. In response to another user’s deleted post suggesting that students at historically Black institutions have lower IQs, Musk posted, “It will take an airplane crashing and killing hundreds of people for them to change this crazy policy of DIE”—diversity, equity, and inclusion, misspelled. In 2016, Thiel buddied up to a prominent white nationalist, and, the same year, was said by a Stanford dorm-mate to have complimented South Africa’s “economically sound” system of racial apartheid."

    AodeRelay boosted

    [?]Pseudonymous :antiverified: » 🌐
    @VictimOfSimony@infosec.exchange

    [?]Thom » 🌐
    @thom@swiss.social

    RE: ec.social-network.europa.eu/@E

    "In Europe, no company will make money by violating our fundamental rights."

    Well, I'm curious to see how that goes!

    AodeRelay boosted

    [?]European Commission » 🌐
    @EUCommission@ec.social-network.europa.eu

    We are opening an investigation into Grok because we believe that X may have breached the DSA.

    We have seen over the last weeks and months antisemitic content, non-consensual deepfakes of women, and child sexual abuse material.

    In Europe, no company will make money by violating our fundamental rights.

    More: link.europa.eu/Fh8h84

      AodeRelay boosted

      [?]Script Kiddie » 🌐
      @scriptkiddie@anonsys.net

      AodeRelay boosted

      [?]PKs Powerfromspace1 » 🌐
      @Powerfromspace1@mstdn.social

      In The Lord of the Rings, the Palantír always served power.

      Now look at Palantir Technologies and ask yourself, did we miss the point of the story… or follow it a bit too closely?

      If Tolkien was warning us, why does this feel so familiar?

      bsky.app/profile/jerradchristi

        0 ★ 0 ↺

        [?]Anthony » 🌐
        @abucci@buc.ci

        Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small data methods demonstrably outperform complex big data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks on which models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent. Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decisionmaking and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell). This means in particular that applying LLMs to large world problems directly contradicts the mathematical foundations on which their efficacy is (supposedly) grounded.

        Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

        All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed the models. I look at FORPLAN or ChatGPT, and this is what I see.
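The "less is more" and "bias bias" points above can be made concrete with a toy sketch. The following minimal Python example uses invented data (a small noisy sample from a linear trend; the split and polynomial degrees are arbitrary choices for illustration): a simple straight-line fit generalizes to held-out points better than a degree-5 polynomial, which interpolates the training points exactly and swings wildly between and beyond them. Adding complexity makes the model worse, exactly the outcome the bias bias ignores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small, noisy sample from a simple linear trend (toy data for illustration).
n = 12
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.5, n)

# Hold out every other point for evaluation.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def holdout_error(degree):
    """Fit a polynomial of the given degree on the training half,
    return mean squared error on the held-out half."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    return float(np.mean((pred - y_test) ** 2))

# The simple model (a line) vs. a complex one (degree 5, which
# passes through all 6 training points exactly).
print("line  MSE:", holdout_error(1))
print("deg-5 MSE:", holdout_error(5))
```

With this seed the degree-5 fit's held-out error is far larger than the line's. The mechanism here is the ordinary bias-variance tradeoff, which is one driver of "less is more" results; the ecological-rationality literature documents others, such as simple heuristics matching or beating regression on real prediction tasks.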


          AodeRelay boosted

          [?]Dickenhobelix » 🌐
          @dickenhobelix@chaos.social

          A bit of advertising on my own behalf and for my employer: if you happen to be interested in writing a in the area of / / for the at the Stuttgarter Straßenbahnen, feel free to reach out to me and we can talk through the details together.

          Boost welcome

            [?]Kevin Karhan :verified: » 🌐
            @kkarhan@infosec.space

            AI, Enshittification, Rant [SENSITIVE CONTENT]

            @robinsyl +9001%

            At best "" is just in a different packaging, at worst it's that gets curbstomped by & and more often than not it's that literally 'es entire businesses, because the assholes that run the Trillion-Dollar refuse to adhere to basic as standardized decades ago with robots.txt.

            • At one employer I basically had to block , , , & their suppliers just to stop them from constantly DDoSing clients and fucking up our billing of said clients!
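For reference, the robots.txt mechanism mentioned above is just a plain-text file served at a site's root; well-behaved crawlers fetch it and honor its rules. A minimal sketch (GPTBot and CCBot are real AI-crawler user-agents, but the exact set of names any given site needs to list is situational, and, as the post complains, compliance is entirely voluntary):

```text
# /robots.txt -- ask AI crawlers to stay out, leave everyone else alone
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```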

            Also is part of the problem, not the solution…

              AodeRelay boosted

              [?]Mark » 🌐
              @paka@mastodon.scot

              is control: what we learned from a year investigating the ’s ties to big

              Our reporting revealed a symbiotic relationship between the and – with implications for the future of

              theguardian.com/world/2025/dec

              #1984

                AodeRelay boosted

                [?]Pseudonymous :antiverified: » 🌐
                @VictimOfSimony@infosec.exchange

                AodeRelay boosted

                [?]Pseudonymous :antiverified: » 🌐
                @VictimOfSimony@infosec.exchange

                AodeRelay boosted

                [?]JL Johnson :veri_mast: » 🌐
                @User47@vmst.io

                Absolutely, 100%, no way, in hell.

                I’ll take a 1960s Roper refrigerator over this overpriced, tech-ridden garbage. The sales guy approached me as I was laughing at it and mentioned Samsung has decided to start running ads on them. Folks who fell for the scam can now have ads magically appear in their kitchens.

                Stainless steel side-by-side Samsung refrigerator with a gigantic iPad-looking thing covering almost the entirety of one of the half panels. It’s already showing an error message.


                Another side-by-side, only this one appears to have two giant monitors, and advertises AI vision to suggest recipes based on what is already in the refrigerator that it’s spying on and reporting back to advertisers.


                  AodeRelay boosted

                  [?]Mark » 🌐
                  @paka@mastodon.scot

                  Once this infrastructure exists, mission creep is inevitable.

                  - What starts as ‘voluntary’ becomes mandatory
                  - A system that is just for workers expands to everyone, including children

                  action.openrightsgroup.org/tel

                  [2/2]

                    AodeRelay boosted

                    [?]Script Kiddie » 🌐
                    @scriptkiddie@anonsys.net

                    AodeRelay boosted

                    [?]Script Kiddie » 🌐
                    @scriptkiddie@anonsys.net

                    How We Volunteered for Digital Slavery

                    Once upon a time, software was free — not in the “free trial” sense, but actually free.
                    It lived in the halls of universities, passed between curious minds on floppy disks, shared in the spirit of collaboration and discovery. Hardware was the business; software was the art.

                    Then came a young man named Bill Gates. Fueled by BASIC and caffeine, he learned to code thanks to that culture of openness — and then, he turned freedom into monopoly. Gates dropped out of Harvard, founded Microsoft, and declared that software would no longer be shared; it would be sold. The age of commercial code had begun.

                    Microsoft’s first weapon was MS-DOS, a text-based operating system that quietly took over the newborn PC world through IBM’s early machines. Then came Windows, the glossy illusion — a friendly graphical interface that promised empowerment while locking users into Microsoft’s walled garden.

                    Like a corporate version of The Godfather, Microsoft “made offers” competitors couldn’t refuse. DR-DOS? Made intentionally incompatible. Linux? Targeted in internal memos. Internet Explorer? Forced onto every desktop “for your convenience.” The result: a near-total monopoly achieved not by innovation alone, but by control, coercion, and corporate muscle.

                    Fast forward to today. Microsoft’s empire no longer needs to hide its power — it runs your computer, your documents, your identity. Telemetry, the friendly name for mass surveillance, quietly records how you use your device. With Windows 11, you can’t even log in without a Microsoft account — the digital equivalent of Big Brother’s stamp of approval.

                    Imagine Orwell’s 1984 rewritten for the 21st century: Winston Smith doesn’t fear the telescreen anymore. He bought it, installed it, and clicked “I agree.” Even students are being tracked through Microsoft Office. Their learning curves analyzed. Their data stored. Their digital lives quantified — all under the illusion of “productivity.”

                    Steve Ballmer once called Linux “a cancer.” But history, ever fond of irony, flipped the metaphor: Microsoft became the cancer instead — feeding on the open-source world it once despised. Today, Windows happily integrates Linux as a feature, commodifying the very freedom it tried to destroy. Microsoft didn’t defeat open source; it absorbed it. Like the Borg. Like the Matrix. Like capitalism always does.

                    Meanwhile, the cost of this corporate control extends far beyond money. Windows 11’s unnecessary hardware requirements have already forced millions of functional computers into landfills — a quiet environmental catastrophe disguised as an “upgrade.” The planet pays for your shiny new taskbar.

                    The illusion of control doesn’t stop at your desktop. As Edward Snowden revealed, Microsoft has shared user data with the NSA — turning your operating system into a global surveillance node. When the International Criminal Court made decisions the U.S. didn’t like, its staff found their Microsoft email access revoked. That’s not science fiction. That’s administrative reality. And it could happen to anyone. We once feared governments might use technology to control us. Instead, we let corporations do it — and we pay a monthly subscription for the privilege.

                    But not all hope is lost. There’s still the red pill — Linux. It’s not perfect, but it’s free, open, and community-driven. No forced logins, no corporate backdoors, no hidden surveillance. Just software built by people, for people. Switching to Linux isn’t rebellion for rebellion’s sake. It’s a vote — for transparency, for sustainability, for democracy. The blue screen of illusion can still be escaped.

                    Location: Matrix

                      AodeRelay boosted

                      [?]Pseudonymous :antiverified: » 🌐
                      @VictimOfSimony@infosec.exchange

                      [?]Script Kiddie » 🌐
                      @scriptkiddie@anonsys.net

                      Bill Gates is listening ... 😱☢️⚠️

                      Data protection experts criticize the extensive telemetry data that collects from every , which provides deep insight into their and usage.


                      Location: Matrix

                        AodeRelay boosted

                        [?]Kevin Karhan :verified: » 🌐
                        @kkarhan@infosec.space

                        And yes, C-level are for the most part a hindrance in any as they destroy R&D for "making quarterly number go up", then pocket the and leave till the shitshow inevitably crashes...

                        I mean, look at the shitshow is now:

                        • Gutted and
                        • Worst in Corporate History ( !)
                        • All their products are either only riding on the decreasing momentum of whilst the overall quality doesn't improve, and in some cases even gets worse (see the Anti-Tick Collars being developed solely for the market, cuz they shuttered the division of the subsidiary, thus making them completely ineffective, as American and European ticks ain't the same species!)

                          AodeRelay boosted

                          [?]didleth 🇵🇱 🌈 🇺🇦 🇪🇺 ⚡ » 🌐
                          @didleth@mastodon.social

                          Remember how yesterday, after the announcement of the cooperation between the MON [Poland's Ministry of National Defence] and Palantir, I complained that the media were mostly running uncritical puff pieces? Well, I'm not complaining anymore ;)
                          I recommend Anna Wittenberg's article in WNP:
                          wnp.pl/bezpieczenstwo/palantir

                            AodeRelay boosted

                            [?]Wulfy—Speaker to the machines » 🌐
                            @n_dimension@infosec.exchange

                            For well over 30 years, I have been sounding like a broken record:

                            is super important, because we are only one election away from an regime that will use to persecute people.

                            Each time, normies dismissed my point as alarmist and hysterical, and here we are:

                            “ICE is buying millions of people’s geolocations and weaponizing all government data toward immigration,” Guariglia said.

                            Source: nyunews.com/culture/iequity/20

                              AodeRelay boosted

                              [?]gtbarry » 🌐
                              @gtbarry@mastodon.social

                              TikTok Won’t Say If It’s Giving ICE Your Data

                              TikTok has repeatedly declined to answer questions about whether it has shared or is sharing private user information with the Department of Homeland Security or Immigration and Customs Enforcement (ICE). The policy changes, combined with the company’s silence about them, leave open the possibility that it could do so if asked.

                              forbes.com/sites/emilybaker-wh

                                3 ★ 5 ↺
                                Kevin Davy boosted

                                [?]Anthony » 🌐
                                @abucci@buc.ci

                                We in the US are living in a eugenic modernity, by the way, when the putative head of "Health and Human Services" is making the kinds of statements he makes about autistic people. This is not just an anti-vaccination meme; it's an attempt to subordinate an entire class of people, suggesting they are subhuman for being who they are. This is a eugenic move. One has to wonder whether the "human services" people in HHS imagine themselves providing has to do with "improving the human stock" of the nation, the services not being provided to humans but instead having humans as an output.

                                Rather than get mired in the thought-terminating arguments around political parties or political factions, though, I think we'd do well to reflect on what sorts of other ways of thinking feed into this one: the measured life; standardized testing; the internet of things (sensors); tracking apps of various kinds; electronic health records; data science as a profession and Big Data generally; predictive modeling; generative AI and other optimization-oriented or productivity-promising technology. All of these function to render life as an object of knowledge in one way or another. All of them trace their origins through eugenics and the patterns of thought that led to it, and all of them threaten to enable and enhance further eugenic thinking. This is not to say these things are always all bad; this is meant to be a reflection on what exactly they're for.

                                Why read the number of steps your FitBit told you you took today, unless there were some sense in which you want your future self to be better than your present self? It's not an accident that this is called "physical fitness", "fitness" being the Darwinian concept describing which organisms should survive. Why subject children to standardized testing unless there were some belief it made them better students? To what end tends to be left out. Why adopt a technology meant to improve productivity, unless you're of the belief that improvement (optimization) were even possible?

                                Generally speaking, if one is able to bring oneself to believe that a human being is made better by a data-informed technical intervention, isn't one playing the same game as these anti-autism anti-vaxxers, just with different terminology? If your answer to this provocation is that your data is better than theirs or that you're more aligned with reality than they are--some variation of "the science is on our side"--you've ceded the territory: this is more of the same optimization logic that brought us to this point to begin with. I think we have no choice but to do better than this.

                                That's my reflection anyway.


                                  2 ★ 3 ↺
                                  emenel boosted

                                  [?]Anthony » 🌐
                                  @abucci@buc.ci

                                  Looks like a timely read:

                                  Predatory Data

                                  Eugenics in Big Tech and Our Fight for an Independent Future
                                  https://bookshop.org/p/books/predatory-data-eugenics-in-big-tech-and-our-fight-for-an-independent-future-anita-say-chan/21312207

                                  There's a nearly straight line from 20th century eugenics to 21st century big data and data science. Google, the bastion of big data, was founded by two Stanford graduate students; Stanford was founded by a eugenicist and instituted eugenics principles. Francis Galton--inventor of the regression analysis that forms the backbone of data science--was "hot or notting" London with a counter hidden in his pocket long before Harvard-age Zuckerberg recuperated the same with the favorite quantification technology of our day, computers.

                                  "The measured life" is a eugenics concept. All these doohickeys that collect data with the promise of making your body a bit more "fit"? Eugenicist in origin. Eugenics is about "optimizing" the physical "fitness" of people. Apps that help you learn, make you more mentally "fit"? Also have origins in eugenics. Eugenics is also about "optimizing" the mental "fitness" of people. Hence the obsession with IQ.

                                  This isn't to say you shouldn't take care of your body and mind in whichever ways you want. I do think it's important, though, to periodically reflect on, and ask yourself hard questions about, what's driving those efforts and what the goals really are. Part of understanding why eugenics thinking is resurging so hard and fast in the US is understanding its roots, where that type of thinking comes from. It's also important to reflect on where the apps and devices you use to achieve these goals come from. How many come directly or indirectly from Stanford, which was built by eugenicists to achieve eugenic goals, and its offshoots?

                                  Trump and Musk are literally repeating themes from Francis Galton's eugenics out in the open now. They're confident they can get away with it without pushback because the ground was laid long ago. But eugenics didn't suddenly become bad again because coarse people started saying the quiet part out loud. It's always been bad thinking, bad science, and bad morality.


                                    8 ★ 4 ↺
                                    Electrojcr boosted

                                    [?]Anthony » 🌐
                                    @abucci@buc.ci

                                    Here's a hot take on Microsoft Recall: it's an attempt to create a new data source to exploit, because the internet as a data source has been squeezed of most of its value to large-language-model-based AI and there is no other ready-to-use, large-scale, human-generated data source. Imagine millions, billions of people generating data every time they touch their computers; that's a big data source with some amount of built-in human curation. High quality for AI.

                                    I've written before on here about my favorite metaphor, eating your seed corn ( https://buc.ci/abucci/p/1705679109.757852 ), but I also think there's a decent analogy with peak oil as well. Microsoft Recall is the tar sands and oil shale of the "data is the new oil" era. The internet had the easy fields; now we're moving on to the dirty, dangerous, environment-destroying ones. If the pattern follows that of oil, wells will be drilled closer and closer together over time to slurp out value as rapidly as possible at the expense of long-term field health.