buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #bigdata

AodeRelay boosted

Eugene » 🌐
@datastory@mstdn.ca

🏡 Here is the data analysis for Calgary, Summer 2025. This chart shows the relationship between vegetation density (NDVI) and Land Surface Temperature (LST).

🔥 The data reveals a critical "tipping point": vegetation only starts effectively cooling the environment once it reaches a specific density threshold. Below this threshold (the left side of the curve), green spaces stay just as hot as the surrounding concrete.
Sparse or isolated trees don't act as air conditioners—they "burn" in the urban furnace right along with us.

❗ What does this mean for Calgary? Simply planting a few scattered trees isn't enough. To actually move the needle on temperature, we need dense, healthy green belts. Otherwise, it’s just a waste of water and resources.

🔗 Link to the research: datastory.org.ua/calgarys-summ

Urban Heat and Vegetation in Calgary.
Based on Median Land Surface Temperature and Median Normalized Difference Vegetation Index (Summer 2025 Composites)

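The "tipping point" described above can be illustrated with a minimal sketch: entirely synthetic data and a hypothetical hinge-fit threshold search, not the linked study's actual method or numbers.

```python
import numpy as np

# Hypothetical illustration of a vegetation "tipping point": land surface
# temperature (LST) stays flat below an NDVI threshold, then falls above it.
rng = np.random.default_rng(0)
ndvi = rng.uniform(0.0, 0.8, 500)
# Synthetic LST: ~40 °C below a 0.35 NDVI threshold, cooling beyond it
lst = 40 - 25 * np.clip(ndvi - 0.35, 0, None) + rng.normal(0, 0.5, 500)

def tipping_point(x, y, candidates):
    """Pick the breakpoint that minimizes squared error of a
    flat-then-linear (hinge) fit: y = b0 + b1 * max(x - c, 0)."""
    best_sse, best_c = None, None
    for c in candidates:
        hinge = np.clip(x - c, 0, None)
        X = np.column_stack([np.ones_like(x), hinge])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sse = ((y - X @ beta) ** 2).sum()
        if best_sse is None or sse < best_sse:
            best_sse, best_c = sse, c
    return best_c

# Should recover a value near the synthetic 0.35 breakpoint
print(tipping_point(ndvi, lst, np.linspace(0.1, 0.7, 61)))
```

Below the recovered threshold the hinge term is zero, which is exactly the post's point: sparse greenery contributes no cooling signal at all.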

    AodeRelay boosted

    DoomsdaysCW » 🌐
    @DoomsdaysCW@kolektiva.social

    More data centers coming to as residents complain about noise, bills: What to know

    Story by Jason Knowles, Maggie Green, February 17, 2026

    "Data centers are moving in. They power everything from to , but critics say they are noisy and can jack up your bill.

    Now, the I-Team and ABC News are finding that more than 3,000 data centers are already operating nationwide, with at least 1,000 more planned. Some are in the area."

    msn.com/en-us/news/us/more-dat

      AodeRelay boosted

      Kevin Karhan :verified: » 🌐
      @kkarhan@infosec.space

      @themue @t3n Making "" "data-protection-compliant" is as impossible as using - & -compliantly, because forbids the latter.

      • With "" / it's even worse;
        • At best it's an extremely slow frontend for
        • At worst it's less secure than an unencrypted USB stick with every password on it, left lying in the break room.

      Using is a -solution, cf. and :

      • In search of a problem that is not technological in nature.

      youtube.com/watch?v=YQ_xWvX1n9

      AodeRelay boosted

      PaulaToThePeople 😷 » 🌐
      @PaulaToThePeople@climatejustice.social

      What is surveillance-fascism?
      (a.k.a. what is the Fediverse missing?)

      * big data
      (collection of tons of private data used for profiling and authoritarian control)
      * attention harvesting
      (reducing attention span and causing addiction)
      * ragebaiting
      (hatred-boosting algorithms & government botfarms)
      * hyper-capitalism
      (ads, more ads, profit, hidden ads, influencers, interrupting ads, plugs, camouflaged ads)
      * AI

        Tino Eberl » 🌐
        @tinoeberl@mastodon.online

        Ah, wants even more .

        Google restricts the feature set of for users who are not signed in.

        Without an account, , , and visiting hours are missing. Further place details are also hidden and marked as a restricted view. With sign-in, all the data is available again, while route planning remains possible without an account.

        golem.de/news/googles-kartendi

          AodeRelay boosted

          Kevin Karhan :verified: » 🌐
          @kkarhan@infosec.space

          Autonomie und Solidarität » 🌐
          @autonomysolidarity@todon.eu

          Total Trust
          If this is the present, what does our future look like?

          "What happens when the protection of our is disregarded? How comprehensive is the information about our activities and convictions, dislikes, preferences, and habits gleaned from big data? Can we make sure this data doesn't fall into the wrong hands? Is it perhaps already in the wrong hands?

          "Total Trust" is a deeply unsettling and moving film about the uncanny power of and , about their use and misuse in public and private life, about censorship and self-censorship. Through the haunting fates of people in China who have been surveilled, intimidated, and even tortured, "Total Trust" tells of the dangers of today's technologies in the hands of an unchecked power. With China as a mirror, the film sounds the alarm: the growing use of digital surveillance tools has long since become a global phenomenon, including in democratically governed countries." (press text)

          yt.artemislena.eu/watch?v=ZDBL

          AodeRelay boosted

          didleth 🇵🇱 🌈 🇺🇦 🇪🇺 ⚡ » 🌐
          @didleth@mastodon.social

          The USA wants not only a five-year social media history from visitors, but also access to the EU's European biometric databases. I invite you to read my article at oko.press:

          oko.press/usa-chca-naszych-dan

          AodeRelay boosted

          PKs Powerfromspace1 » 🌐
          @Powerfromspace1@mstdn.social

          In The Lord of the Rings, the Palantír always served power.

          Now look at Palantir Technologies and ask yourself, did we miss the point of the story… or follow it a bit too closely?

          If Tolkien was warning us, why does this feel so familiar?

          bsky.app/profile/jerradchristi

            0 ★ 0 ↺

            Anthony » 🌐
            @abucci@buc.ci

            Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small-data methods demonstrably outperform complex big-data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks on which models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent. Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decision-making and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell). This means in particular that applying LLMs to large-world problems directly contradicts the mathematical foundations on which their efficacy is (supposedly) grounded.

            Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

            All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed the models. I look at FORPLAN or ChatGPT, and this is what I see.


              AodeRelay boosted

              Dickenhobelix » 🌐
              @dickenhobelix@chaos.social

              A bit of advertising on my own behalf and for my employer: if you happen to be interested in writing a in the area of / / for the at the Stuttgarter Straßenbahnen, feel free to get in touch and we can talk through the details together.

              Boost welcome

                Kevin Karhan :verified: » 🌐
                @kkarhan@infosec.space

                AI, Enshittification, Rant [SENSITIVE CONTENT]

                @robinsyl +9001%

                At best "" is just in different packaging, at most it's that gets curbstomped by & , and more often than not it's that literally DDoS'es entire businesses because the assholes that run the Trillion-Dollar refuse to adhere to basic as standardized decades ago with robots.txt

                • At one employer I basically had to block , , , & their suppliers just to stop them from constantly DDoSing clients and fucking up our billing of said clients!

                Also is part of the problem, not the solution…

                  AodeRelay boosted

                  Mark » 🌐
                  @paka@mastodon.scot

                  is control: what we learned from a year investigating the ’s ties to big

                  Our reporting revealed a symbiotic relationship between the and – with implications for the future of

                  theguardian.com/world/2025/dec

                  #1984

                    AodeRelay boosted

                    Max Resing » 🌐
                    @resingm@infosec.exchange

                    May I present to you a lexicographically sortable alternative? A simple and intuitive design is presented in ULID. It embeds a timestamp in the first 48 bits, followed by a random suffix of 80 bits.
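That layout can be sketched in a few lines. A minimal, hedged illustration (assuming the standard ULID choices: Crockford base32 and a fixed 26-character encoding; the function name is mine):

```python
import os
import time

# Crockford's base32 alphabet used by ULID (no I, L, O, or U),
# in ascending ASCII order, so string order matches numeric order.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid() -> str:
    """Generate a 26-character ULID: 48-bit millisecond timestamp
    followed by 80 random bits, encoded in Crockford base32."""
    ts = int(time.time() * 1000) & ((1 << 48) - 1)   # 48-bit timestamp
    rand = int.from_bytes(os.urandom(10), "big")     # 80 random bits
    value = (ts << 80) | rand                         # 128 bits total
    chars = []
    for _ in range(26):                               # 26 * 5 = 130 bits
        chars.append(ALPHABET[value & 31])
        value >>= 5
    return "".join(reversed(chars))
```

Because the timestamp occupies the most significant bits and the alphabet is sorted, two ULIDs generated in different milliseconds compare lexicographically in creation order, which is the sortability property the post highlights.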

                      AodeRelay boosted

                      JL Johnson :veri_mast: » 🌐
                      @User47@vmst.io

                      Absolutely, 100%, no way, in hell.

                      I’ll take a 1960s Roper refrigerator over this overpriced, tech-ridden garbage. The sales guy approached me as I was laughing at it and mentioned Samsung has decided to start running ads on them. Folks who fell for the scam can now have ads magically appear in their kitchens.

                      Stainless steel side-by-side Samsung refrigerator with a gigantic iPad-looking thing covering almost the entirety of one of the half panels. It's already showing an error message.


                      Another side-by-side, only this one appears to have two giant monitors, and advertises AI vision to suggest recipes based on what is already in the refrigerator that it's spying on and reporting back to advertisers.


                        3 ★ 5 ↺
                        Kevin Davy boosted

                        Anthony » 🌐
                        @abucci@buc.ci

                        We in the US are living in a eugenic modernity, by the way, when the putative head of "Health and Human Services" is making the kinds of statements he makes about autistic people. This is not just an anti-vaccination meme; it's an attempt to subordinate an entire class of people, suggesting they are subhuman for being who they are. This is a eugenic move. One has to wonder whether the "human services" people in HHS imagine themselves providing has to do with "improving the human stock" of the nation, the services not being provided to humans but instead having humans as an output.

                        Rather than get mired in the thought-terminating arguments around political parties or political factions, though, I think we'd do well to reflect on what sorts of other ways of thinking feed into this one: the measured life; standardized testing; the internet of things (sensors); tracking apps of various kinds; electronic health records; data science as a profession and Big Data generally; predictive modeling; generative AI and other optimization-oriented or productivity-promising technology. All of these function to render life as an object of knowledge in one way or another. All of them trace their origins through eugenics and the patterns of thought that led to it, and all of them threaten to enable and enhance further eugenic thinking. This is not to say these things are always all bad; this is meant to be a reflection on what exactly they're for.

                        Why read the number of steps your FitBit told you you took today, unless there were some sense in which you want your future self to be better than your present self? It's not an accident that this is called "physical fitness", "fitness" being the Darwinian concept describing which organisms should survive. Why subject children to standardized testing unless there were some belief it made them better students? To what end tends to be left out. Why adopt a technology meant to improve productivity, unless you're of the belief that improvement (optimization) were even possible?

                        Generally speaking, if one is able to bring oneself to believe that a human being is made better by a data-informed technical intervention, isn't one playing the same game as these anti-autism anti-vaxxers, just with different terminology? If your answer to this provocation is that your data is better than theirs or that you're more aligned with reality than they are--some variation of "the science is on our side"--you've ceded the territory: this is more of the same optimization logic that brought us to this point to begin with. I think we have no choice but to do better than this.

                        That's my reflection anyway.


                          2 ★ 3 ↺
                          emenel boosted

                          Anthony » 🌐
                          @abucci@buc.ci

                          Looks like a timely read:

                          Predatory Data

                          Eugenics in Big Tech and Our Fight for an Independent Future
                          https://bookshop.org/p/books/predatory-data-eugenics-in-big-tech-and-our-fight-for-an-independent-future-anita-say-chan/21312207

                          There's a nearly straight line from 20th century eugenics to 21st century big data and data science. Google, the bastion of big data, was founded by two Stanford graduate students; Stanford was founded by a eugenicist and instituted eugenics principles. Francis Galton--inventor of the regression analysis that forms the backbone of data science--was "hot or notting" London with a counter hidden in his pocket long before Harvard-age Zuckerberg recuperated the same with the favorite quantification technology of our day, computers.

                          "The measured life" is a eugenics concept. All these doohickeys that collect data with the promise of making your body a bit more "fit"? Eugenicist in origin. Eugenics is about "optimizing" the physical "fitness" of people. Apps that help you learn, make you more mentally "fit"? Also have origins in eugenics. Eugenics is also about "optimizing" the mental "fitness" of people. Hence the obsession with IQ.

                          This isn't to say you shouldn't take care of your body and mind in whichever ways you want. I do think it's important, though, to periodically reflect on, and ask yourself hard questions about, what's driving those efforts and what the goals really are. Part of understanding why eugenics thinking is resurging so hard and fast in the US is understanding its roots, where that type of thinking comes from. It's also important to reflect on where the apps and devices you use to achieve these goals come from. How many come directly or indirectly from Stanford, which was built by eugenicists to achieve eugenic goals, and its offshoots?

                          Trump and Musk are literally repeating themes from Francis Galton's eugenics out in the open now. They're confident they can get away with it without pushback because the ground was laid long ago. But eugenics didn't suddenly become bad again because coarse people started saying the quiet part out loud. It's always been bad thinking, bad science, and bad morality.


                            8 ★ 4 ↺
                            Electrojcr boosted

                            Anthony » 🌐
                            @abucci@buc.ci

                            Here's a hot take on Microsoft Recall: it's an attempt to create a new data source to exploit, because the internet as a data source has been squeezed of most of its value to large-language-model-based AI and there is no other ready-to-use, large-scale, human-generated data source. Imagine millions, billions of people generating data every time they touch their computers; that's a big data source with some amount of built-in human curation. High quality for AI.

                            I've written before on here about my favorite metaphor, eating your seed corn ( https://buc.ci/abucci/p/1705679109.757852 ), but I also think there's a decent analogy with peak oil as well. Microsoft Recall is the tar sands and oil shale of the "data is the new oil" era. The internet had the easy fields; now we're moving on to the dirty, dangerous, environment-destroying ones. If the pattern follows that of oil, wells will be drilled closer and closer together over time to slurp out value as rapidly as possible at the expense of long-term field health.