buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #cs

65dBnoise » 🌐
@65dBnoise@mastodon.social

Code is a liability, not an infinitely reproducible machine that requires no labor inputs to operate. It is a brittle machine that eventually "wears out" and needs a top-to-bottom refactoring to remain in good working order.

It's code's capabilities that are assets.

«"Writing code" is about making code that runs well. "Software engineering" is about making code that fails well»

And then came .

@pluralistic nails it:
pluralistic.net/2026/01/06/100

    AodeRelay boosted

    petersuber » 🌐
    @petersuber@fediscience.org

    Kudos to the @ACM:

    "Beginning January 2026, all ACM publications and related artifacts in the ACM Digital Library will be made open access."
    dl.acm.org/openaccess

      AodeRelay boosted

      Oliver D. Reithmaier » 🌐
      @odr_k4tana@infosec.exchange

      LMAO ACM didn't get it. But hey, good luck finding people who want to pay for this. Not gonna be me.

        AodeRelay boosted

        Kevin Karhan :verified: » 🌐
        @kkarhan@infosec.space

        @green that's the neat part - NO ONE DID!

        • Just like classes: I (and most others) never had it.

        And in 2019 I saw the results of that: a person certified ready for university but unable to use a mouse and a keyboard.

        • Not Linux. Not a Mac. Just not knowing how to use a Desktop PC, turn it on, type in letters and click on things.

        And that person wasn't some impoverished, barely-got-started-in-school-in-time kinda person with run-down clothing, barely able to tie their laces! But someone able to afford the latest iPhone and decked out in drip that was easily 4 digits.

          3 ★ 3 ↺

          Anthony » 🌐
          @abucci@buc.ci

          This is a long time coming. I've been posting about the decline of arXiv's CS category for a long time now, and even had a few conversations with someone I know who works there about it. Personally, I think the slop started in 2018--prior to generative AI slop--when the CS category at arXiv began the unsustainable exponential growth in submissions that has continued till today. An increasing number of what amounted to corporate whitepapers and other marketing materials were being posted on arXiv to give them the appearance of scientific credibility. There was a fairly clear arXiv-to-Nature pipeline. Citation counts were pumped as some of the scientometric services count arXiv "articles" as citations, and some researchers adopted the bad scholarly habit of citing arXiv preprints instead of the final publication. It was and still is a mess. My understanding is that arXiv was meant as a place for people to put high-quality but pre-publication articles, but at least in the CS category it's drifted quite far from that.

          I gather they've finally taken this measure because of the preponderance of AI-generated slop, but with any luck these other issues will improve too. The arXiv press release states “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv” so it does sound like they are acknowledging the other problems and intend to enforce their rules more strictly in the future.

          "arXiv says it will no longer accept Computer Science papers that are still under review due to the wave of AI-generated ones it has received."
          From https://infosec.exchange/users/josephcox/statuses/115486903712973154


            AodeRelay boosted

            petersuber » 🌐
            @petersuber@fediscience.org

            Update. Here's how arXiv is dealing with a similar problem in computer science.
            blog.arxiv.org/2025/10/31/atte

            "Before being considered for submission to arXiv’s CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review…In the past few years, arXiv has been flooded with papers. Generative AI / LLMs have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write. While categories across arXiv have all seen a major increase in submissions, it’s particularly pronounced in arXiv’s CS category."

              4 ★ 4 ↺

              Anthony » 🌐
              @abucci@buc.ci

              Just to clarify the point I was making yesterday about arXiv, below I've included a plot from arXiv's own stats page https://info.arxiv.org/help/stats/2021_by_area/index.html . The image contains two charts side-by-side. The chart on the left is a stacked area chart tracking the number of submissions to each of several arXiv categories through time, from 1991 to 2021. I obtained this screenshot today; arXiv's site, at time of writing, says the chart had been updated 3 January 2022. The caption to this plot on the arXiv page I linked has more detail about it.

              What you're seeing here is that for most categories, there is a linear increase in the number of submissions to the category year-over-year up until the end of the data series in 2021. Computer science is dramatically different: its increase looks exponential, and it looks like its rate of increase may have accelerated circa 2017. The chart on the right, which is the same data shown proportional instead of as raw counts, suggests computer science might be "eating" mathematics starting around 2017.
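              As a toy illustration (made-up numbers, not arXiv's actual counts), the difference between the two growth shapes in the chart is easy to check: a linear series has roughly constant year-over-year differences, while an exponential series has roughly constant year-over-year ratios.

```python
# Hypothetical toy series (NOT arXiv's actual submission counts),
# illustrating how to tell linear from exponential growth:
# linear growth -> roughly constant year-over-year differences,
# exponential growth -> roughly constant year-over-year ratios.

linear = [100, 150, 200, 250, 300]       # +50 each year
exponential = [100, 150, 225, 338, 506]  # ~1.5x each year


def diffs(xs):
    """Year-over-year differences."""
    return [b - a for a, b in zip(xs, xs[1:])]


def ratios(xs):
    """Year-over-year ratios, rounded to 2 decimals."""
    return [round(b / a, 2) for a, b in zip(xs, xs[1:])]


print(diffs(linear))        # constant differences => linear
print(ratios(exponential))  # roughly constant ratios => exponential
```

On real data you would apply the same check to the per-category submission counts; the CS curve's near-constant ratio is what makes it stand out from the other categories' near-constant differences.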

              2017 is around when generative AI papers started to appear in large quantities. There was a significant advance in machine learning, published around 2018 but known before then, that made deep learning significantly more effective. Tech companies were already pushing this technology. OpenAI (the GPT maker) was founded in 2015; GPT-2 was released in early 2019. arXiv's charts don't show this, but I suspect these factors play a role in the seeming phase shift in their CS submissions in 2017.

              We don't know what 2022 and 2023 would look like on a chart like this but I expect the exponential increase will have continued and possibly accelerated.

              In any case, this trend is extremely concerning. The exponential increase in the number of submissions to what is supposed to be an academic pre-print service is not reasonable. There hasn't been an exponential increase in the number of computer scientists, nor in research funding, nor in research labs, nor in the output-per-person of each scientist. Furthermore, these new submissions threaten to completely swamp all other material: before long, computer science submissions will dwarf those of all other fields combined; since this chart stops at 2021, they may have already! arXiv's graphs do not break down the CS submissions by subtopic, but I suspect they are in the machine learning/generative AI/LLM space and that submissions on these topics dwarf the other subdisciplines of computer science. Finally, to the extent that arXiv has quality controls in place for its archive, these can't possibly keep up with an exponentially increasing rate of submissions. They will eventually fail if they haven't already (as I suggested in a previous post, I think there are signs that their standards are slipping; perhaps that started circa 2017, and that's partly why the rate of submissions accelerated then?).
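              A back-of-the-envelope sketch of that last point, with entirely hypothetical numbers (the starting volume, growth factor, and reviewer capacity below are all made up for illustration): once submissions grow exponentially while review capacity stays flat, the unreviewed backlog itself starts growing exponentially.

```python
# Illustrative sketch (hypothetical numbers, not arXiv data): a
# fixed-capacity moderation pipeline facing exponentially growing
# submissions accumulates an exponentially growing backlog.

def backlog(years, start=10_000, growth=1.4, capacity=12_000):
    """Return the unreviewed backlog at the end of each year, assuming
    submissions grow by `growth`x per year while reviewer capacity
    stays flat at `capacity` papers/year."""
    queue = 0.0
    submissions = float(start)
    out = []
    for _ in range(years):
        # Whatever capacity can't absorb this year piles up in the queue.
        queue = max(0.0, queue + submissions - capacity)
        out.append(round(queue))
        submissions *= growth
    return out


print(backlog(6))
```

With these made-up parameters the queue is empty in year one and then compounds every year after; no realistic linear increase in reviewers changes that qualitative outcome, which is the point of the argument above.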


              Description is in the body of the post.


                14 ★ 8 ↺

                Anthony » 🌐
                @abucci@buc.ci

                I am legitimately saddened by how many graduate students, postdocs, and university professors altered their research direction to encompass large language models because of the attention they've been receiving in other areas of life outside of academia. Whether it's to critique them, enhance them, use them, or something else, I view it as a sign that as a research discipline, computer science is unhealthy. We call them "research disciplines" because they're meant to be disciplined about this sort of buffeting.

                Diversity of ideas is important. Sticking with a research program long enough to see it through is also important. Changing up what you're working on every time Silicon Valley ejects a new artifact that gets news coverage endangers both of those values.

                And holy hell is the monotony boring. Computer science is an interesting, sprawling field with a lot going on! Let's keep it that way!

                I'm aware that over the last year or so I've been a critic of the hype, so I too am reacting to it. Lately I've been considering changing that up.