buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc., all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email: abucci@bucci.onl
Admin account: @abucci@buc.ci
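
Because the instance federates over ActivityPub, accounts here can be looked up from any other server via the standard WebFinger endpoint (RFC 7033). Below is a minimal lookup sketch in Python using only the standard library, resolving the admin account listed above to its ActivityPub actor URL; error handling is kept to a minimum and the helper name is my own:

    import json
    import urllib.parse
    import urllib.request

    def webfinger(handle: str) -> str:
        """Resolve a Fediverse handle (user@host) to its ActivityPub actor URL."""
        user, host = handle.lstrip("@").split("@")
        query = urllib.parse.urlencode({"resource": f"acct:{user}@{host}"})
        url = f"https://{host}/.well-known/webfinger?{query}"
        with urllib.request.urlopen(url) as resp:
            jrd = json.load(resp)
        # The JRD's "self" link of type application/activity+json points at the actor document.
        for link in jrd.get("links", []):
            if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
                return link["href"]
        raise ValueError(f"no ActivityPub actor link found for {handle}")

    print(webfinger("@abucci@buc.ci"))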

Search results for tag #dotcom

AodeRelay boosted

Jason Tubnor 🇦🇺 » 🌐
@Tubsta@soc.feditime.com

I lost track of how much equipment I used SETI@Home on for burn-in during the #dotcom era. It certainly helped my status on the leader board.

#UltraSPARC #Alpha, all the good stuff back then when computing was fun #setiathome https://news.berkeley.edu/2026/01/12/for-21-years-enthusiasts-used-their-home-computers-to-search-for-et-uc-berkeley-scientists-are-homing-in-on-100-signals-they-found/

    AodeRelay boosted

    ⚯ Michel de Cryptadamus ⚯ » 🌐
    @cryptadamist@universeodon.com

    kind of incredible that if you bought stock in MicroStrategy¹ 25 years ago, well before it turned itself into the 500 lb gorilla of the “borrow money to buy bitcoin” industry but only shortly before Michael Saylor’s first² #dotcom era accounting fraud was uncovered and he set a world record for “most money lost in a single day”, you’d currently still have lost money.

    ¹ sorry i just can’t call it the new name (“Strategy”) with a straight face

    ² there’s been a few accounting frauds at this point, most recently a $50 million tax fraud settlement last year. almost makes you think this guy might not be on the up and up.

    [Image: chart showing that the price of MSTR is now below the maximum price it reached in the dotcom bubble]

      3 ★ 4 ↺

      Anthony » 🌐
      @abucci@buc.ci

      Honestly, besides all the basic economic reasons that this cannot last, the application area is one math theorem away from imploding.

      All it'd take is one clever math result demonstrating that you don't need absolutely gigantic neural networks trained on mind-bogglingly huge datasets to achieve the AI goals of most companies, and NVIDIA's hardware dominance evaporates. Why would you spend thousands or tens of thousands of dollars on a GPU that draws 300 watts when you could achieve the same thing with an ASIC or FPGA that draws 3 watts? This is already true for many applications, but apparently it hasn't been widely recognized yet. It'll be hard to ignore if/when it becomes true for the vast majority of applications. Which it could.
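
      To put rough numbers on the power argument, here is a back-of-the-envelope sketch in Python. The hardware prices, duty cycle, and electricity rate below are illustrative assumptions, not figures from the post; only the 300 W and 3 W draws come from it.

          # Year-one cost of running inference hardware 24/7.
          # Assumed (illustrative): GPU at $10,000, ASIC/FPGA at $500,
          # electricity at $0.15/kWh. The 300 W and 3 W power draws are
          # the figures cited in the post above.
          HOURS_PER_YEAR = 24 * 365
          PRICE_PER_KWH = 0.15  # USD, assumed

          def yearly_energy_cost(watts: float) -> float:
              """Annual electricity cost of a device drawing `watts` continuously."""
              return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

          gpu_year_one = 10_000 + yearly_energy_cost(300)  # ~= $10,394
          asic_year_one = 500 + yearly_energy_cost(3)      # ~= $504

          print(f"GPU,  year one: ${gpu_year_one:,.0f}")
          print(f"ASIC, year one: ${asic_year_one:,.0f}")

      Even setting the purchase price aside, the energy gap alone is a factor of 100 at these assumed rates: roughly $394/year versus $4/year.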