buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #hype

AodeRelay boosted

Freezenet » 🌐
@freezenet@noc.social

Like Clockwork: AI Hype Machine Spins Up After Bubble Cracks Form

It was something I suspected would happen, but the AI hype machine is roaring at full speed once again.

freezenet.ca/like-clockwork-ai

    AodeRelay boosted

    RememberUsAlways » 🌐
    @RememberUsAlways@newsie.social

    It was cute when US corporate was pushing "Democrats want to " for a hot minute until the discussion landed in .
    Republicans and their loyalist agents online will hype up anything they can to distract from and their national security failures stacked on top of civil rights violations.

    !

      65dBnoise » 🌐
      @65dBnoise@mastodon.social

      @gutenberg_org
      Yes, only they did not use "AI".
      The paper itself doesn't even once mention the words "AI" or "Artificial Intelligence". Not once. And that's done purposefully by the researchers, who respect their own work and the colleagues/readers reading their paper. Their model used ModernBERT, not GPT or any other LLM.

      The use of vague, over-hyped marketing buzzwords like "AI" obscures the work of the researchers, who went to great lengths in their paper to describe how they did it.

        AodeRelay boosted

        Michał "rysiek" Woźniak · 🇺🇦 » 🌐
        @rysiek@mstdn.social

        In unrelated news, Microsoft is asking Microsoft Windows users to uninstall a recent Microsoft Windows update, issued and published by Microsoft, because said update is breaking Microsoft Windows.
        windowscentral.com/microsoft/w

        Every single piece about this should be mentioning how Satya Nadella bragged how 30% of new Microslop code is AI-generated:
        cnbc.com/2025/04/29/satya-nade

        Not providing this context is journalistic malpractice.

          AodeRelay boosted

          Michał "rysiek" Woźniak · 🇺🇦 » 🌐
          @rysiek@mstdn.social

          Microslop's CEO is on a roll!

          A few weeks ago he begged us to stop using "slop" because it makes AIs sad. Days ago he complained "AI boom might falter" if we don't start using more spicy autocomplete. :blobeyes:

          Now he's begging developers to "do something useful" with lying machines, or they might lose the "social permission" to boil the planet. :blobcatgiggle:
          techradar.com/ai-platforms-ass

          What, are Microslop's Slopilot services not useful enough on their own? :blobcatthink:

            AodeRelay boosted

            Michał "rysiek" Woźniak · 🇺🇦 » 🌐
            @rysiek@mstdn.social

            This is interesting, but not for the obvious reasons:

            The era of Photoshop may be ending, as Adobe stocks take a battering
            finance.yahoo.com/news/era-pho

            tl;dr Adobe's stock is down because "AI" so Yahoo concludes this is the end of the "Photoshop era".

            What's interesting here is that "Photoshop era" is ending – according to Yahoo – not because users are turning away. It's because *investors* are.

            The "market" for software and services is a side-show. Only the stock market matters.

            1/2

              AodeRelay boosted

              65dBnoise » 🌐
              @65dBnoise@mastodon.social

              How hype muddies the work of scientists (apparently) without their approval.

              NASA's scientists have created a tool that uses Convolutional Neural Networks to validate Kepler and TESS exoplanet signals. The 28-author paper, iopscience.iop.org/article/10., a treatise on how to use NNs to reliably perform difficult classification tasks, describes in painstaking detail the process of preparing the data and extracting various features.

              𝐓𝐡𝐞𝐫𝐞 𝐢𝐬 𝐧𝐨 𝐦𝐞𝐧𝐭𝐢𝐨𝐧 𝐰𝐡𝐚𝐭𝐬𝐨𝐞𝐯𝐞𝐫 𝐨𝐟 𝐀𝐈 𝐢𝐧 𝐭𝐡𝐞 𝐩𝐚𝐩𝐞𝐫.

              But

              1/

                AodeRelay boosted

                Michał "rysiek" Woźniak · 🇺🇦 » 🌐
                @rysiek@mstdn.social

                AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns
                irishtimes.com/business/2026/0

                Oh dear me, are people not "adopting" lying autocomplete widely enough to keep the line going up? Oh noes. Nobody could have foreseen this! 🤯

                I cannot wait for the official announcement of a new programme, Adopt-an-AI. 🤣

                What an absolute tool. "The bubble might burst if you all don't help pump it!" 🤡

                  Michał "rysiek" Woźniak · 🇺🇦 » 🌐
                  @rysiek@mstdn.social

                  If anyone ever tries to tell you LLMs are just as good (or better!) at generating text (or code) as humans are at creating text (or code), ask them about "dogfooding".

                  Dogfooding means training LLMs on their own output. It is absolutely disastrous to such models:
                  nature.com/articles/s41586-024

                  Every "AI" company will have layers upon layers of defenses against LLM-generated text ending up in training data.

                  Which is why they desperately seek out any and all human-created text out there.
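The dogfooding failure mode described above can be sketched in a toy simulation (my own illustrative construction, not the cited Nature paper's setup): "train" a unigram model on a corpus, sample a new corpus from it, retrain on the samples, and repeat. A token the model never happens to emit can never come back, so diversity only ever shrinks.

```python
import random
from collections import Counter

def train(corpus):
    """'Train' a unigram model: the empirical token distribution of its corpus."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    """Sample n tokens from the model; tokens it never emits are gone for good."""
    toks = list(model)
    weights = [model[t] for t in toks]
    return rng.choices(toks, weights=weights, k=n)

rng = random.Random(42)
vocab = list(range(100))                           # 100 distinct "tokens"
corpus = [rng.choice(vocab) for _ in range(100)]   # generation 0: human-written text

support = []
for gen in range(500):
    model = train(corpus)
    support.append(len(model))              # how many tokens the model still knows
    corpus = generate(model, 100, rng)      # next model trains on pure model output

print(support[0], support[-1])
```

Run it and the vocabulary the model "knows" collapses toward a handful of tokens, which is a crude picture of why labs filter synthetic text out of training data.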

                    AodeRelay boosted

                    Michał "rysiek" Woźniak · 🇺🇦 » 🌐
                    @rysiek@mstdn.social

                    Remember how we were all supposed to be "left behind" if we don't jump on the Metaverse bandwagon? Especially businesses?

                    Yeah, about that:
                    theverge.com/tech/863209/meta-

                    But today we should treat absolutely seriously all the bullshit about "being left behind" if we don't adopt "AI"! 🤡

                      Paul SomeoneElse » 🌐
                      @pkw@snac.d34d.net

                      I am bigoted against LLMs.
                      I know it and it is how I deal with the hype.
                      Meaning I can't constantly be reasonable and
                      figure out what is meant and the context of LLMs
                      and what people think about them.
                      It drains my brain power from other things.

                      And I have continually found nothing compelling.

                      Worse, I have typically found very frustrating
                      examples of people using very strong but implied
                      assumptions and using logic that depends utterly
                      on using blinders and ignoring reason.

                      Until the hype dies, I am not interested in them.

                      I am still interested in the old AI stuff like
                      for example path finding, NNs, and markov chains.


                        AodeRelay boosted

                        Brian Greenberg :verified: » 🌐
                        @brian_greenberg@infosec.exchange

                        Stanford AI Experts Predict What Will Happen in 2026...
                        For the last few years, we obsessed over whether AI could do the thing. 2026 is when the grown-up questions take over:
                        ・ How well does it actually work?
                        ・ What does it really cost?
                        ・ And who benefits when the hype fades?

                        The shift is overdue. Expect fewer jaw-dropping demos and more uncomfortable audits of where AI truly boosts productivity and where it mostly boosts slide decks.

                        Geopolitics will stop being a footnote and start shaping the stack. More countries will demand that their data, models, and compute stay on home soil. “Global by default” will quietly disappear.

                        In science and medicine, the mood changes from wow to why. Getting the right answer is no longer enough. If a model is correct, people will want to know what inside it did the work. And in law and other high-stakes domains, the winners will not be the flashiest tools. They will be the ones that survive domain-specific scrutiny and function inside real workflows, not just pristine demos.

                        The AI era is not ending.
                        The experimentation phase is.

                        TL;DR
                        🧠 Evaluation replaces evangelism
                        ⚡ AI sovereignty accelerates
                        🎓 Dashboards track work impact
                        🔍 Open the black box

                        hai.stanford.edu/news/stanford

                          Awet Tesfaiesus, MdB » 🌐
                          @AwetTesfaiesus@mastodon.social

                          Yesterday I read plenty about this in my feed. The tenor: it is all hype / a bubble! I (certainly) don't want to wade into the discussion of how true that is.

                          Instead, let me offer a reminder: bubble or not, the (economic, but not only economic) effects are already very real: for , for and voice artists, for entry-level office jobs! We are facing a tsunami of immiseration there.

                            AodeRelay boosted

                            Foncu 🐞🌹 » 🌐
                            @foncu@neopaquita.es

                            AI, a discipline to which I have dedicated practically 20 years of research, is being devoured by the capitalist logics that afflict so many of the other things that work badly in our lives. The hype, the bubble, the fad, the insatiable appetite, or, as we call it in , the .

                            AI is based on statistics: on identifying patterns in data sets that help predict some variables as a function of others. All of this is achieved by building calculations from combinations of the input data that approximate the value we want to predict, and by defining an objective function: a way of measuring how far the prediction is from the correct answer, so that we can tweak the calculations slightly to get closer and closer.

                            Well, if we are not careful, we can define that evaluation in such a way that the calculations end up making a trivial prediction: the mode of the distribution, the value that repeats the most. Always predicting that it is raining in Buenaventura, Colombia guarantees being right 260 out of 365 days a year, but it makes the model useless, because it doesn't actually predict anything; it just repeats the mode.

                            This also happens with generative AIs, from GANs to language models. And the problem gets worse if data generated with AIs is used to train other AIs, guaranteeing modal collapse. The AI finds that it can optimize from : if I bet everything on the value that gives me the most reward, I don't have to worry about seeing beyond it.
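The Buenaventura example in the post above can be checked in a couple of lines (a minimal sketch; the 260-of-365 figure is the post's own):

```python
# The "always predict the mode" baseline from the post: high accuracy, zero skill.
rainy_days, total_days = 260, 365

# A model that always predicts "rain" is right whenever it actually rains...
accuracy = rainy_days / total_days
print(f"always-predict-rain accuracy: {accuracy:.1%}")

# ...but its output never depends on the input: it detects 0% of dry days,
# so it conveys no information about any particular day.
dry_days_detected = 0
dry_day_recall = dry_days_detected / (total_days - rainy_days)
print(f"recall on dry days: {dry_day_recall:.0%}")
```

This is why accuracy alone is a misleading metric on imbalanced data: the trivial mode-predictor sets a high floor that a model must beat before it has predicted anything at all.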

                              AodeRelay boosted

                              Freezenet » 🌐
                              @freezenet@noc.social

                              Video: The Great AI Scam – How One Industry Fooled So Many for So Long

                              Welcome to episode 4 of my news videos. Today’s topic is how the AI industry is pushing its overall scam on people.

                              freezenet.ca/video-the-great-a

                                AodeRelay boosted

                                Kuketz-Blog 🛡 » 🌐
                                @kuketzblog@social.tchncs.de

                                The Federal Minister for Digital Affairs, in his speech: "For the first time, machines surpass humans in the very thing that has made us unique until now: our intelligence."

                                Yikes. Pocket calculators "surpass" me too, yet they are not mathematicians. AI is statistics: predicting, imitating, sounding plausible. Understanding? Zero. Consciousness? Zero. Responsibility? Zero. Anyone who speaks of "intelligence" here mainly demonstrates how little of it they understand.

                                bmds.bund.de/aktuelles/reden/d

                                /kuk

                                  AodeRelay boosted

                                  LinuxNews.de » 🌐
                                  @linuxnews@social.anoxinon.de

                                  I'll be glad when the bubble finally bursts.

                                  Advantages:
                                  - Hard drive prices go down
                                  - Annoying AI buttons everywhere disappear
                                  - Power consumption drops
                                  - Deepfakes and AI slop decrease
                                  - Art becomes art again
                                  - The financial market calms down (AI stocks are currently the sh*t)

                                  Disadvantages: people have to think for themselves again…

                                    AodeRelay boosted

                                    David Haigh » 🌐
                                    @leanlearnlead@mstdn.ca

                                    I took a course with Dr Blit. His opening statements were, "I just sold all of my tech stocks", "AI is the new electricity", and "generative AI can do everything in descriptive analytics".

                                    That was all I needed to know that he was a hype artist on par with , , and .

                                    Try not to laugh too hard at the hyperbole.

                                    thestar.com/business/opinion/l

                                      4 ★ 2 ↺
                                      St. Chris boosted

                                      Anthony » 🌐
                                      @abucci@buc.ci

                                      Software "agents" were a hype-y topic when I was a graduate student 25 years ago. I wrote one for a class. I feel like what's being called "agents" or "AI agents" these days are even less capable than what seemed possible a quarter of a century (1) ago when I was in school.

                                      What I thought then is still true today: to make something like a software agent legitimately useful for a lot of people would require a large amount of low-level grunt work and non-technical work (2) of the sort that the typical Silicon Valley company is unwilling to do. (3) The technology is the absolute easiest part of this task. Throwing a Bigger Computer at the problem leaves all those other pieces of work undone. It's like putting a bigger engine in a car with no wheels, hoping that'll make the car go.

                                      By the way, companies and VCs: I'm available for contract work and have done due diligence research before, if you ever want to stop wasting everyone's time and money!

                                      (1) Which we've been told repeatedly is essentially infinite time in the tech world.
                                      (2) Establishing semantic data standards and convincing a large enough number of people to implement them being an important component. LLMs do not magically develop protocols and solve all the ETL-style problems of translating among different ones. The Semantic Web didn't really stick for a lot of reasons, but one reason is that it's hard!
                                      (3) Back when I was still in the startup world I was asked several times by VCs to tell them what I thought about some new startup that claimed to be able to magically clean and fuse data. I think they're still very keen on investing in this style of magic, because it requires an intense amount of human labor, but I think where companies landed was invisibilizing low-paid workers in other countries and pretending a computer did the work they did. Which has also been happening for well over a quarter of a century.

                                        17 ★ 16 ↺

                                        Anthony » 🌐
                                        @abucci@buc.ci

                                        You are not doing science research if you stuff an LLM into the critical path of your experiments. You are, instead, producing science-shaped artifacts with the same peripheral relationship to science that LLM text output has to truth.

                                        The reason for this shouldn't be hard to see but apparently is. Simplistically, science is about hypothesis-driven investigation of research questions. You formulate the question first, you derive hypotheses from it, and then you make observations designed to tell you something about the hypotheses. (1)(2) If you stuff an LLM in what should be the observations part, you are not performing observations relevant to your hypothesis, you are filtering what might have been observations through a black box. If you knew how to de-convolve the LLM's response function from the signal that matters to your question, maybe you'd be OK, but nobody knows how to do that. (3)

                                        If you stick an LLM in the question-generating part, or the hypothesis-generating part, then forget it, at that point you're playing a scientistic video game. The possibility of a scientific discovery coming out of it is the same as the possibility of getting physically wet while watching a computer simulation of rain. (4)

                                        If you stick an LLM in the communication part, then you're putting yourself on the Retraction Watch list, not communicating.

                                        (1) I know this is a cartoonishly simple view of science, but I do firmly believe that something along these lines is the backbone of it, however real-world messy it becomes in practice.
                                        (2) A large number of computer scientists are very sloppy about this process--and I have been in the past too--but that does not mean it should be condoned.
                                        (3) Things are so dire that very few even seem to have the thought that this is something you should try to do.
                                        (4) Yes, you might discover something while watching the LLM glop, but that's you, the human being, making the discovery, not the AI, in a chance manner despite the process, not in a systematic manner enhanced by the process. You could likewise accidentally spill a glass of water on yourself while watching RainSim.

                                          4 ★ 4 ↺

                                          Anthony » 🌐
                                          @abucci@buc.ci

                                          The other day I had another conversation in which someone said that AI made them more productive, and after I asked a few questions they admitted maybe 80% of the output is OK and they have to check and double-check everything. Then when I asked the obvious followup, "wouldn't it be easier just to do it yourself from the beginning instead of having to put in all these safeguards and worry about whether you missed errors?" they had no real answer.

                                          I feel like people have been sold the idea that AI must provide productivity gains, and many don't bother to examine whether it really does.


                                            4 ★ 5 ↺

                                            Anthony » 🌐
                                            @abucci@buc.ci

                                            DeepSeek launched a free, open-source large-language model in late December, claiming it was developed in just two months at a cost of under $6 million — a much smaller expense than the one called for by Western counterparts.

                                            These developments have stoked concerns about the amount of money big tech companies have been investing in AI models and data centers, and raised alarm that the U.S. is not leading the sector as much as previously believed.

                                            The "Western counterparts" are claiming training a model might take years and billions of dollars. This has always been a hyped-up grift, with snake oil salesmen and con artists being showered with money and power. It's really quite amazing how profoundly unintelligent "the market" is in practice.

                                            The sad reality is that the US could lead in this field (1), if we'd stop routinely putting narcissists and con artists in charge and showering them with praise even when they fail.

                                            From https://www.cnbc.com/2025/01/27/nvidia-falls-10percent-in-premarket-trading-as-chinas-deepseek-triggers-global-tech-sell-off.html

                                            (1) Putting aside whether we should, which is an important question.

                                              1 ★ 3 ↺

                                              Anthony » 🌐
                                              @abucci@buc.ci

                                              The influence of powerful imagery and rhetorics in promotional material for computing is neither new nor surprising. There is a longstanding tradition of overselling the latest technology, claiming it to be the next (industrial) revolution or promising that it will outperform human beings. With the passage of time it may become difficult to recognize these invented ideas and images that have acquired a life of their own and have become integrated as part of a historical narrative. As modern, digital electronic computing is nearing its 100th anniversary, such recognition does not become easier, though we may be in need of it more than ever before.

                                              This particular case, where the praise of automatic programming implied the obsolescence of the coder, can be instructive for us today. There is a line that runs from Grace Hopper’s selling of “automatic coding” to today’s promises of large AI models such as Chat-GPT for revolutionizing computing by automating programming or even making human programmers obsolete.19,20 Then as now, it is certainly the case that the automation of some parts of programming is progressing, and it will upset or even redefine the division of labor. However, this is not a simple straightforward process that replaces the human element in one or more specific phases of programming by the computer itself. Rather, practice adopts new techniques to assist with existing tasks and jobs. Such changes do not generalize easily, and using titles like “coders”—or today’s “prompt engineers”—while memorable, does not do justice to the subtle process of changing practice.

                                              From https://cacm.acm.org/opinion/the-myth-of-the-coder/


                                                12 ★ 7 ↺

                                                Anthony » 🌐
                                                @abucci@buc.ci

                                                The bubble has begun to burst. Users have lost faith, clients have lost faith, VC’s have lost faith.

                                                GenAI bubble, November 2022 - July 2024, RIP.

                                                From: Five signs that the GenAI honeymoon is over
                                                https://garymarcus.substack.com/p/five-signs-that-the-genai-honeymoon


                                                  17 ★ 7 ↺

                                                  Anthony » 🌐
                                                  @abucci@buc.ci

                                                  so much of the promise of generative AI as it is currently constituted, is driven by rote entitlement.

                                                  Very nice analysis by Brian Merchant ( @brianmerchant@mastodon.social ) here: https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

                                                  He puts into clear terms what had previously been an unarticulated, creeping suspicion I had about . Clearly there are many angles from which to come at what's going on with , but I appreciate this one quite a bit.


                                                    14 ★ 8 ↺

                                                    Anthony » 🌐
                                                    @abucci@buc.ci

                                                    Sorry. Your Car Will Never Drive You Around.

                                                    Brutal takedown of the bullsh&*# that is "self-driving cars": https://www.youtube.com/watch?v=2DOd4RLNeT4

                                                    It's a long video but frankly you can get the gist of most of it by scanning over the chapter titles. "Hitting Fake Children". "Hitting Real Children". "FSD Expectations" is a long list of the various lies Tesla has told about "full self driving" Teslas. Also the "Openpilot" chapter has a picture of Elon Musk's face as a dartboard.

                                                    The endless hype and full-on lies of the self-driving-car con from roughly 2016 to 2020 resemble the hype about AI going on right now. If you've been in this industry long enough and have been honest with yourself about it, you've seen all this before. Until something significant changes we really ought to view anything coming out of the tech sector with deep suspicion (https://bucci.onl/notes/Another-AI-Hype-Cycle).


                                                      2 ★ 1 ↺
                                                      emenel boosted

                                                      Anthony » 🌐
                                                      @abucci@buc.ci

                                                      Regarding the last boost, Kickstarter bizarrely pivoting to blockchain because they were essentially paid to: I wonder how much of the AI hype is being paid for too.

                                                        3 ★ 4 ↺

                                                        Anthony » 🌐
                                                        @abucci@buc.ci

                                                        Honestly, besides all the basic economic reasons that this cannot last, the application area is one math theorem away from imploding.

                                                        All it'd take is one clever math result demonstrating you don't need absolutely gigantic neural networks trained on mind-bogglingly-huge datasets to achieve the AI goals of most companies, and NVIDIA's hardware dominance evaporates. Why would you spend thousands or tens of thousands of dollars on a GPU that uses 300 Watts of power when you could achieve the same thing with an ASIC or FPGA that uses 3 Watts? This is already true for many applications, but apparently it hasn't been widely realized yet. It'll be hard to ignore if/when it becomes true for the vast majority of applications. Which it could.
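The power argument in the post reduces to simple arithmetic. A rough sketch (the 300 W and 3 W figures are the post's own; electricity price and 24/7 utilization are my own illustrative assumptions):

```python
# Annual electricity cost of running the same inference workload on a ~300 W
# GPU versus a ~3 W ASIC/FPGA. Price and utilization are assumed, not measured.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # USD, an assumed average rate

def annual_energy_cost(watts: float) -> float:
    """Energy cost in USD for a device drawing `watts` continuously for a year."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

gpu = annual_energy_cost(300)
asic = annual_energy_cost(3)
print(f"GPU ${gpu:.0f}/yr vs ASIC ${asic:.2f}/yr ({gpu / asic:.0f}x)")
```

Whatever price you plug in, the ratio stays 100x, which is the point: the energy bill tracks the wattage gap, not the sticker price of electricity.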



                                                          2 ★ 0 ↺

                                                          Anthony » 🌐
                                                          @abucci@buc.ci

                                                          Their market valuation jumped by over $200 billion in one quarter. They are reporting profit margins near 75%. This is a broken market--a bubble--and it cannot last. It will burst eventually and it's looking like it could take trillions of dollars with it. Since NVIDIA is largely growing like this because of the widespread use of its hardware in AI / LLM applications, these applications are driving the bubble.

                                                          This is very reminiscent of the dot-com bubble, which expanded and then popped when I was in my early 20s.


                                                            0 ★ 1 ↺

                                                            Anthony » 🌐
                                                            @abucci@buc.ci

                                                            Regarding the last boost, Nature should not be playing the patsy by uncritically sharing AI hype. It already made sense to be skeptical of what was published in it, but now that it's brimming over with these breathless and absurd claims about AI, it's about to earn a place in the "don't read unless you have to" column.


                                                              3 ★ 4 ↺
                                                              planetscape boosted

                                                              Anthony » 🌐
                                                              @abucci@buc.ci

                                                              From https://danmcquillan.org/house_of_lords.html
                                                              The greatest risk is that large language models act as a form of ‘shock doctrine’, where the sense of world-changing urgency that accompanies them is used to transform social systems without democratic debate.

                                                              From https://danmcquillan.org/ai_thatcherism.html
                                                              One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI's shoddy emulation of real tasks as an excuse to trim their workforce.

                                                              Same source:
                                                              Thanks to its insatiable appetite for data, current AI is uneconomic without an outsourced global workforce to label the data and expunge the toxic bits, all for a few dollars a day. Like the fast fashion industry, AI is underpinned by sweatshop labour.

                                                              Dan McQuillan (@danmcquillan@kolektiva.social) has been excellent on AI hype.