buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
Like Clockwork: AI Hype Machine Spins Up After Bubble Cracks Form
It was something I suspected would happen, but the AI hype machine is roaring at full speed once again.
https://www.freezenet.ca/like-clockwork-ai-hype-machine-spins-up-after-bubble-cracks-form/
#Business #News #AI #hype #investing #journalism #Media #StockMarkets
It was cute when US corporate media was pushing "Democrats want to #abolishice" for a hot minute, until the discussion landed in #119congress.
Republicans and their loyalist agents online will #hype up anything they can to distract from #Epstein and their national security failures stacked on top of civil rights violations.
#ICEout
#NoKings !
@gutenberg_org
Yes, only they did not use #AI.
The paper itself doesn't mention the words "AI" or "Artificial Intelligence" even once. Not once. And that's deliberate on the part of the researchers, who respect their own work and the colleagues and readers reading their paper. Their model used ModernBERT, not GPT or any other LLM.
The use of vague, over-hyped marketing buzzwords like "AI" obscures the work of researchers who went to great lengths in their paper to describe how they did it.
In unrelated news, Microsoft is asking Microsoft Windows users to uninstall a recent Microsoft Windows update, issued and published by Microsoft, because said update is breaking Microsoft Windows.
https://www.windowscentral.com/microsoft/windows-11/microsoft-urges-uninstalling-the-update-kb5074109-after-bug-reports-on-windows-11-heres-how
Every single piece about this should be mentioning how Satya Nadella bragged that 30% of new Microslop code is AI-generated:
https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-as-30percent-of-microsoft-code-is-written-by-ai.html
Not providing this context is journalistic malpractice.
Microslop's CEO is on a roll!
A few weeks ago he begged us to stop using "slop" because it makes AIs sad. Days ago he complained the "AI boom might falter" if we don't start using more spicy autocomplete.
Now he's begging developers to "do something useful" with lying machines, or they might lose the "social permission" to boil the planet.
https://www.techradar.com/ai-platforms-assistants/microsoft-ceo-urges-ai-developers-to-get-to-a-point-where-we-are-using-this-to-do-something-useful-or-lose-even-the-social-permission-to-generate-these-tokens
What, are Microslop's Slopilot services not useful enough on their own?
This is interesting, but not for the obvious reasons:
The era of Photoshop may be ending, as Adobe stocks take a battering
https://finance.yahoo.com/news/era-photoshop-may-ending-adobe-165724311.html
tl;dr Adobe's stock is down because of "AI", so Yahoo concludes this is the end of the "Photoshop era".
What's interesting here is that the "Photoshop era" is ending – according to Yahoo – not because users are turning away, but because *investors* are.
The "market" for software and services is a side-show. Only the stock market matters.
1/2
How #AI #hype muddies the work of scientists (apparently) without their approval.
#NASA's scientists have created a tool that uses Convolutional Neural Networks to validate Kepler and TESS exoplanet signals. The 28-author paper, https://iopscience.iop.org/article/10.3847/1538-3881/ae03a4/pdf, a treatise on how to use NNs to reliably perform difficult classification tasks, describes in painstaking detail the process of preparing the data and extracting various features.
𝐓𝐡𝐞𝐫𝐞 𝐢𝐬 𝐧𝐨 𝐦𝐞𝐧𝐭𝐢𝐨𝐧 𝐰𝐡𝐚𝐭𝐬𝐨𝐞𝐯𝐞𝐫 𝐨𝐟 𝐀𝐈 𝐢𝐧 𝐭𝐡𝐞 𝐩𝐚𝐩𝐞𝐫.
But
1/
AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns
https://www.irishtimes.com/business/2026/01/20/ai-boom-could-falter-without-wider-adoption-microsoft-chief-satya-nadella-warns/
Oh dear me are people not "adopting" lying autocomplete widely enough to keep the line going up? Oh noes. Nobody could have foreseen this! 🤯
I cannot wait for the official announcement of a new #Microslop programme, Adopt-an-AI. 🤣
What an absolute tool. "The bubble might burst if you all don't help pump it!" 🤡
If anyone ever tries to tell you LLMs are just as good as (or better than!) humans at generating text (or code), ask them about "dogfooding".
Dogfooding here means training LLMs on their own output. It is absolutely disastrous for such models:
https://www.nature.com/articles/s41586-024-07566-y
Every "AI" company will have layers upon layers of defenses against LLM-generated text ending up in training data.
Which is why they desperately seek out any and all human-created text out there.
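The collapse claim above can be sketched with a toy simulation (my own illustration, not the setup in the Nature paper): repeatedly refit a categorical "model" on finite samples of its own output. Any category that happens to draw zero samples vanishes for good, so the support can only shrink, and the rare tail disappears first.

```python
# Toy model-collapse sketch (hypothetical vocabulary and counts):
# each "generation" is trained only on samples from the previous one.
import random
from collections import Counter

random.seed(42)

vocab = list("abcdefghij")
weights = [10, 8, 6, 5, 4, 3, 2, 1, 1, 1]  # long-ish tail of rare tokens

for generation in range(10):
    # Sample a small "training set" from the current model...
    counts = Counter(random.choices(vocab, weights=weights, k=40))
    # ...and refit: unseen tokens are gone forever (an absorbing state).
    vocab = [tok for tok in vocab if counts[tok] > 0]
    weights = [counts[tok] for tok in vocab]

print(sorted(vocab))  # surviving tokens; the weight-1 tail rarely makes it
```

The numbers are arbitrary, but the mechanism is the one the linked paper studies: finite sampling plus refitting systematically loses the tails of the distribution.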
Remember how we were all supposed to be "left behind" if we don't jump on the Metaverse bandwagon? Especially businesses?
Yeah, about that:
https://www.theverge.com/tech/863209/meta-has-discontinued-its-metaverse-for-work-too
But today we should treat absolutely seriously all the bullshit about "being left behind" if we don't adopt "AI"! 🤡
And I have continually found nothing compelling. Worse, I have typically found very frustrating examples of people relying on strong but unstated assumptions, and on logic that depends entirely on putting on blinders and ignoring reason. Until the hype dies, I am not interested in them. I am still interested in the old AI stuff, like path finding, NNs, and Markov chains.
Stanford AI Experts Predict What Will Happen in 2026...
For the last few years, we obsessed over whether AI could do the thing. 2026 is when the grown-up questions take over:
・ How well does it actually work?
・ What does it really cost?
・ And who benefits when the hype fades?
The shift is overdue. Expect fewer jaw-dropping demos and more uncomfortable audits of where AI truly boosts productivity and where it mostly boosts slide decks.
Geopolitics will stop being a footnote and start shaping the stack. More countries will demand that their data, models, and compute stay on home soil. “Global by default” will quietly disappear.
In science and medicine, the mood changes from wow to why. Getting the right answer is no longer enough. If a model is correct, people will want to know what inside it did the work. And in law and other high-stakes domains, the winners will not be the flashiest tools. They will be the ones that survive domain-specific scrutiny and function inside real workflows, not just pristine demos.
The AI era is not ending.
The experimentation phase is.
TL;DR
🧠 Evaluation replaces evangelism
⚡ AI sovereignty accelerates
🎓 Dashboards track work impact
🔍 Open the black box
https://hai.stanford.edu/news/stanford-ai-experts-predict-what-will-happen-in-2026
#ArtificialIntelligence #ResponsibleAI #AIstrategy #FutureOfWork #Hype
Yesterday I read plenty about #KI (AI) in my feed. The tenor: it's all a #Hype / a bubble! I (certainly) don't want to wade into the debate about how true that is.
Instead, I want to offer a reminder: bubble or not, the effects (economic and otherwise) are already very real: for #Kreative (creatives), for #Synchron (dubbing) and voice actors, for entry-level office jobs! We are facing a tsunami of immiseration there.
#InteligenciaArtificial (Artificial Intelligence), a discipline to which I have dedicated practically 20 years of research, is being devoured by the same capitalist logics that afflict so many other things that work badly in our lives. The #hype, the bubble, the fad, the insatiable appetite or, as we call it in #IA (AI), #ColapsoModal (modal collapse).
AI is based on statistics: on identifying patterns in datasets that help predict some variables as a function of others. All of this is achieved by building calculations from combinations of the input data that yield an approximation to the value we want to predict, and by defining an objective function: a way of measuring how far the prediction is from the correct answer, so that we can vary the calculations slightly to get closer and closer.
Well, if we are not careful, we might define that evaluation in such a way that the calculations end up making a trivial prediction: the mode of the distribution, the value that repeats the most. Always predicting that it is raining in Buenaventura, Colombia guarantees being right 260 out of 365 days a year, but it makes the model useless, because it doesn't actually predict anything; it just repeats the #moda (mode).
This also happens with generative AIs, from GANs to language models. And the problem gets worse if data generated with AIs is used to train other AIs, guaranteeing modal collapse. The AI finds it can optimize through #pereza (laziness): if I bet everything on the value that gives me the most reward, I don't have to bother looking any further.
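The Buenaventura example above can be sketched in a few lines (labels and day counts taken from the post): a "model" that always predicts the mode of the label distribution scores about 71% accuracy while conveying no information at all.

```python
# Mode-collapse baseline: always predict the most frequent label.
from collections import Counter

def mode_baseline(labels):
    """Return the most frequent label and the accuracy of always predicting it."""
    mode, count = Counter(labels).most_common(1)[0]
    return mode, count / len(labels)

# Assume it rains 260 of 365 days in Buenaventura, as in the post.
days = ["rain"] * 260 + ["dry"] * 105
mode, acc = mode_baseline(days)
print(mode, round(acc, 3))  # rain 0.712
```

Any objective function that a mode-predicting baseline can score well on is measuring frequency, not understanding; that is exactly the trap the post describes.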
Video: The Great AI Scam – How One Industry Fooled So Many for So Long
Welcome to episode 4 of my news videos. Today's topic is how the AI industry is pushing its overall scam on people.
https://www.freezenet.ca/video-the-great-ai-scam-how-one-industry-fooled-so-many-for-so-long/
#Editorial #News #AI #ArtificialIntelligence #hype #investing
The Federal Minister for Digital Affairs, in his speech: »For the first time, machines surpass humans in the very thing that until now made us unique: our intelligence.«
Oh dear. Pocket calculators »surpass« me too – and yet they are not mathematicians. AI is statistics: predicting, imitating, sounding plausible. Understanding? Zero. Consciousness? Zero. Responsibility? Zero. Whoever talks about »intelligence« here mostly demonstrates how little of it they understand.
/kuk
I'll be glad when the #ki #hype #blase (bubble) finally bursts.
Advantages:
- Hard drive prices come down
- Annoying AI buttons everywhere disappear
- Power consumption drops
- There are fewer deepfakes and less AI slop
- Art becomes art again
- The financial market calms down (AI stocks are currently the sh*t)
Disadvantages: people have to think for themselves again…
I took a course with Dr Blit. His opening statement was, "I just sold all of my tech stocks" and "AI is the new electricity" and "generative AI can do everything in descriptive analytics".
That was all I needed to know that he was a hype artist on par with #Altman, #Nadella, and #Musk.
Try not to laugh too hard at the hyperbole.
What I thought then is still true today: to make something like a software agent legitimately useful for a lot of people would require a large amount of low-level grunt work and non-technical work (2) of the sort that the typical Silicon Valley company is unwilling to do. (3) The technology is the absolute easiest part of this task. Throwing a Bigger Computer at the problem leaves all those other pieces of work undone. It's like putting a bigger engine in a car with no wheels, hoping that'll make the car go.
By the way #AI companies and VCs, I'm available for contract work and have done due diligence research before if you ever want to stop wasting everyone's time and money!
#AI #GenAI #GenerativeAI #LLM #agents #hype #SiliconValley #VentureCapital #dev #tech
(1) Which we've been told repeatedly is essentially infinite time in the tech world.
(2) Establishing semantic data standards and convincing a large enough number of people to implement them being an important component. LLMs do not magically develop protocols and solve all the ETL-style problems of translating among different ones. The Semantic Web didn't really stick for a lot of reasons, but one reason is that it's hard!
(3) Back when I was still in the startup world I was asked several times by VCs to tell them what I thought about some new startup that claimed to be able to magically clean and fuse data. I think they're still very keen on investing in this style of magic, because it requires an intense amount of human labor, but I think where companies landed was invisibilizing low-paid workers in other countries and pretending a computer did the work they did. Which has also been happening for well over a quarter of a century.
The reason for this shouldn't be hard to see but apparently is. Simplistically, science is about hypothesis-driven investigation of research questions. You formulate the question first, you derive hypotheses from it, and then you make observations designed to tell you something about the hypotheses. (1)(2) If you stuff an LLM in what should be the observations part, you are not performing observations relevant to your hypothesis, you are filtering what might have been observations through a black box. If you knew how to de-convolve the LLM's response function from the signal that matters to your question, maybe you'd be OK, but nobody knows how to do that. (3)
If you stick an LLM in the question-generating part, or the hypothesis-generating part, then forget it, at that point you're playing a scientistic video game. The possibility of a scientific discovery coming out of it is the same as the possibility of getting physically wet while watching a computer simulation of rain. (4)
If you stick an LLM in the communication part, then you're putting yourself on the Retraction Watch list, not communicating.
#science #LLM #AI #GenAI #GenerativeAI #AIHype #hype
(1) I know this is a cartoonishly simple view of science, but I do firmly believe that something along these lines is the backbone of it, however real-world messy it becomes in practice.
(2) A large number of computer scientists are very sloppy about this process--and I have been in the past too--but that does not mean it should be condoned.
(3) Things are so dire that very few even seem to have the thought that this is something you should try to do.
(4) Yes, you might discover something while watching the LLM glop, but that's you, the human being, making the discovery, not the AI, in a chance manner despite the process, not in a systematic manner enhanced by the process. You could likewise accidentally spill a glass of water on yourself while watching RainSim.
I feel like people have been sold the idea that #GenerativeAI must provide productivity gains, and many don't bother to examine whether it really does.
"DeepSeek launched a free, open-source large-language model in late December, claiming it was developed in just two months at a cost of under $6 million — a much smaller expense than the one called for by Western counterparts."
The "Western counterparts" are claiming training a model might take years and billions of dollars. This has always been a hyped-up grift, with snake oil salesmen and con artists being showered with money and power. It's really quite amazing how profoundly unintelligent "the market" is in practice.
"These developments have stoked concerns about the amount of money big tech companies have been investing in AI models and data centers, and raised alarm that the U.S. is not leading the sector as much as previously believed."
The sad reality is that the US could lead in this field (1), if we'd stop routinely putting narcissists and con artists in charge and showering them with praise even when they fail.
#AI #GenAI #GenerativeAI #LLM #SnakeOil #hype #grift #MarketCapitalism
(1) Putting aside whether we should, which is an important question.
"The influence of powerful imagery and rhetorics in promotional material for computing is neither new nor surprising. There is a longstanding tradition of overselling the latest technology, claiming it to be the next (industrial) revolution or promising that it will outperform human beings. With the passage of time it may become difficult to recognize these invented ideas and images that have acquired a life of their own and have become integrated as part of a historical narrative. As modern, digital electronic computing is nearing its 100th anniversary, such recognition does not become easier, though we may be in need of it more than ever before."
From https://cacm.acm.org/opinion/the-myth-of-the-coder/
"This particular case, where the praise of automatic programming implied the obsolescence of the coder, can be instructive for us today. There is a line that runs from Grace Hopper’s selling of “automatic coding” to today’s promises of large AI models such as Chat-GPT for revolutionizing computing by automating programming or even making human programmers obsolete. Then as now, it is certainly the case that the automation of some parts of programming is progressing, and it will upset or even redefine the division of labor. However, this is not a simple straightforward process that replaces the human element in one or more specific phases of programming by the computer itself. Rather, practice adopts new techniques to assist with existing tasks and jobs. Such changes do not generalize easily, and using titles like “coders”—or today’s “prompt engineers”—while memorable, does not do justice to the subtle process of changing practice."
#ComputerScience #computers #computing #programming #dev #tech #hype #GPT #ChatGPT #Copilot
"The bubble has begun to burst. Users have lost faith, clients have lost faith, VCs have lost faith."
From: Five signs that the GenAI honeymoon is over
GenAI bubble, November 2022 – July 2024, RIP.
so much of the promise of generative AI, as it is currently constituted, is driven by rote entitlement.
He puts into clear terms what had previously been an unarticulated, creeping suspicion I had about #GenAI. Clearly there are many angles from which to come at what's going on with #AI #hype , but I appreciate this one quite a bit.
Brutal takedown of the bullsh&*# that is "self-driving cars": https://www.youtube.com/watch?v=2DOd4RLNeT4
It's a long video but frankly you can get the gist of most of it by scanning over the chapter titles. "Hitting Fake Children". "Hitting Real Children". "FSD Expectations" is a long list of the various lies #Elon #Musk has told about "full self driving" Teslas. Also the "Openpilot" chapter has a picture of Elon Musk's face as a dartboard.
The endless hype and full-on lies of the self-driving-car con from roughly 2016 to 2020 resemble the #AI #hype about #LLMs like #ChatGPT going on right now. If you've been in this industry long enough and have been honest with yourself about it, you've seen all this before. Until something significant changes, we really ought to view anything coming out of the tech sector with deep suspicion (https://bucci.onl/notes/Another-AI-Hype-Cycle).
All it'd take is one clever math result demonstrating you don't need absolutely gigantic neural networks trained on mind-bogglingly-huge datasets to achieve the AI goals of most companies, and NVIDIA's hardware dominance evaporates. Why would you spend thousands or tens of thousands of dollars on a GPU that uses 300 Watts of power when you could achieve the same thing with an ASIC or FPGA that uses 3 Watts? This is already true for many applications, but apparently it hasn't been widely realized yet. It'll be hard to ignore if/when it becomes true for the vast majority of applications. Which it could.
This is very reminiscent of the dot-com bubble, which expanded and then popped when I was in my early 20s.
The greatest risk is that large language models act as a form of ‘shock doctrine’, where the sense of world-changing urgency that accompanies them is used to transform social systems without democratic debate.
One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI's shoddy emulation of real tasks as an excuse to trim their workforce.
Thanks to its insatiable appetite for data, current AI is uneconomic without an outsourced global workforce to label the data and expunge the toxic bits, all for a few dollars a day. Like the fast fashion industry, AI is underpinned by sweatshop labour.