buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
I'm positively surprised to see so much sense coming out of #Google #Deepmind, for a change. What's going on?

A guess? Critihype: hype of their viewpoints and methods clothed in what appears to be criticism, in hopes that people like us spread it (thereby achieving the goal of hyping themselves). Google has regularly done just this for roughly two decades at this point. I won't read corporate PR wrapped in a lab coat from such clearly compromised labs with obvious, deep conflicts of interest. However, that'd be my guess. A lot of these folks fight amongst themselves about whether AGI is possible, whether doom or utopia will result from attempts at creating it, and other religious nonsense.
I would suggest that folks who think using AI is great for mathematicians should think again. It seems as little as 10 minutes of use can be problematic. What else do we know of that provides short-term gains at the cost of long-term loss?
Here, through a series of randomized controlled trials on human-AI interactions (N = 1,222), we provide causal evidence for two key consequences of AI assistance: reduced persistence and impairment of unassisted performance. Across a variety of tasks, including mathematical reasoning and reading comprehension, we find that although AI assistance improves performance in the short-term, people perform significantly worse without AI and are more likely to give up. Notably, these effects emerge after only brief interactions with AI (approximately 10 minutes). These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning.

From AI Assistance Reduces Persistence and Hurts Independent Performance, on arXiv https://arxiv.org/abs/2604.04721
#AI #GenAI #GenerativeAI #AgenticAI #AIAssistants #CognitiveImpairment #math #MathematicalReasoning #ReadingComprehension
I grew up in rural Pennsylvania, and though the sex ed then and there was a tiny bit better than what this author describes experiencing in Arkansas, it was not by much. I wonder sometimes whether non-Americans grasp how backwards and regressive US culture can be. Anyway, an attempt to adapt the rhetoric of abstinence-only sex "education" to shame AI critics is complicated for this reason. E.g., arguments about the need to abstain from use of AI might actually work on some people, but might fall flat or even raise the ire of others who had bad experiences. Putting on my evil tech marketer hat, I'd avoid this frame because of the complexity and unpredictability of how it might land (I don't actually have this hat). There are probably some effective wedges to drive here, and I think the linked article hits on one.
We all have good ethical and political reasons to reject the president’s words. But those who serve in government, and in the armed forces, have been placed under the legal shadow of genocide by what Trump wrote. To bomb a bridge or a dam or a power plant or a desalinization facility, very likely a war crime in any event, could very well have a different legal significance, a genocidal one, if it takes place after the expression of genocidal intent by the commander and head of state.

From https://snyder.substack.com/p/the-president-speaks-genocide
What appears as critique – yearning for smaller, weirder, more human spaces – often functions as brand repair. Netstalgia becomes a strategy: it restores trust without redistributing power, softens anger without changing infrastructures and reframes structural problems as matters of vibe, design or community feeling.

"Am I working on change, or am I working on brand repair?" is an important question to ask oneself regularly, it seems to me. It's especially relevant for the tech sector, open source, and computer science.
#AI #GenAI #GenerativeAI #LLM #tech #dev #software #OSS #FOSS #ComputerScience
I've seen similar language around AI, which is also a project of the powerful, used to stifle reasonable debate about this technology. So, I'm quite sensitive to this rhetoric.
In the before times (5+ years ago), very few cared who was joining the network. (Notice the "network", this place isn't Mastodon and never was.) When someone joined, it was seen as a good thing no matter who that was, because it made the network larger, the decentralization was spreading. But in the last 5 years, the goals seemingly shifted. Suddenly more people on here turned to a bad thing, a decentralized network meant to allow anyone to have a voice turned into a fractured space of gatekept echo-chambers with very little bridges between them. Some might say, that is the result of not gatekeeping the today's gatekeepers, but I don't really care and still mostly have the old mindset in my mind. It is more of a reflection on how humanity changed.

I've been using "the network" since the days of USENET, 1990 onward, and I can attest that, at least in my experience, none of this rings true even a little.
Even so, the discourse I'm responding to is about Mastodon, not about some nebulous or idealized "network". Goalpost shifting is not constructive.
Nobody has "power" hereOf course we do. I have the power to block whoever I want and whichever hashtags I want, for instance. I also have the power to restrict who registers an account on my fediverse instance. You are not permitted to join my instance, and in that sense I very much have power over you: I am able to restrict your liberty. You may not want an account and I don't blame you, but that doesn't change the equation.
I said nothing about excluding people from the network. I literally said "excluding people and topics they don't wish to interact with". You seem to be arguing against something that wasn't said, which is not constructive.
Oh, and if anyone cares, my little gatekept and bridgeless corner of the fediverse is quite lovely, thanks, and grand proclamations about fractured spaces or whatnot have no bearing whatsoever on this simple reality.
excluding anyone from this network is equivalent for both cases, marginalized groups and "AI people".

These are obviously not equivalent in any sense that matters. You might as well include "people who love putting topsoil on their pizza" as a marginalized group because someone said "eww" once. Superficial associations like this sound disingenuous to my ears, and in any case are not constructive.
And that it isn't healthy.

Why would excluding "AI people" in particular be unhealthy? What exactly are the ill effects?
And absolutely I've seen a bunch of people say rude stuff to @olivia@scholar.social on here. Ugly stuff, undeserved.
Yes, a lot of you don't want AI posts in your feed (or pick any other topic) but the solution isn't to keep "AI People" from joining Mastodon

If this were not a disingenuous strawman---because it's impossible for one thing---I'd ask "why not?" I wouldn't invite the "AI People" I've encountered into my house either, because I've found them to be unpleasant and I get to choose who enters my space. This solution has worked quite well for me over the years.
It seems to me that what this person is saying is that people should give up the power they have---namely, their power to exclude people and topics they don't wish to interact with---because it favors them. That's a typical rhetorical move of AI boosters: demanding you give up your power because you having and exercising that power inconveniences them.
any more than it is keeping marginalized communities off of Mastodon.

One should ask why this person chose to use the most offensive possible metaphor to make their case for inclusion. It's almost as though they don't believe the argument their words are shaped into resembling.
avoidance purity is incompatible with increasing AI literacy

"Avoidance purity" is both a strawman and a dogwhistle. Nobody serious is doing either of these things, and a lot of bad actors use this phrase to cudgel people into submission or sow doubt. A strange take, frankly.
That said, the conclusion is false. I practice an extreme form of avoidance purity when it comes to experimenting with whether murder would enhance my life. Nevertheless, I am "murder literate". I contend the overwhelming majority of folks can say the same.
(I recognize that I too am whacking a strawman, but this is for effect; the point gestured at stands regardless).
#tech #dev #computers #AI #GenAI #GenerativeAI #advertising #InformationPollution
(which led to the suicide of a good player recently).Are you referring to Daniel Naroditsky?
I’ve made this point before about how inane AI hype is now, but a computer beat the best chess player in the world in 1997. No one pretended, after 1997, it wasn’t worthwhile to have humans compete in chess. In fact, the world of chess developed strict protocols around computer use and you can get banned from tournaments if you use a computer program as you play. You are certainly shamed and mocked. AI and writing needs to be treated the same way. I do think people should be shamed for using AI to help them write creatively. It’s an embarrassment, and a form of cheating.

#AI #GenAI #GenerativeAI #AIHype #LLMs #writing #tech #dev #coding #SoftwareDevelopment #SoftwareEngineering #software
I agree completely that none of this is good!
Microsoft rushed Azure out of the gates under intense competitive pressure. Corners were cut. Fundamental principles of reliability and operational simplicity were quietly abandoned.

Meaning all this came straight from the top. "Intense competitive pressure" is self-induced.
Fortunately this kind of thing, rushing software out the door under self-induced competitive pressure, doesn't happen anymore. Organizations have learned their lessons about the perils of operating this way. (/s)
Layered on this chaos was an Azure-wide mandate: all new software must be written in Rust.

LOL
On a more serious note: LMAO
On top of all that, the org had a hard commitment to deliver the already long-delayed OpenAI bare-metal SKUs that had been promised for years. This work started around May 2024 with a target of Spring 2025 and was led by a Principal engineer who had evidently never tackled a task of that scale.

This detail really struck me. Microsoft's deep internal dysfunction drove OpenAI right into the outstretched arms of Datacenter Enron.

Fast-forward to March 10, 2025: OpenAI signed an $11.9 billion compute deal with CoreWeave for model training and services.
An unbelievable series, thanks for sharing. I feel like @davidgerard@circumstances.run could make quite a bit of hay out of this one.
Claude code works....
All I know is at the receiving end it feels like it is Christmas every day because the tool does stuff that helps novices do stuff.

Claude Code "works" in the way that slot machines "work" for gambling addicts or Christmas "works" for children.
Claude Code deliberately sets up an addiction loop the way casino gambling machines do, inducing the perception that it is helping when more and more data shows that it does exactly the opposite. It does not help novices "do stuff". Rather, it deskills them, prevents them from learning, and passes the negative consequences of these effects downstream to someone else, all while fooling them into believing they are being more productive.
The reality is that insurance companies increasingly won't insure companies that lean on AI for exactly this reason: the downsides are not documented and therefore not auditable, and are ultimately pushed outside the company, which introduces liability and other loss risk. Companies are of course free to take on needless and unaccounted-for internal risk, but insurance companies won't cover it, and that is a very important signal about the actual real-world value of this technology.
"Christmas every day" is a phrase gambling addicts use too. But somebody has to clean up the wrapping paper and replace the batteries that die and dispose of the toys that break or are discarded after one use. Somebody has to buy them in the first place, and somebody has to make them for them to be available to buy. Christmas "works" for children, but it's a temporary illusion created for their benefit, not the basis for a sustainable workflow, business, or economy.
If software developers largely perceive that most of their contemporaries are using LLMs and that expressing something negative about it won't have any material effect on that reality, they take no risks expressing concerns about the technology.