buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
"Congress set to reject Trump’s major budget cuts to NSF, NASA, and energy science."
https://www.science.org/content/article/congress-set-reject-trump-s-major-budget-cuts-nsf-nasa-and-energy-science
"The U.S. Congress has delivered another rebuke of President Donald Trump’s plans to slash this year’s budgets of several science agencies. Today, lawmakers hammering out final bills covering the National Science Foundation (#NSF), #NASA science, and Department of Energy (#DOE) research programs unveiled an agreement to spend very close to current levels."
#DefendResearch #Funding #Trump #TrumpVResearch #USPol #USPolitics
📬 In this issue of #AdvancesInComputing:
🎙️ Personal narratives from computing visionaries on the power and promise of federally funded research
🧵 and more from Interactions HCI
The Python Software Foundation just had to pass on a US$1.5M grant from the National Science Foundation for PyPI maintenance. One condition of the funding was that the PSF drop any DEI efforts, and if that condition were violated, the NSF could claw back the money even if it had already been spent. That clawback risk was too great, so the PSF had to decline the funding. (This is pretty shameful: PyPI security is a broad benefit, but of course the administration prefers to grind its own ax here.)
If you develop #Python code or rely upon it for your business, it'd be great if your company could become a sponsor of the PSF, or if you could donate personally to the PSF.
#NSF #NSFFunding #funding #FederalFunding #GrantFunding #science #ScienceFunding #AntiScience #AntiIntellectual #USPol
https://mastodon.world/@Mer__edith/113197090927589168
Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
With the growing attention and investment in recent AI approaches such as large language models, the narrative that the larger the AI system the more valuable, powerful and interesting it is is increasingly seen as common sense. But what is this assumption based on, and how are we measuring value, power, and performance? And what are the collateral consequences of this race to ever-increasing scale? Here, we scrutinize the current scaling trends and trade-offs across multiple axes and refute two common assumptions underlying the 'bigger-is-better' AI paradigm: 1) that improved performance is a product of increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power, which centralizes decision-making in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its applications throughout society.

Currently this is on #arXiv which, if you've read any of my critiques, is a dubious source. I'd love to see this article appear in a peer-reviewed or otherwise vetted venue, given the importance of its subject.
I've heard through the grapevine that US federal grantmaking agencies like the #NSF (National Science Foundation) are also consolidating around generative AI. This trend is evident if you follow directorates like CISE (Computer and Information Science and Engineering). A friend told me there are several NSF programs that tacitly demand LLMs of some form be used in project proposals, even when doing so is not obviously appropriate. A friend of a friend, a university professor, has said "if you're not doing LLMs you're not doing machine learning".
This is an absolutely devastating mindset. While it might be true at a certain cynical, pragmatic level, it's clearly indefensible at an intellectual, scholarly, scientific, and research level. Willingly throwing away the diversity of your own discipline is bizarre, foolish, and dangerous.