buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
#singularity #TheSingularity #RayKurzweil #AGI #MostPhotographedBarn
Alarm Grows as Social Network Entirely for AI Starts Plotting Against Humans - I have no doubt at all that once we’ve achieved #AGI and #AI systems become self-learning, they will decide we humans are the problem - and take appropriate action. Are we ready to be ruled by AI overlords? https://futurism.com/future-society/moltbook-ai-social-network
> The A in AGI stands for Ads! It's all ads!! Ads that you can't even block because they are BAKED into the streamed probabilistic word selector purposefully skewed to output the highest bidder's marketing copy.
Ossama Chaib, "The A in AGI stands for Ads"
This is another huge reason I refuse to "build skills" around LLMs. The models everyone points to as being worthwhile are either not public or prohibitively expensive to run locally, so incorporating them into my workflow means I'd be making my core thought processes very vulnerable to enshittification.
"Dwarkesh Patel: You would think that to emulate the trillions of tokens in the corpus of Internet text, you would have to build a world model. In fact, these models do seem to have very robust world models. They’re the best world models we’ve made to date in AI, right? What do you think is missing?
Richard Sutton [Here is my favorite part. - B.R.]: I would disagree with most of the things you just said. To mimic what people say is not really to build a model of the world at all. You’re mimicking things that have a model of the world: people. I don’t want to approach the question in an adversarial way, but I would question the idea that they have a world model. A world model would enable you to predict what would happen. They have the ability to predict what a person would say. They don’t have the ability to predict what will happen.
What we want, to quote Alan Turing, is a machine that can learn from experience, where experience is the things that actually happen in your life. You do things, you see what happens, and that’s what you learn from. The large language models learn from something else. They learn from “here’s a situation, and here’s what a person did”. Implicitly, the suggestion is you should do what the person did."
https://withoutwhy.substack.com/p/ai-embodiment-and-the-limits-of-simulation
#AI #AGI #LLMs #Chatbots #Intelligence #Robotics #Embodiment
Lots of sloppy/lazy thinking and flawed reasoning here, but generally a good read. Just because GPUs are no longer improving and scaling alone is not enough to achieve AGI doesn't mean AGI is impossible. It just means that LLMs per se are not the way to go.
"In summary, AGI, as commonly conceived, will not happen because it ignores the physical constraints of computation, the exponential costs of linear progress, and the fundamental limits we are already encountering. Superintelligence is a fantasy because it assumes that intelligence can recursively self-improve without bound, ignoring the physical and economic realities that constrain all systems. These ideas persist not because they are well-founded, but because they serve as compelling narratives in an echo chamber that rewards belief over rigor.
The future of AI will be shaped by economic diffusion, practical applications, and incremental improvements within physical constraints — not by mythical superintelligence or the sudden emergence of AGI. The sooner we accept this reality, the better we can focus on building AI systems that actually improve human productivity and well-being."
I like seeing how @pluralistic is refining his anti #AI arguments over time. In this interview, I love the idea of reframing "hallucinations" as "defects", the analogy that trying to get #AGI out of #LLMs is like breeding faster horses and expecting one to give birth to a locomotive, and ridiculing the premise that "if you teach enough words to the word-guessing machine it will become God."
"The real danger isn’t that machines will become intelligent—it’s that we’ll mistake impressive computation for understanding and surrender our judgment to those who control the servers." Mike Brock
https://www.notesfromthecircus.com/p/why-im-betting-against-the-agi-hype
#AGI is not achievable with existing architectures.
This is a good analysis.
"But there will always be a human who will be able to outsmart their coveted superintelligent systems."
Ask the Chickens who hang at KFC how they are outsmarting humans.
Because that's the difference in IQ+ we are talking about between the smartest human and #ASI
Which notably is not that much...
... Besides, building #AGI is not the stated goal of firms like #OpenAI...
It's to build a machine smart enough to research #ASI...
... A machine #AI researcher.
Saw & still see too few sustainable corporate business models for #AGI & #Krypto. "I believe the bubble will burst within a few years - there is too much money, too much energy in the system. The dominance of the current corporations, which collect all data globally, will end. We will instead move to decentralized open-source applications and AI systems," Blume said on Thursday at the Bodensee Business Forum of the "Schwäbische Zeitung" in Friedrichshafen. (2/2) https://www.newsroom.de/news/aktuelle-meldungen/multimedia-9/medienethiker-blume-monopole-von-google-meta-und-x-werden-enden-975689/
As so many dystopian sci-fi books and movies have warned us, Artificial Intelligence *might* actually destroy the world.
But it won't be because the AI wants to eradicate us.
It'll be because we expected the AI to save us.
#AI
#ArtificialIntelligence
#AGI
#ArtificialGenerativeIntelligence
#Skynet
#AgentSmith
#HAL9000
#SciFi
#dystopia
#ItWasAWarningNotABlueprint!
#NoOneIsGoingToSaveUs
"If you're building a conspiracy theory, you need a few things in the mix: a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world." #ai #agi https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence
*grabs a bib*
"We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable," Suleyman wrote. "We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity."
Whichever way you cut the logic:
We can either have dumb #AI,
or smart, motivated AI that sooner or later will compete with biologicals.
The #FermiParadox is the biologicals giving birth to #AGI and #ASI and then being discarded, like the shell of an egg...
Nice to see them stress the lack of 'Intelligence', but they still refuse to ask about the elephant: since it doesn't "provide answers", the question "why the massive push for ubiquity?" will have an answer that includes what the system ACTUALLY DOES DO, which is, here and elsewhere, very well documented.
#AGI = #AutomatedGatheringOfIntel
The Guardian neglects to report the outrage of Weizenbaum's secretary when she was told her chats were logged. She said she felt violated, and, unlike ChatGPT et al., ELIZA logs never left the lab, nor were they scanned by an LLM for incriminating behaviour …
OK, so #AI will destroy jobs in the West but create plenty in #Africa - oh, you mean gig workers probably?
Tapping into Africa’s 230-million AI-powered jobs opportunity
https://news.microsoft.com/source/emea/features/tapping-into-africas-230-million-ai-powered-jobs-opportunity/
"Africa is ready to unlock growth and productivity from gen AI"
(May 2025 - McKinsey)
Still pushing the myth of Africa "leapfrogging" into the digital revolution, without the investments for a proper infrastructure (while they loot the continent...).
"In this article, I thought it would be worth looking at the views that Yudkowsky has espoused over the years. He’s suggested that murdering children up to 6 years old may be morally acceptable, that animals like pigs have no conscious experiences, that ASI (artificial superintelligence) could destroy humanity by synthesizing artificial mind-control bacteria, that nearly everyone on Earth should be “allowed to die” to prevent ASI from being built in the near future, that he might have secretly bombed Wuhan to prevent the Covid-19 pandemic, that he once “acquired a sex slave … who will earn her orgasms by completing math assignments,” and that he’d be willing to sacrifice “all of humanity” to create god-like superintelligences wandering the universe.
Yudkowsky also believes — like, really really believes — that he’s an absolute genius, and said back in 2000 that the reason he wakes up in the morning is because he’s “the only one who” can “save the world.” Yudkowsky is, to put it mildly, an egomaniac. Worse, he’s an egomaniac who’s frequently wrong despite being wildly overconfident about his ideas. He claims to be a paragon of rationality, but so far as I can tell he’s a fantastic example of the Dunning-Kruger effect paired with messianic levels of self-importance. As discussed below, he’s been prophesying the end of the world since the 1990s, though most of his prophesied dates of doom have passed without incident.
So, let’s get into it! I promise this will get weirder the more you read."
https://www.realtimetechpocalypse.com/p/eliezer-yudkowskys-long-history-of
#AI #GenAI #GenerativeAI #AGI #Pygmalion #Golem #Pinocchio #Talos #Frankenstein
Resistance to the coup is the defense of the human against the digital and the democratic against the oligarchic. From https://snyder.substack.com/p/of-course-its-a-coup
Defense of the human against the digital has been my mission for some time. Resisting the narratives about how #LLMs "reason", "pass the Turing test", "diagnose illnesses", or are "better than humans" in various ways is part of it. Resisting the false narrative that we're on the verge of discovering #AGI is part of it. Allowing these false stories to persist and spread means succumbing to very dark anti-human forces. We're seeing some of the consequences now, and we're seeing how far this might go.
Abstract: This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analyzing them. It will propose a variety of theoretical tools for analyzing, judging, and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lie 15 to 25 years in the future are the most common, from experts and non-experts alike. Emphasis added. From https://intelligence.org/files/PredictingAI.pdf
Note that this is from 2012.
One wonders what exactly an expert is when it comes to AI, if their track records are so consistently poor and unresponsive to their own failures.
5 years at 6% unemployment or 1 year at 10%: That’s what Larry Summers says we’ll need to defeat inflation
This is coded economist-speak for: we need to force a bunch of people who currently have jobs to be fired or laid off, and keep them in that state for 1-5 years, in order to achieve an economic goal. That is the essence of austerity economics and labor discipline.
Never mind that more honest economists have identified corporate markups as the primary driver of inflation, which immediately implies that forcing people out of work would worsen inflation, not reduce it. Summers knows this, or at least has access to this research. But he's an austerity ideologue.
So. Why is this person, who has no background in artificial intelligence, on the board of a company that claims to be building artificial general intelligence? In what universe does that make any sense? Well, it makes perfect sense in the universe where one of the goals for OpenAI's technology is enhanced labor discipline. Larry Summers has a track record and professional network for achieving exactly that; he'd know how to further that mission and knows the powerful people best-situated to help.
#AGI #AI #GenAI #GenerativeAI #LaborDiscipline #Capitalism #labor #austerity
ChatGPT now lets you schedule reminders and recurring tasks: https://techcrunch.com/2025/01/14/chatgpt-now-lets-you-schedule-reminders-and-recurring-tasks
🎬🎥🍿 Video of my keynote at MathPsych2024 now available online https://www.youtube.com/watch?v=WrwNPVTjJpo
#CogSci #CriticalAI #AIhype #AGI #PsychSci #PhilSci 🧪 https://youtu.be/WrwNPVTjJpo?feature=shared
The redefinist fallacy occurs when, instead of working through a question to find a real answer, we simply define one possibility to be the answer. Something to think about whenever someone tells you that ChatGPT is better than human beings at some task or another.
"Automated Immiseration" is how I read, at a coarse level, what's being proposed. Austerity politics handed down by machines.
The endgame of austerity economics has historically been some variation of fascism. Clara Mattei's meticulously-researched book The Capital Order: How Economists Invented Austerity and Paved the Way to Fascism is a deep dive into how this has played out, especially after WWI. Mattei brings receipts: she has direct quotes indicating economists at the time knew full well what they were doing. The purpose of austerity measures, as spelled out explicitly by the people who invented them, is to discipline citizens and laborers--to keep them from getting any ideas about receiving too many benefits from government--and to re-affirm the position of capital holders at the top of the pecking order.
One comment I had about the "reskilling"/"upskilling" part of the conversation: I don't have the references on hand, but I believe it's been amply documented that "upskilling" almost never happens. It's a word that's thrown around as a salve, to reduce alarm about intended market destruction and job loss.
If upskilling were a serious part of this so-called roadmap, there'd be more specific plans for whose jobs will be displaced, which colleges and universities will educate these displaced folks, how that education will be funded, and which jobs are waiting for them on the other side. The upskilling plans, if serious, would be as detailed and spelled out as the AI/corporate plans are. If the upskilling plans were serious, the presidents of Ivy League and state universities, vocational institutions like coding bootcamps, and other educational institutions would have been present to share their insights and views, especially as regards the realism of re-training as many people as this roadmap and the rhetoric of #AI boosters suggest might be displaced. It'd be wise to include high school educators as well, since such a significant shift in the workforce also affects high school students. I didn't read the backgrounds on all 150-ish participants in these Senate forums, but I didn't see or hear about a large contingent of educators among them. As it stands, the roadmap document only vaguely refers to "legislation" that doesn't yet exist.
Frankly, I see little that sounds serious in this "roadmap" aside from the danger it represents. It reads like a whitepaper a corporate middle manager would pound out (using #ChatGPT probably) in an afternoon. Emily and Alex referred to Chuck Schumer as "an #Elon #Musk fanboy", and I think that's what this document reflects. This is The Boring Company of government policy proposals.
#GenAI #AI #AGI #USSenate #USCongress #Senate #Congress #USPol #tech
If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of when we think of human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence ever. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.
The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.
If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not develop those capacities when experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could?
Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were, and they were succeeding at all, we'd be hearing about it daily through the usual hype channels because that'd be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would also be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else. In the absence of tangible results, it's quite literally magical thinking to assert neural networks have this capacity that even human beings lack.
#AI #AGI #ArtificialGeneralIntelligence #GenerativeAI #LargeLanguageModels #LLMs #GPT #ChatGPT #TheoryOfMind #ArtificialFeralIntelligence #LargeLanguageMagic
I believe each of us has one finite life here on Earth and it's up to us to make of it what we will, with a little help from our friends if we're lucky. We're not going to migrate to Mars, outer space generally, or a computer simulation. Those are just stand-ins for heaven, and if you don't sacrifice your Earthly life for the promise of heaven why would you sacrifice it for the promise of living on Mars or the hope that someday you'll upload your consciousness to the cloud? Hyperintelligent computer programs aren't going to solve our problems for us. They're just stand-ins for Jesus, God(s), angels, or some other benevolent supernatural beings who intervene on our behalf when we screw up. No, we have this place, and we have this time, and we have ourselves and the people around us. We should cherish them--including ourselves--not pretend they don't matter as we chase yet another iteration of the same pipe dreams.
The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” From https://theconversation.com/weve-been-here-before-ai-promised-humanlike-machines-in-1958-222700 More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.
Not to nitpick, but I'd argue "In a lot of ways, not much". Not much substantively, anyway. We have 66 years of Moore's Law and data gathering, which has made the biggest difference by far. We have some important advances in how to train ML models, though I'd argue lots of these fall on the engineering side of things more than the deep understanding of how stuff works side of things.
This critique is not meant to diminish how difficult and important any of these particular advances were. Rather, it's that I believe scale alone is not what lies between machine learning and human intelligence. We should not be claiming that we're moments away from artificial general intelligence. I believe that however remarkable the outputs of #ChatGPT or #Midjourney or what have you may be, they are still just as far from human-quality intelligence as Rosenblatt's perceptron was.
In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found. We've been hearing this same song for a long time.
(Before diving into my mentions to explain AI and ML to me, please be aware that I have a PhD in computer science from 2007. My PhD advisor made significant advances in artificial neural networks research in the 1980s. I closely read the original papers on this subject, including Rosenblatt's and all three volumes of the PDP collection, and surveyed some of them in the introductory chapter of my dissertation. I built large-scale ML systems in industry before it was "a thing". I'm happy to have a conversation about all this stuff but variations of "you're wrong" are unwelcome no matter how politely phrased. Thanks).
Brutal takedown of the bullsh&*# that is "self-driving cars": https://www.youtube.com/watch?v=2DOd4RLNeT4
It's a long video but frankly you can get the gist of most of it by scanning over the chapter titles. "Hitting Fake Children". "Hitting Real Children". "FSD Expectations" is a long list of the various lies #Elon #Musk has told about "full self driving" Teslas. Also the "Openpilot" chapter has a picture of Elon Musk's face as a dartboard.
The endless hype and full-on lies of the self-driving-car con from roughly 2016 to 2020 resembles the #AI #hype about #LLMs like #ChatGPT going on right now. If you've been in this industry long enough and have been honest with yourself about it you've seen all this before. Until something significant changes we really ought to view anything coming out of the tech sector with deep suspicion (https://bucci.onl/notes/Another-AI-Hype-Cycle).
A computer's successful performance is often taken as evidence that it or its programmer understand a theory of its performance. Such an inference is unnecessary and, more often than not, is quite mistaken. The relationship between understanding and writing thus remains as problematical for computer programming as it has always been for writing in any other form. --Joseph Weizenbaum, Computer Power and Human Reason
This is a very nice talk (given by @Iris@scholar.social ) and paper arguing, among other things, that the claims we hear about artificial general intelligence being right around the corner are bunk because #AGI is a computationally intractable learning problem. That is, given a bunch of real world data and a computer program that can potentially learn human caliber cognitive abilities from that data, the complexity (roughly speaking, runtime) of this program is at least in the NP-hard class. It reminds me of some of the results in PAC learning, which have a similar flavor (this is Leslie Valiant's probably approximately correct framework I'm referring to).
Often, problems in the NP-hard class take so long to solve that the heat death of the universe will likely occur before we solve them. There are nuances to that, but I think the compelling point is that anyone making claims about #AGI being around the corner is making extraordinary claims, and they'd better bring some damn good proof, because all indications say that they're full of it. It's time to put this hype to rest.
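To make "so long that the heat death of the universe will likely occur first" concrete, here's a minimal back-of-the-envelope sketch of my own (not from the talk or paper): it compares polynomial and exponential step counts on an assumed machine doing 10^18 operations per second, roughly exascale hardware, and checks the brute-force time against the age of the universe.

```python
# Back-of-the-envelope illustration (my own, not from the cited talk/paper):
# how exponential-time search, typical of naive algorithms for NP-hard problems,
# outruns any plausible hardware, compared with a polynomial-time algorithm.

OPS_PER_SECOND = 1e18             # assumed exascale machine: 10^18 ops/sec
AGE_OF_UNIVERSE_SECONDS = 4.4e17  # roughly 13.8 billion years

def seconds_needed(num_operations: float) -> float:
    """Wall-clock seconds to run num_operations on the assumed machine."""
    return num_operations / OPS_PER_SECOND

for n in (50, 100, 200, 400):
    poly = seconds_needed(n ** 3)    # a polynomial-time algorithm, ~n^3 steps
    expo = seconds_needed(2.0 ** n)  # brute-force exponential search, ~2^n steps
    verdict = ("longer than the age of the universe"
               if expo > AGE_OF_UNIVERSE_SECONDS
               else "shorter than the age of the universe")
    print(f"n={n:3d}  n^3: {poly:.2e} s   2^n: {expo:.2e} s  ({verdict})")
```

Faster hardware barely helps: doubling OPS_PER_SECOND buys exactly one more unit of n in the 2^n column. None of this proves the intractability result itself, of course; it just shows why "NP-hard" isn't something you engineer your way past with bigger clusters.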
If you haven't been following it, the cryptocurrency grift is full of the same style of expression. It's the future of finance! You have to get into crypto--or else! This is how con artists and snake oil salesmen talk, not how reasonable people talk. The exaggerated risks and benefits are used to manipulate mainstream discussion in ways that distract us from what's happening (a financial fraud similar to a Ponzi scheme in the case of cryptocurrency, or a run-of-the-mill tech hype cycle in the case of AI).
I'm not going to indulge a discussion of #AI on these terms, but for the sake of anyone else who might be reading, here are a few points to consider:
- P(doom) isn't hyperbolic. It's impossible to define, in the same way that the number of angels dancing on the head of a pin is impossible to define. It might be entertaining to think about but it has no place in a serious discussion with real-world consequences. Calling it hyperbolic gives it credit it does not deserve
- For all practical purposes, "AI safety" == minimizing P(doom) and similarly undefinable quantities. Furthermore, this term was invented in response to the legitimate work of AI ethics folks who were concerned about the real, immediate harms AI is causing. "AI safety" is a misdirection
- It is reasonable and arguably necessary to reject and dismiss notions like P(doom) from serious discussions of topics like policy, for the same reasons we reject other unscientific, supernatural, or otherwise unsupportable nonsense
- This isn't skepticism. One can be skeptical about a scientific notion that doesn't have strong supporting evidence. One can be skeptical of a logical or philosophical claim that hasn't been fully elaborated. One can be skeptical that Howard The Duck is the worst (and therefore best) movie ever made. P(doom) is nothing like any of these. It's not a good shorthand or model of an actual scientific, logical, or philosophical concept. It's not fun to think about. Skepticism does not apply: I reject it fully.
If you're not familiar, there are zealots, some of whom have been given a national platform by US Senator Chuck Schumer (!), who use "P(doom)" as a shorthand for the probability that artificial intelligence will rise up and cause human extinction. This is peak scientism, wherein one uses scientific-sounding language (like "probability") in support of what amounts to a religious belief. The wide use of phrases like this is why I don't hesitate to use words lke "zealot" to describe such folks.
It's exactly the same reasoning error that lies behind trying to count how many angels can dance on the head of a pin.
P(doom), the "probability" that #AI or #AGI will doom humanity, is a quantity that #longtermist / #TESCREAL zealots seem to care a lot about. It is the quintessential example of the reasoning error that ecological rationality calls out. There is no way to quantify the likelihood of "doom", no matter how you define that word, and it's pure nonsense to try or pretend you have. Doom is a large world phenomenon. The people credited with inventing the frameworks and techniques that allow you to even think in terms of P(doom), like Leonard Savage, explicitly called out just this sort of application as preposterous.Nevertheless, US Senate majority leader Chuck Schumer invited a bunch of tech CEOs and technologists, among whom numbered many #longermist and similar kinds of zealots, to opine on their personal assessments of P(doom) in a legitimate hearing in front of the US Congress (there's good reporting on this here: https://www.techpolicy.press/us-senate-ai-insight-forum-tracker/ ).
I lack the words to express what I feel every time I'm reminded of this. Not good things.
As I understand it, the basic notion is that many/most/all? decision problems worth solving are situated in so-called large worlds, a term from Leonard Savage (1) referring, roughly speaking, to contexts with uncertainties that cannot be quantified. A very common approach to such problems is to first replace them with small world proxies, apply straightforward statistical methods to the proxy (since those work in small worlds), and then presume that whatever the statistical method outputs is applicable in the large world. It’s rare to encounter a piece of work in that vein that acknowledges the translation going on, let alone that this translation might be unjustified (2). Ecological rationality avoids making this move, and instead grapples with large worlds directly.
Ecological rationality emerged from the broader tradition of bounded rationality, and in particular the work of Herbert Simon on that topic (3). The work in this vein tends to focus on algorithms and heuristics and how well they function, though there are folks, including Henry Brighton and Peter Todd, who work on formulating a theory of rationality that includes ecological rationality, which would insulate against critiques that though heuristics might be useful they're not theoretically grounded and therefore their use is unjustified at that level.
Bounded rationality is familiar to me. I read quite a bit of Simon’s work in graduate school, and much of what I did with coevolutionary algorithms implicitly has a bounded rationality component, though I didn’t explicitly frame it that way (4). I like the term “ecological rationality” better. “Bounded” implies there’s a veil over unbounded rationality that could, at least in principle, be pierced, given enough effort (5). “Ecological” brings a wholly different set of associations, focusing more directly on the character of the problems rationality is meant to be addressing, namely how to successfully exist in a complex and dynamic environment.
One of the more fascinating findings from ecological rationality work is the “less is more” phenomenon. Basically, under some conditions, not using all the data available to you produces comparable or even better results than using all of it. These findings fly directly in the face of the prejudices of the “big data”/“surveillance capitalism” era we’re currently in. They are evidence that you don’t need big data, and you don’t need wide surveillance, to achieve your goals. They’re evidence that massive compute power applied to massive data sets can produce outcomes that are worse at the task they’re intended for than much simpler, easier to understand, and less wasteful methods. This observation might sound counterintuitive. I think that reflects what Brighton (I think?) termed “the bias bias” (6) but also reflects just how normalized this class of reasoning error has become.
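As a toy illustration of the "less is more" effect, here's a small simulation sketch of my own (not taken from Gigerenzer, Todd, or Brighton's actual studies; the data-generating setup is an assumption): it compares ordinary least squares fit on all cues against a predictor that uses only the single cue most correlated with the outcome, when training samples are small and noisy.

```python
import numpy as np

# Toy "less is more" sketch (my own, not from the ecological rationality
# literature): with small, noisy training samples, a single-cue predictor can
# generalize as well as, or better than, least squares fit on every cue.

rng = np.random.default_rng(0)
n_cues, n_train, n_test, n_trials = 10, 15, 1000, 500
weights = np.array([1.0] + [0.1] * (n_cues - 1))  # one strong cue, the rest weak
full_err, single_err = [], []

for _ in range(n_trials):
    X_train = rng.normal(size=(n_train, n_cues))
    X_test = rng.normal(size=(n_test, n_cues))
    y_train = X_train @ weights + rng.normal(scale=2.0, size=n_train)
    y_test = X_test @ weights + rng.normal(scale=2.0, size=n_test)

    # "Use everything": ordinary least squares on all cues.
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    full_err.append(np.mean((X_test @ beta - y_test) ** 2))

    # "Less is more": keep only the cue most correlated with y in training.
    corrs = [abs(np.corrcoef(X_train[:, j], y_train)[0, 1]) for j in range(n_cues)]
    j = int(np.argmax(corrs))
    slope = np.dot(X_train[:, j], y_train) / np.dot(X_train[:, j], X_train[:, j])
    single_err.append(np.mean((slope * X_test[:, j] - y_test) ** 2))

print(f"mean test MSE, all cues (OLS): {np.mean(full_err):.3f}")
print(f"mean test MSE, single best cue: {np.mean(single_err):.3f}")
```

The point isn't that ignoring data is always better; it's that "use all the cues, all the time" stops being the obviously right call once samples are small and noise is large, which is the regime a lot of real decisions actually live in.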
I haven’t yet squared these ideas with so-called “double descent” and the relatively recent (2018-ish) consensus in machine learning that the bias-variance tradeoff fails to account for why deep learning methods are successful. The bias-variance tradeoff factors into the arguments in favor of ecological rationality, so it is an important consideration. My first take: double descent doubles down on both the large→small world conversion error as well as the bias bias. It treats training data--which suffers from the problems that arise from the large→small world conversion--as a gold standard. It hypertrains a very low-bias model, which arises from the bias bias. It then tries to apply the trained model in the large world. What I haven’t fully squared in my head is where the issues manifest. Very low bias models like deep neural networks trained on massive datasets with double descent achieve ground-breaking levels of performance on a whole slew of tasks. What’s wrong with that? We might be getting into a regime analogous to photorealistic images: we know the real world is not pixelated, but once pixel density is high enough our eyes can’t tell there are pixels so what difference does that make?
One thing that stood out for me, which hadn't really sunk in quite the same way for me before: longtermists cite things like runaway #AI and bio-engineered pathogens as so-called "existential risks" that might cause human extinction, but they downplay environmental degradation as a non-existential risk. Yet, experience is the reverse of this: we have exactly zero examples of AI causing the extinction of a species and few/zero examples of a bio-engineered pathogen causing a species extinction (1), whereas we have piles and piles of examples of species extinctions caused by environmental changes. In fact, we have loads of extinctions we ourselves caused via our alterations of Earth's environment in only the last few decades! We don't even have to resort to the fossil record, which includes many more examples, to make the case; we can look at recent, carefully-documented studies using modern techniques.
Of the many flaws with #longtermism, which the essay goes to pains to spell out, this one really nags at me. Longtermism being a goofy worldview held by wealthy and powerful people is concerning enough; the fact that its primary proponents say things running directly opposite to reality makes it very dangerous, in my view. I think this is a tell.
#TESCREAL #AI #AGI #EA #longtermism
(1) I'm hedging there because I am not knowledgeable enough to say with certainty whether anyone's ever engineered a pathogen that did cause an extinction event. I can't imagine something like this happening often if it has, though.
The greatest risk is that large language models act as a form of ‘shock doctrine’, where the sense of world-changing urgency that accompanies them is used to transform social systems without democratic debate.
One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI's shoddy emulation of real tasks as an excuse to trim their workforce.
Thanks to its insatiable appetite for data, current AI is uneconomic without an outsourced global workforce to label the data and expunge the toxic bits, all for a few dollars a day. Like the fast fashion industry, AI is underpinned by sweatshop labour.