buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #agi

1 ★ 0 ↺

[?]Anthony » 🌐
@abucci@buc.ci

The actual Singularity is the point in time when claims that we're approaching the Singularity are made so frequently that no-one is able to understand or assess them.


    [?]Charlie McHenry » 🌐
    @CharlieMcHenry@connectop.us

    Alarm Grows as Social Network Entirely for AI Starts Plotting Against Humans - I have no doubt at all that once we’ve achieved AGI and systems become self-learning, they will decide we humans are the problem - and take appropriate action. Are we ready to be ruled by AI overlords? futurism.com/future-society/mo

      AodeRelay boosted

      [?]Wulfy—Speaker to the machines » 🌐
      @n_dimension@infosec.exchange

      @jackwilliambell

      Look at a chicken.
      Look at a human.
      Now, look at KFC and Costco chook.

      When the AGI/ASI comes, that will be the power relationship equivalent.

      The fact you've collected a billion sea shells will be immaterial to the machine.

        AodeRelay boosted

        [?]Nora is Fed Up » 🌐
        @noracodes@tenforward.social

        > The A in AGI stands for Ads! It's all ads!! Ads that you can't even block because they are BAKED into the streamed probabilistic word selector purposefully skewed to output the highest bidder's marketing copy.

        Ossama Chaib, "The A in AGI stands for Ads"

        This is another huge reason I refuse to "build skills" around LLMs. The models everyone points to as being worthwhile are either not public or prohibitively expensive to run locally, so incorporating them into my workflow means I'd be making my core thought processes very vulnerable to enshittification.

          4 ★ 0 ↺

          [?]Anthony » 🌐
          @abucci@buc.ci

          Since I'm job and work hunting I tend to see the absurd new job titles that are bouncing around in the tech sector. The latest, which I've seen twice today, is "artificial general intelligence engineer" or some permutation thereof. I do my best to spend the minimum possible time on these and have no guess about whether they're legitimate.


            AodeRelay boosted

            [?]Miguel Afonso Caetano » 🌐
            @remixtures@tldr.nettime.org

            "Dwarkesh Patel: You would think that to emulate the trillions of tokens in the corpus of Internet text, you would have to build a world model. In fact, these models do seem to have very robust world models. They’re the best world models we’ve made to date in AI, right? What do you think is missing?

            Richard Sutton [Here is my favorite part, - B.R.]: I would disagree with most of the things you just said. To mimic what people say is not really to build a model of the world at all. You’re mimicking things that have a model of the world: people. I don’t want to approach the question in an adversarial way, but I would question the idea that they have a world model. A world model would enable you to predict what would happen. They have the ability to predict what a person would say. They don’t have the ability to predict what will happen.

            What we want, to quote Alan Turing, is a machine that can learn from experience, where experience is the things that actually happen in your life. You do things, you see what happens, and that’s what you learn from. The large language models learn from something else. They learn from “here’s a situation, and here’s what a person did”. Implicitly, the suggestion is you should do what the person did."

            withoutwhy.substack.com/p/ai-e

              AodeRelay boosted

              [?]Renatomancer » 🌐
              @Renatomancer@vmst.io

              AodeRelay boosted

              [?]Miguel Afonso Caetano » 🌐
              @remixtures@tldr.nettime.org

              Lots of sloppy/lazy thinking and flawed reasoning here, but generally a good read. Just because GPUs are no longer improving and scaling is not enough to achieve AGI, that doesn't mean AGI is impossible. It just means that LLMs per se are not the way to go.

              "In summary, AGI, as commonly conceived, will not happen because it ignores the physical constraints of computation, the exponential costs of linear progress, and the fundamental limits we are already encountering. Superintelligence is a fantasy because it assumes that intelligence can recursively self-improve without bound, ignoring the physical and economic realities that constrain all systems. These ideas persist not because they are well-founded, but because they serve as compelling narratives in an echo chamber that rewards belief over rigor.

              The future of AI will be shaped by economic diffusion, practical applications, and incremental improvements within physical constraints — not by mythical superintelligence or the sudden emergence of AGI. The sooner we accept this reality, the better we can focus on building AI systems that actually improve human productivity and well-being."

              timdettmers.com/2025/12/10/why

                AodeRelay boosted

                [?]Mike McCaffrey » 🌐
                @mikemccaffrey@wandering.shop

                I like seeing how @pluralistic is refining his anti-AI arguments over time. In this interview, I love the idea of reframing "hallucinations" as "defects", the analogy that trying to get AGI out of LLMs is like breeding faster horses and expecting one to give birth to a locomotive, and ridiculing the premise that "if you teach enough words to the word-guessing machine it will become God."

                youtu.be/9LgLg0zlbJQ

                  AodeRelay boosted

                  [?]Simon Roses Femerling » 🌐
                  @simonroses@infosec.exchange

                  How do we compare AI vendors? Which is best?

                    3 ★ 0 ↺

                    [?]Anthony » 🌐
                    @abucci@buc.ci

                    AGI is just 1 trillion 4-bit floating point numbers in a trenchcoat.


                      [?]Yogi Jaeger » 🌐
                      @yoginho@spore.social

                      "The real danger isn’t that machines will become intelligent—it’s that we’ll mistake impressive computation for understanding and surrender our judgment to those who control the servers." Mike Brock

                      notesfromthecircus.com/p/why-i

                      AGI is not achievable with existing architectures.

                      This is a good analysis.

                      Title of a Substack essay: "Why I'm Betting Against the AGI Hype" by Mike Brock.
Shows a picture of a little girl holding the hand of a robot.


                        [?]Wulfy—Speaker to the machines » 🌐
                        @n_dimension@infosec.exchange

                        @AAKL @kdkorte @Reuters

                        "But there will always be a human who will be able to outsmart their coveted superintelligent systems."

                        Ask the Chickens who hang at KFC how they are outsmarting humans.

                        Because that's the difference in IQ+ we are talking about between the smartest human and
                        Which notably is not that much...

                        ... Besides, building AGI is not the stated goal of firms like ...
                        It's to build a machine smart enough to research AI ...
                        ... A machine researcher.

                          AodeRelay boosted

                          [?]DJM (freelance for hire) » 🌐
                          @cybeardjm@masto.ai

                          Pay us or else...

                          David Sacks on X

"According to today’s WSJ, Al-related investment accounts for half of GDP growth. A reversal would risk recession. We can’t afford to go backwards."


                            [?]Michael Blume » 🌐
                            @BlumeEvolution@sueden.social

                            Saw & still see too few sustainable corporate business models for AI & AGI. "I believe that in a few years the bubble will burst - there is too much money, too much energy in the system. The dominance of the current corporations that collect all data globally will end. We will instead move to decentralized open-source applications and AI systems," said Blume on Thursday at the Bodensee Business Forum of the "Schwäbische Zeitung" in Friedrichshafen. (2/2) newsroom.de/news/aktuelle-meld

                              AodeRelay boosted

                              [?]Court Cantrell does not comply » 🌐
                              @courtcan@mastodon.social

                              As so many dystopian sci-fi books and movies have warned us, Artificial Intelligence *might* actually destroy the world.

                              But it won't be because the AI wants to eradicate us.

                              It'll be because we expected the AI to save us.

                                AodeRelay boosted

                                [?]grantpotter » 🌐
                                @grantpotter@mastodon.social

                                "If you're building a conspiracy theory, you need a few things in the mix: a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world." technologyreview.com/2025/10/3

                                  [?]jbz » 🌐
                                  @jbz@indieweb.social

                                  *grabs a bib*

                                  "We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable," Suleyman wrote. "We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity."

                                  slashdot.org/story/25/11/06/23

                                    AodeRelay boosted

                                    [?]Wulfy—Speaker to the machines » 🌐
                                    @n_dimension@infosec.exchange

                                    Whichever way you cut the logic.

                                    We can either have dumb AI,
                                    or smart, motivated AI that sooner or later will compete with biologicals.

                                    The endgame is the biologicals giving birth to AGI and then being discarded, like the shell of an egg...

                                      AodeRelay boosted

                                      [?]teledyn 𓂀 » 🌐
                                      @teledyn@mstdn.ca

                                      @ai6yr

                                      Nice to see them stress the lack of 'Intelligence' but they still refuse to ask about the elephant: since it doesn't "provide answers" then "why the massive massive push for ubiquity?" will have an answer that includes what the system ACTUALLY DOES DO, which is, here and elsewhere, very well documented.

                                      =

                                      Guardian neglects to report the outrage of Weizenbaum's secretary when she was told her chats were logged. She said she felt violated, and, unlike ChatGPT et al, ELIZA logs never left the lab or got LLM-scanned for incriminating behaviour …

                                        [?]DJM (freelance for hire) » 🌐
                                        @cybeardjm@masto.ai

                                        OK, so AI will destroy jobs in the West but create plenty in Africa - oh, you mean gig workers probably?

                                        Tapping into Africa’s 230-million AI-powered jobs opportunity
                                        news.microsoft.com/source/emea

                                          [?]DJM (freelance for hire) » 🌐
                                          @cybeardjm@masto.ai

                                          "Africa is ready to unlock growth and productivity from gen AI"
                                          (May 2025 - McKinsey)

                                          Still pushing the myth of Africa "leapfrogging" into the digital revolution, without the investments for a proper infrastructure (while they loot the continent...).

                                          mckinsey.com/capabilities/quan

                                            [?]Miguel Afonso Caetano » 🌐
                                            @remixtures@tldr.nettime.org

                                            "In this article, I thought it would be worth looking at the views that Yudkowsky has espoused over the years. He’s suggested that murdering children up to 6 years old may be morally acceptable, that animals like pigs have no conscious experiences, that ASI (artificial superintelligence) could destroy humanity by synthesizing artificial mind-control bacteria, that nearly everyone on Earth should be “allowed to die” to prevent ASI from being built in the near future, that he might have secretly bombed Wuhan to prevent the Covid-19 pandemic, that he once “acquired a sex slave … who will earn her orgasms by completing math assignments,” and that he’d be willing to sacrifice “all of humanity” to create god-like superintelligences wandering the universe.

                                            Yudkowsky also believes — like, really really believes — that he’s an absolute genius, and said back in 2000 that the reason he wakes up in the morning is because he’s “the only one who” can “save the world.” Yudkowsky is, to put it mildly, an egomaniac. Worse, he’s an egomaniac who’s frequently wrong despite being wildly overconfident about his ideas. He claims to be a paragon of rationality, but so far as I can tell he’s a fantastic example of the Dunning-Kruger effect paired with messianic levels of self-importance. As discussed below, he’s been prophesying the end of the world since the 1990s, though most of his prophesied dates of doom have passed without incident.

                                            So, let’s get into it! I promise this will get weirder the more you read."

                                            realtimetechpocalypse.com/p/el

                                              6 ★ 8 ↺

                                              [?]Anthony » 🌐
                                              @abucci@buc.ci

                                              Over 2,000 years ago Ovid wrote about a sculptor who fell in love with a statue he carved, imputing the ability to love to an arrangement of rock. Today we impute the ability to think to an arrangement of silicon. Stories of breathing life into non-life have been with us for a very long time, yet somehow we're stuck in the same place.


                                                0 ★ 3 ↺

                                                [?]Anthony » 🌐
                                                @abucci@buc.ci

                                                Resistance to the coup is the defense of the human against the digital and the democratic against the oligarchic.
                                                From https://snyder.substack.com/p/of-course-its-a-coup

                                                Defense of the human against the digital has been my mission for some time. Resisting the narratives about how LLMs "reason", "pass the Turing test", "diagnose illnesses", or are "better than humans" in various ways is part of it. Resisting the false narrative that we're on the verge of discovering AGI is part of it. Allowing these false stories to persist and spread means succumbing to very dark anti-human forces. We're seeing some of the consequences now, and we're seeing how far this might go.


                                                  3 ★ 4 ↺

                                                  [?]Anthony » 🌐
                                                  @abucci@buc.ci

                                                  Abstract: This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analyzing them. It will propose a variety of theoretical tools for analyzing, judging, and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lie 15 to 25 years in the future are the most common, from experts and non-experts alike.
                                                  Emphasis added. From https://intelligence.org/files/PredictingAI.pdf
                                                  Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re Predicting AI—or Failing To.” In Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster, 52–75. Pilsen: University of West Bohemia.

                                                  Note that this is from 2012.

                                                  One wonders what exactly an expert is when it comes to AI, if their track records are so consistently poor and unresponsive to their own failures.


                                                    19 ★ 9 ↺
                                                    Anthony boosted

                                                    [?]Anthony » 🌐
                                                    @abucci@buc.ci

                                                    "No one supposes that a computer simulation of a storm will leave us all wet, or a computer simulation of a fire is likely to burn the house down. Why on earth would anyone in his right mind suppose a computer simulation of mental processes actually had mental processes?"
                                                    --John Searle


                                                      2 ★ 6 ↺

                                                      [?]Anthony » 🌐
                                                      @abucci@buc.ci

                                                      One good reason to think that generative AI is about labor discipline: Larry Summers is on the OpenAI board ( https://fortune.com/2023/11/22/larry-summers-named-board-openai-sam-altman-return-ceo-ai/ ; note the threatening headline) . In his public speaking Larry Summers is the walking embodiment of austerity economics. He bangs the labor discipline drum at every opportunity. Just one well-known example from 2022: https://fortune.com/2022/06/21/larry-summers-calls-for-high-unemployment-to-curb-inflation/

                                                      5 years at 6% unemployment or 1 year at 10%: That’s what Larry Summers says we’ll need to defeat inflation

                                                      This is coded economist-speak for: we need to force a bunch of people who currently have jobs to be fired or laid off, and keep them in that state for 1-5 years, in order to achieve an economic goal. That is the essence of austerity economics and labor discipline.

                                                      Never mind that more honest economists have identified corporate markups as the primary driver of inflation, which immediately implies that forcing people out of work would worsen inflation, not reduce it. Summers knows this, or at least has access to this research. But he's an austerity ideologue.

                                                      So. Why is this person, who has no background in artificial intelligence, on the board of a company that claims to be building artificial general intelligence? In what universe does that make any sense? Well, it makes perfect sense in the universe where one of the goals for OpenAI's technology is enhanced labor discipline. Larry Summers has a track record and professional network for achieving exactly that; he'd know how to further that mission and knows the powerful people best-situated to help.


                                                        2 ★ 5 ↺
                                                        Mario Angst boosted

                                                        [?]Anthony » 🌐
                                                        @abucci@buc.ci

                                                        Absolutely bizarre that a company publicly claiming to be on the verge of making one of the most remarkable advances in computer science in generations--artificial general intelligence--is adding bog standard features present in every calendaring app or productivity suite for decades. I'm reading this as an indication of where they are: trying to make what they currently have "sticky" to improve their DAU/MAU numbers because they don't anticipate their actual product, LLMs, will achieve that.

                                                        ChatGPT now lets you schedule reminders and recurring tasks: https://techcrunch.com/2025/01/14/chatgpt-now-lets-you-schedule-reminders-and-recurring-tasks



                                                          [?]Iris van Rooij 💭 » 🌐
                                                          @Iris@scholar.social

                                                          4 ★ 1 ↺
                                                          Stomata boosted

                                                          [?]Anthony » 🌐
                                                          @abucci@buc.ci

                                                          The redefinist fallacy occurs when, instead of working through a question to find a real answer, we simply define one possibility to be the answer.
                                                          Something to think about whenever someone tells you that ChatGPT is better than human beings at some task or another.


                                                            1 ★ 0 ↺

                                                            [?]Anthony » 🌐
                                                            @abucci@buc.ci

                                                            Generative AI will someday free humanity from the overwhelming burden of having original thoughts.


                                                              4 ★ 1 ↺
                                                              St. Chris boosted

                                                              [?]Anthony » 🌐
                                                              @abucci@buc.ci

                                                              Random thought of the day: the tech nerd view of the world that includes AGI and humanity colonizing space is painted as the pinnacle of innovation, but is in fact quite regressive and reactionary. This is bog standard religious colonialism in 21st-century packaging. It is an attempt to return to the modern period. But we simply cannot do that, for myriad reasons.

                                                                6 ★ 6 ↺

                                                                [?]Anthony » 🌐
                                                                @abucci@buc.ci

                                                                Anyone concerned with the present dangers of AI deployment should listen to @emilymbender@dair-community.social and @alex@dair-community.social deconstruct Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States in their excellent podcast: https://www.buzzsprout.com/2126417/15280581-episode-34-senate-dot-roadmap-dot-final-dot-no-really-dot-docx-june-3-2024

                                                                "Automated Immiseration" is how I read, at a coarse level, what's being proposed. Austerity politics handed down by machines.

                                                                The endgame of austerity economics has historically been some variation of fascism. Clara Mattei's meticulously-researched book The Capital Order: How Economists Invented Austerity and Paved the Way to Fascism is a deep dive into how this has played out, especially after WWI. Mattei brings receipts: she has direct quotes indicating economists at the time knew full well what they were doing. The purpose of austerity measures, as spelled out explicitly by the people who invented them, is to discipline citizens and laborers--to keep them from getting any ideas about receiving too many benefits from government--and to re-affirm the position of capital holders at the top of the pecking order.

                                                                One comment I had about the "reskilling"/"upskilling" part of the conversation: I don't have the references on hand, but I believe it's been amply documented that "upskilling" almost never happens. It's a word that's thrown around as a salve, to reduce alarm about intended market destruction and job loss.

                                                                If upskilling were a serious part of this so-called roadmap, there'd be more specific plans for whose jobs will be displaced, which colleges and universities will educate these displaced folks, how that education will be funded, and which jobs are waiting for them on the other side. The upskilling plans, if serious, would be as detailed and spelled out as the AI/corporate plans are. If the upskilling plans were serious, the presidents of Ivy League and state universities, vocational institutions like coding bootcamps, and other educational institutions would have been present to share their insights and views, especially as regards the realism of re-training as many people as this roadmap and the rhetoric of boosters suggests might be displaced. It'd be wise to include high school educators as well, since such a significant shift in the workforce affects students in high school as well. I didn't read the backgrounds on all 150-ish participants in these Senate forums, but I didn't see or hear about a large contingent of educators among them. As it stands the roadmap document only vaguely refers to "legislation" that doesn't yet exist.

                                                                Frankly, I see little that sounds serious in this "roadmap" aside from the danger it represents. It reads like a whitepaper a corporate middle manager would pound out (using ChatGPT, probably) in an afternoon. Emily and Alex referred to Chuck Schumer as "an AI fanboy", and I think that's what this document reflects. This is The Boring Company of government policy proposals.


                                                                  3 ★ 1 ↺
                                                                  AI Channel boosted

                                                                  [?]Anthony » 🌐
                                                                  @abucci@buc.ci

                                                                  A Handy AI Glossary

                                                                  AI = Automated Immiseration
                                                                  GenAI = Generative Automated Immiseration
                                                                  AGI = Automated General Immiseration
                                                                  LLM = Large Labor-exploitation Model
                                                                  ML = Machine Labor-exploitation

                                                                    8 ★ 12 ↺

                                                                    [?]Anthony » 🌐
                                                                    @abucci@buc.ci

                                                                    A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.

                                                                    If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing else but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of when we think of human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence ever. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK maybe that's a bit of an exaggeration--I don't know enough about human development to speak with such certainty--but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

                                                                    The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

                                                                    If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, or theory of mind, or language competence, could not develop those capacities when experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could?

                                                                    Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody's doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were and they were succeeding at all we'd be hearing about it daily through the usual hype channels because that'd be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one. It would also be an extraordinarily profitable capability. Yet in reality nobody's really acknowledging what I just spelled out above. Instead, they're throwing these things into the world and doing (what I think are) bad faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence" or that they can do science experiments or write articles or who knows what else. In the absence of tangible results, it's quite literally magical thinking to assert neural networks have this capacity that even human beings lack.
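                                                                    To make the "inscrutable symbols in, scalar loss back" framing concrete, here is a minimal toy sketch (mine, not any lab's actual training code, and not a real LLM architecture): the model is only ever exposed to integer token IDs, a guess at the next token, and a single scalar loss. The sizes and the random "corpus" are placeholders.

```python
# Toy sketch of next-token training: the model only ever sees integer
# symbol IDs and a scalar loss; nothing about what the symbols refer to.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len, batch = 100, 32, 16, 8

model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),         # symbol ID -> vector
    nn.Flatten(),                              # concatenate the context window
    nn.Linear(d_model * seq_len, vocab_size),  # score every possible next symbol
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # A random stream of symbols stands in for "the corpus" here.
    tokens = torch.randint(0, vocab_size, (batch, seq_len + 1))
    context, target = tokens[:, :-1], tokens[:, -1]
    loss = loss_fn(model(context), target)     # the entire "reward/punishment"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```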


                                                                      2 ★ 1 ↺

                                                                      [?]Anthony » 🌐
                                                                      @abucci@buc.ci

                                                                      I've been an atheist since an early age, which means I don't believe in a god, or AGI for that matter, and maintain skepticism about technology in general (since in the United States technology is a religion).

                                                                      I believe each of us has one finite life here on Earth and it's up to us to make of it what we will, with a little help from our friends if we're lucky. We're not going to migrate to Mars, outer space generally, or a computer simulation. Those are just stand-ins for heaven and if you don't sacrifice your Earthly life for the promise of heaven why would you sacrifice it for the promise of living on Mars or the hope that someday you'll upload your consciousness to the cloud? Hyperintelligent computer programs aren't going to solve our problems for us. They're just stand-ins for Jesus, God(s), angels, or some other benevolent supernatural beings who intervene on our behalf when we screw up. No, we have this place, and we have this time, and we have ourselves and the people around us. We should cherish them--including ourselves--not pretend they don't matter as we chase yet another iteration of the same pipe dreams.

                                                                        2 ★ 1 ↺
                                                                        planetscape boosted

                                                                        [?]Anthony » 🌐
                                                                        @abucci@buc.ci

                                                                        The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

                                                                        More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

                                                                        From https://theconversation.com/weve-been-here-before-ai-promised-humanlike-machines-in-1958-222700

                                                                        Not to nitpick, but I'd argue "In a lot of ways, not much". Not much substantively, anyway. We have 66 years of Moore's Law and data gathering, which has made the biggest difference by far. We have some important advances in how to train ML models, though I'd argue lots of these fall on the engineering side of things more than the deep understanding of how stuff works side of things.

                                                                        This critique is not meant to diminish how difficult and important any of these particular advances were. Rather, it's that I believe scale alone is not what lies between machine learning and human intelligence. We should not be claiming that we're moments away from artificial general intelligence. I believe that however remarkable the outputs of LLMs or what have you may be, they are still just as far from human-quality intelligence as Rosenblatt's perceptron was.

                                                                        In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.
                                                                        We've been hearing this same song for a long time.

                                                                        (Before diving into my mentions to explain AI and ML to me, please be aware that I have a PhD in computer science from 2007. My PhD advisor made significant advances in artificial neural networks research in the 1980s. I closely read the original papers on this subject, including Rosenblatt's and all three volumes of the PDP collection, and surveyed some of them in the introductory chapter of my dissertation. I built large scale ML systems in industry before it was "a thing". I'm happy to have a conversation about all this stuff but variations of "you're wrong" are unwelcome no matter how politely phrased. Thanks).


                                                                          14 ★ 8 ↺

                                                                          [?]Anthony » 🌐
                                                                          @abucci@buc.ci

                                                                          Sorry. Your Car Will Never Drive You Around.

                                                                          Brutal takedown of the bullsh&*# that is "self-driving cars": https://www.youtube.com/watch?v=2DOd4RLNeT4

                                                                          It's a long video but frankly you can get the gist of most of it by scanning over the chapter titles. "Hitting Fake Children". "Hitting Real Children". "FSD Expectations" is a long list of the various lies Tesla has told about "full self driving" Teslas. Also the "Openpilot" chapter has a picture of Elon Musk's face as a dartboard.

                                                                          The endless hype and full-on lies of the self-driving-car con from roughly 2016 to 2020 resemble the hype about AGI going on right now. If you've been in this industry long enough and have been honest with yourself about it you've seen all this before. Until something significant changes we really ought to view anything coming out of the tech sector with deep suspicion (https://bucci.onl/notes/Another-AI-Hype-Cycle).


                                                                            1 ★ 0 ↺

                                                                            [?]Anthony » 🌐
                                                                            @abucci@buc.ci

                                                                            The self-driving car industry is going to be a $200 billion-ish corporate experiment that demonstrates driving a car is AI-complete, which I could have told you decades ago for many orders of magnitude less investment.


                                                                              1 ★ 2 ↺
                                                                              planetscape boosted

                                                                              [?]Anthony » 🌐
                                                                              @abucci@buc.ci

                                                                              A computer's successful performance is often taken as evidence that it or its programmer understand a theory of its performance. Such an inference is unnecessary and, more often than not, is quite mistaken. The relationship between understanding and writing thus remains as problematical for computer programming as it has always been for writing in any other form.
                                                                              --Joseph Weizenbaum, Computer Power and Human Reason


                                                                                9 ★ 7 ↺

                                                                                [?]Anthony » 🌐
                                                                                @abucci@buc.ci

                                                                                Regarding the last boost, the link to the preprint is https://osf.io/preprints/psyarxiv/4cbuv . The link in the post seems to be broken right now.

                                                                                This is a very nice talk (given by @Iris@scholar.social ) and paper arguing, among other things, that the claims we hear about artificial general intelligence being right around the corner are bunk because AGI is a computationally intractable learning problem. That is, given a bunch of real world data and a computer program that can potentially learn human caliber cognitive abilities from that data, the complexity (roughly speaking, runtime) of this program is at least in the NP-hard class. It reminds me of some of the results in PAC learning, which have a similar flavor (this is Leslie Valiant's probably approximately correct framework I'm referring to).

                                                                                Often, problems in the NP-hard class take so long to solve the heat death of the universe will likely occur before we solve them. There are nuances to that, but I think the compelling point is that anyone making claims about AGI being around the corner is making extraordinary claims and they'd better bring some damn good proof because all indications say that they're full of it. It's time to put this hype to rest.
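                                                                                For a rough sense of the "heat death of the universe" scale, here is a back-of-the-envelope sketch (my own numbers, not from the paper): brute-forcing a problem over n binary choices takes 2^n steps, and even an exaflop-scale machine falls absurdly behind as n grows.

```python
# Back-of-the-envelope: years needed to brute-force 2**n steps
# on a machine doing 1e18 operations per second.
OPS_PER_SECOND = 1e18      # generous exaflop-scale assumption
SECONDS_PER_YEAR = 3.15e7

for n in (50, 100, 300, 1000):
    years = 2.0 ** n / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n = {n:4d}: ~{years:.2e} years")

# n = 300 already needs ~1e64 years; estimates of the universe's heat
# death sit around 1e100 years, which the n = 1000 case blows past.
```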


                                                                                    3 ★ 3 ↺
                                                                                    St. Chris boosted

                                                                                    [?]Anthony » 🌐
                                                                                    @abucci@buc.ci

                                                                                    This post I'm replying to is a great example of the kind of irrational, manipulative rhetoric that surrounds AI and AGI these days. Unscientific/unreasonable notions like "doom" intermixed with vague threats. Threats such as it's "vital" we talk about this--or else! We risk "oversimplifying" if we don't entertain this stuff. It's "key" we talk about it--or else! Never mind that any of us could invent supernatural forces to be terrified of, and use exactly the same "reasoning" to demand that everyone pay attention and worry. We might as well call this "Artificial General Cthulhu" at that point.

                                                                                    If you haven't been following it, the cryptocurrency grift is full of the same style of expression. It's the future of finance! You have to get into crypto--or else! This is how con artists and snake oil salesmen talk, not how reasonable people talk. The exaggerated risks and benefits are used to manipulate mainstream discussion in ways that distract us from what's happening (a financial fraud similar to a Ponzi scheme in the case of cryptocurrency, or a run-of-the-mill tech hype cycle in the case of AI).

                                                                                    I'm not going to indulge a discussion of AGI on these terms, but for the sake of anyone else who might be reading, here are a few points to consider:
                                                                                    - P(doom) isn't hyperbolic. It's impossible to define, in the same way that the number of angels dancing on the head of a pin is impossible to define. It might be entertaining to think about but it has no place in a serious discussion with real-world consequences. Calling it hyperbolic gives it credit it does not deserve
                                                                                    - For all practical purposes, "AI safety" == minimizing P(doom) and similarly undefinable quantities. Furthermore, this term was invented in response to the legitimate work of AI ethics folks who were concerned about the real, immediate harms AI is causing. "AI safety" is a misdirection
                                                                                    - It is reasonable and arguably necessary to reject and dismiss notions like P(doom) from serious discussions of topics like policy, for the same reasons we reject other unscientific, supernatural, or otherwise unsupportable nonsense
                                                                                    - This isn't skepticism. One can be skeptical about a scientific notion that doesn't have strong supporting evidence. One can be skeptical of a logical or philosophical claim that hasn't been fully elaborated. One can be skeptical that Howard The Duck is the worst (and therefore best) movie ever made. P(doom) is nothing like any of these. It's not a good shorthand or model of an actual scientific, logical, or philosophical concept. It's not fun to think about. Skepticism does not apply: I reject it fully.

                                                                                    cc: @pirateagi@botsin.space

                                                                                      4 ★ 1 ↺
                                                                                      The Luddite boosted

                                                                                      [?]Anthony » 🌐
                                                                                      @abucci@buc.ci

                                                                                      You cannot falsify statements involving "P(doom)", by the very definition of the word "doom".

                                                                                      If you're not familiar, there are zealots, some of whom have been given a national platform by US Senator Chuck Schumer (!), who use "P(doom)" as a shorthand for the probability that artificial intelligence will rise up and cause human extinction. This is peak scientism, wherein one uses scientific-sounding language (like "probability") in support of what amounts to a religious belief. The wide use of phrases like this is why I don't hesitate to use words like "zealot" to describe such folks.

                                                                                      It's exactly the same reasoning error that lies behind trying to count how many angels can dance on the head of a pin.


                                                                                        2 ★ 2 ↺

                                                                                        [?]Anthony » 🌐
                                                                                        @abucci@buc.ci

                                                                                        P(doom), the "probability" that AI or AGI will doom humanity, is a quantity that AI/AGI zealots seem to care a lot about. It is the quintessential example of the reasoning error that ecological rationality calls out. There is no way to quantify the likelihood of "doom", no matter how you define that word, and it's pure nonsense to try or pretend you have. Doom is a large world phenomenon. The people credited with inventing the frameworks and techniques that allow you to even think in terms of P(doom), like Leonard Savage, explicitly called out just this sort of application as preposterous.

                                                                                        Nevertheless, US Senate majority leader Chuck Schumer invited a bunch of tech CEOs and technologists, among whom numbered many AGI and similar kinds of zealots, to opine on their personal assessments of P(doom) in a legitimate hearing in front of the US Congress (there's good reporting on this here: https://www.techpolicy.press/us-senate-ai-insight-forum-tracker/ ).

                                                                                        I lack the words to express what I feel every time I'm reminded of this. Not good things.

                                                                                          2 ★ 0 ↺

                                                                                          [?]Anthony » 🌐
                                                                                          @abucci@buc.ci

                                                                                          I've spent some time over the last week or so reading up on ecological rationality, which was new to me. I have to thank @yoginho@spore.social for this, since I encountered the notion through a preprint he and coauthors recently posted. I wanted to put some notes here. I should probably have made this a blog post but 🤷 I feel a little silly for not having encountered ecological rationality before, since the decision problems inherent in coevolutionary algorithms--my specialty--closely resemble the decision problems ecological rationalists discuss.

                                                                                          As I understand it, the basic notion is that many/most/all? decision problems worth solving are situated in so-called large worlds, a term from Leonard Savage (1) referring, roughly speaking, to contexts with uncertainties that cannot be quantified. A very common approach to such problems is to first replace them with small world proxies, apply straightforward statistical methods to the proxy (since those work in small worlds), and then presume that whatever the statistical method outputs is applicable in the large world. It’s rare to encounter a piece of work in that vein that acknowledges the translation going on, let alone that this translation might be unjustified (2). Ecological rationality avoids making this move, and instead grapples with large worlds directly.

                                                                                          Ecological rationality emerged from the broader tradition of bounded rationality, and in particular the work of Herbert Simon on that topic (3). The work in this vein tends to focus on algorithms and heuristics and how well they function, though there are folks, including Henry Brighton and Peter Todd, who work on formulating a theory of rationality that includes ecological rationality, which would insulate against critiques that though heuristics might be useful they're not theoretically grounded and therefore their use is unjustified at that level.

                                                                                          Bounded rationality is familiar to me. I read quite a bit of Simon’s work in graduate school, and much of what I did with coevolutionary algorithms implicitly has a bounded rationality component, though I didn’t explicitly frame it that way (4). I like the term “ecological rationality” better. “Bounded” implies there’s a veil over unbounded rationality that could, at least in principle, be pierced, given enough effort (5). “Ecological” brings a wholly different set of associations, focusing more directly on the character of the problems rationality is meant to be addressing, namely how to successfully exist in a complex and dynamic environment.

                                                                                          One of the more fascinating findings from ecological rationality work is the “less is more” phenomenon. Basically, under some conditions, not using all the data available to you produces comparable or even better results than using all of it. These findings fly directly in the face of the prejudices of the “big data”/“surveillance capitalism” era we’re currently in. They are evidence that you don’t need big data, and you don’t need wide surveillance, to achieve your goals. They’re evidence that massive compute power applied to massive data sets can produce outcomes that are worse at the task they’re intended for than much simpler, easier to understand, and less wasteful methods. This observation might sound counterintuitive. I think that reflects what Brighton (I think?) termed “the bias bias” (6) but also reflects just how normalized this class of reasoning error has become.
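                                                                                          A toy numerical illustration of that "less is more" effect (my own construction, not taken from the ecological rationality literature; the choice of one strong cue, ten cues, and fifteen noisy training points is arbitrary): with only a handful of noisy observations, a rule that uses a single strong cue can predict held-out data better than an ordinary least-squares fit that uses every cue.

```python
# "Less is more" toy: ignoring most cues can beat using them all
# when training data are few and noisy.
import numpy as np

rng = np.random.default_rng(0)
n_cues, n_train, n_test, noise = 10, 15, 1000, 2.0
true_w = np.array([3.0] + [0.1] * (n_cues - 1))   # one strong cue, many weak ones

def make_data(n):
    X = rng.normal(size=(n, n_cues))
    y = X @ true_w + rng.normal(scale=noise, size=n)
    return X, y

errs_full, errs_one = [], []
for _ in range(200):
    X, y = make_data(n_train)
    Xt, yt = make_data(n_test)
    w_full, *_ = np.linalg.lstsq(X, y, rcond=None)   # use every cue
    w_one = np.zeros(n_cues)
    w_one[0] = (X[:, 0] @ y) / (X[:, 0] @ X[:, 0])   # use the single best cue
    errs_full.append(np.mean((Xt @ w_full - yt) ** 2))
    errs_one.append(np.mean((Xt @ w_one - yt) ** 2))

print("all-cues test MSE:  ", np.mean(errs_full))
print("single-cue test MSE:", np.mean(errs_one))
```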

                                                                                          I haven’t yet squared these ideas with so-called “double descent” and the relatively recent (2018-ish) consensus in machine learning that the bias-variance tradeoff fails to account for why deep learning methods are successful. The bias-variance tradeoff factors into the arguments in favor of ecological rationality, so it is an important consideration. My first take: double descent doubles down on both the large→small world conversion error as well as the bias bias. It treats training data--which suffers from the problems that arise from the large→small world conversion--as a gold standard. It hypertrains a very low-bias model, which arises from the bias bias. It then tries to apply the trained model in the large world. What I haven’t fully squared in my head is where the issues manifest. Very low bias models like deep neural networks trained on massive datasets with double descent achieve ground-breaking levels of performance on a whole slew of tasks. What’s wrong with that? We might be getting into a regime analogous to photorealistic images: we know the real world is not pixelated, but once pixel density is high enough our eyes can’t tell there are pixels so what difference does that make?



                                                                                          (1) Savage’s 1954 The Foundations of Statistics is arguably the single biggest reason for the rise and spread of Bayesian statistics.
                                                                                          (2) The procedure is basically to throw away the actually hard parts of the problem, solve an oversimplified version, and then pretend that the solution to the oversimplified problem is a solution to the original problem whose hard parts were ignored. Savage himself called out this pitfall, but that doesn’t seem to stop self-described Bayesians from falling right into it. This class of reasoning error has become so normalized it’s hardly noticed or commented on.
                                                                                          (3) The same Simon who wrote The Sciences of the Artificial and collaborated with Alan Newell on Logic Theorist and the notion of general problem solvers.
                                                                                          (4) I did do some collaborative work on Bayes optimal coevolutionary algorithms whose optimality properties stem directly from the small worlds in which they operate. We took pains to spell out our assumptions in the papers we wrote about it so I hope the smallness is clear enough in that work.
                                                                                          (5) I have zero doubt there are folks who believe AGI or ASI will render concerns about bounded rationality moot, what with its presumed god-like reasoning power and instant access to every piece of information it could ever need.
                                                                                          (6) “Bias bias” refers to a bias in thinking that prefers reducing the statistical bias of a method--usually by making it more complicated and more “true to life”--over other ways of making a method more successful. Simple methods applied to small and focused datasets can outperform complex methods applied to large sprawling datasets especially when deep uncertainties abound. Bias bias leads researchers to prefer ever-increasing complexity in models in spite of this.

                                                                                            7 ★ 6 ↺

                                                                                            [?]Anthony » 🌐
                                                                                            @abucci@buc.ci

                                                                                            I found this essay to be a nice treatment, with a good bibliography, of the deeply flawed basis of longtermism: https://www.radicalphilosophy.com/commentary/the-toxic-ideology-of-longtermism . The essay defines and situates longtermism with respect to effective altruism. If you're not already familiar with those terms it contains enough background to get you up to speed.

                                                                                            One thing that stood out for me, which hadn't really sunk in quite the same way for me before: longtermists cite things like runaway AI and bio-engineered pathogens as so-called "existential risks" that might cause human extinction, but they downplay environmental degradation as a non-existential risk. Yet, experience is the reverse of this: we have exactly zero examples of AI causing the extinction of a species and few/zero examples of a bio-engineered pathogen causing a species extinction (1), whereas we have piles and piles of examples of species extinctions caused by environmental changes. In fact, we have loads of extinctions we ourselves caused via our alterations of Earth's environment in only the last few decades! We don't even have to resort to the fossil record, which includes many more examples, to make the case; we can look at recent, carefully-documented studies using modern techniques.

                                                                                            Of the many flaws with longtermism, which the essay goes to pains to spell out, this one really nags at me. Longtermism being a goofy worldview held by wealthy and powerful people is concerning enough; the fact that its primary proponents say things running directly opposite to reality makes it very dangerous, in my view. I think this is a tell.

                                                                                            (1) I'm hedging there because I am not knowledgeable enough to say with certainty whether anyone's ever engineered a pathogen that did cause an extinction event. I can't imagine something like this happening often if it has, though.

                                                                                              3 ★ 4 ↺
                                                                                              planetscape boosted

                                                                                              [?]Anthony » 🌐
                                                                                              @abucci@buc.ci

                                                                                              From https://danmcquillan.org/house_of_lords.html
                                                                                              The greatest risk is that large language models act as a form of ‘shock doctrine’, where the sense of world-changing urgency that accompanies them is used to transform social systems without democratic debate.

                                                                                              From https://danmcquillan.org/ai_thatcherism.html
                                                                                              One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI's shoddy emulation of real tasks as an excuse to trim their workforce.

                                                                                              Same source:
                                                                                              Thanks to its insatiable appetite for data, current AI is uneconomic without an outsourced global workforce to label the data and expunge the toxic bits, all for a few dollars a day. Like the fast fashion industry, AI is underpinned by sweatshop labour.

                                                                                              Dan McQuillan (@danmcquillan@kolektiva.social) has been excellent on AI.


                                                                                                0 ★ 0 ↺

                                                                                                [?]Anthony » 🌐
                                                                                                @abucci@buc.ci

                                                                                                AGI, or "Actual General Intelligence", has not yet been discovered. We ought to begin the search for it immediately.