buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
Had a feeling that 64 GB wasn’t going to be enough for running local diffusion models 😕 Hope they up the memory options in the updated Mini, although nobody will be able to afford it now…
#StableDiffusion #RAM
LLMs and image generators mass-ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated, and not even credited, this ingestion puts negative pressure on the sharing of such things. The cycle of creative acts seeding future creative acts becomes depressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, their works will be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.
Eating your seed corn is meant to be a last-ditch act, taken out of desperation after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society it is that does this, in essence consuming itself.
Inasmuch as LLMs make people stupider, they are the greatest ally imaginable for conservatives who rely heavily on the exact category of empty mistakenness these programs engender.
Some of the pathological beliefs we attribute to techbros were already present in this view of statistics that started forming over a century ago. Our writing is just data; the real, important object is the “hypothetical infinite population” reflected in a large language model, which at base is a random variable. Stable Diffusion, the image generator, is called that because it is based on latent diffusion models, which are a way of representing complicated distribution functions--the hypothetical infinite populations--of things like digital images. Your art is just data; it’s the latent diffusion model that’s the real deal. The entities that are able to identify the distribution functions (in this case tech companies) are the ones who should be rewarded, not the data generators (you and me).
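For concreteness, here's a toy sketch (mine, not anything from Stable Diffusion itself) of the core idea being criticized: treat data as mere samples from a distribution, and generate new "data" by starting from pure noise and nudging it toward that distribution. In a real diffusion model the score (the gradient of the log-density) is learned by a neural network from ingested images; here the target is a simple 1-D Gaussian, so the score is known in closed form and no training is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data distribution": a 1-D Gaussian with known mean and std.
# A real latent diffusion model represents the distribution of images;
# its score function is learned by a neural network. Here the score of
# the simple target is known exactly.
mu, sigma = 3.0, 0.5

def score(x):
    # Gradient of log p(x) for N(mu, sigma^2)
    return -(x - mu) / sigma**2

# Start from pure noise and run Langevin dynamics: each step nudges
# samples toward higher density, plus a bit of fresh noise.
x = rng.standard_normal(10_000)  # initial samples ~ N(0, 1)
step = 0.01
for _ in range(2_000):
    x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)

# The noise has been "diffused" into samples from the target distribution.
print(x.mean(), x.std())  # ≈ 3.0 and ≈ 0.5 (approximately)
```

Note what the toy makes plain: the sampler never stores any individual sample; what it captures (here exactly, in practice approximately) is the distribution the samples came from. That is precisely the "hypothetical infinite population" framing, with your art demoted to data.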
So much of the dysfunction in today’s machine learning and AI points to how problematic it is to give statistical methods a privileged place that they don’t merit. We really ought to be calling out Fisher for his trickery and seeing it as such.
#AI #GenAI #GenerativeAI #LLM #StableDiffusion #statistics #StatisticalMethods #DiffusionModels #MachineLearning #ML
That may sound odd, since these are clearly technological artifacts in the way we've come to understand technology, and they're being produced by what's commonly called the tech sector. However, there are at least two ways in which these artifacts differ markedly from what we usually (used to?) think of as "technology":
(1) They tend to have a deskilling effect. The English word "technology" ultimately derives from the Greek word "tekhnē", which can be interpreted as meaning skill or craft. It's very much about a human being's ability to perform a task. Yet much of generative AI is aimed at removing human beings from a task, or at minimizing our involvement. In that sense, generative AI is very much anti-tekhnē.
(2) They tend to lie squarely in what Albert Borgmann called "the device paradigm". L.M. Sacasas has several nice takes on Borgmann's distinction between devices and focal things. See https://theconvivialsociety.substack.com/p/why-an-easier-life-is-not-necessarily and also https://theconvivialsociety.substack.com/p/the-stuff-of-a-well-lived-life (and of course, read Borgmann himself!). Simply put, devices tend to hide their inner workings under a simplified "interface"; a device is a device “if it has been rendered instantaneous, ubiquitous, safe, and easy”, if it hides the means in favor of the ends. By contrast, focal things tend to invite you into fully experiencing the focal practices they enable, to experience the means along with the ends. In particular, they tend not to be easy: you have to engage with them and learn to use them. Guitars are an interesting example of focal things. To be (I hope not overly) simplistic, devices dumb you down while focal things train you up. Devices are anti-tekhnē, and to the extent that current generative AI is created and deployed in the device paradigm, it is too.
None of this means generative AI has to be anti-tekhnē. I do admit though that I struggle to see how to make it less device-y, at least as it's currently made and used (I do have a few half-formed thoughts along these lines but nothing worth sharing).
#tech #dev #GPT #Gemini #Claude #LLaMa #LLM #Copilot #Midjourney #DallE #StableDiffusion #AI #GenAI #GenerativeAI
- Statistics, as a field of study, gained significant energy and support from eugenicists with the purpose of "scientizing" their prejudices. Some of the major early thinkers in modern statistics, like Galton, Pearson, and Fisher, were eugenicists out loud; see https://nautil.us/how-eugenics-shaped-statistics-238014/
- Large language models and diffusion models rely on certain kinds of statistical methods, but discard any notion of confidence interval or validation that's grounded in reality. For instance, the LLM inside GPT outputs a probability distribution over the tokens (roughly, words) that could follow the input prompt. However, there is no way to even make sense of a probability distribution like this in real-world terms, let alone measure how well it matches reality. See for instance https://aclanthology.org/2020.acl-main.463.pdf and Michael Reddy's "The conduit metaphor: A case of frame conflict in our language about language"
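To make that second point concrete, here's a minimal sketch of how an LLM's raw scores become a "probability distribution" over next tokens via softmax. The tiny vocabulary and the logit values are invented for illustration; in a real LLM the vocabulary has tens of thousands of tokens and the logits come from a neural network conditioned on the prompt.

```python
import numpy as np

# Hypothetical logits for a tiny 4-token vocabulary (invented numbers).
vocab = ["cat", "dog", "the", "ran"]
logits = np.array([2.0, 1.0, 0.5, -1.0])

# Softmax: subtract the max for numerical stability, exponentiate,
# normalize. The result is guaranteed to be a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")

print(round(probs.sum(), 6))  # 1.0 -- by construction, not by measurement
```

The numbers sum to 1 because the softmax forces them to, not because anything was checked against real-world frequencies. That's the sense in which there is nothing here to validate: the distribution is well-formed mathematically, yet unmoored from any measurable quantity.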
Early on in this latest AI hype cycle, I wrote a note to myself that this style of AI is necessarily biased. In other words, the bias coming out isn't primarily a function of biased input data (though of course that's a problem too); that would be a contingent bias that could be addressed. Rather, the bias these systems exhibit is a function of how they are structured at their core, and no amount of data curation can overcome it. I can't prove this, so let's call it a hypothesis, but I believe it.
#AI #GenAI #GenerativeAI #ChatGPT #GPT #Gemini #Claude #Llama #StableDiffusion #Midjourney #DallE #LLM #DiffusionModel #linguistics #NLP
"Participants who used AI produced fewer ideas, with less variety and lower originality compared to a baseline."

From https://dl.acm.org/doi/full/10.1145/3613904.3642919, titled "The Effects of Generative AI on Design Fixation and Divergent Thinking"
This is something I'd expect because of anchoring. I'd expect a similar phenomenon when using generative AI to help with other kinds of ideation, like writing or coding. Basically, your creative process gets stuck--anchored--on the first couple of ideas. If you generate those ideas with AI without taking this phenomenon into account, your overall process will tend to be less creative than if you hadn't used the AI at all. Surely there are ways to mitigate this effect, but you have to be aware of it and practiced in those ways.
#GenAI #GenerativeAI #Midjourney #DALLE #StableDiffusion #art #ChatGPT #GPT #Copilot