buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
Mars does not have a magnetosphere. Any discussion of humans ever settling the red planet can stop right there, but of course it never does. From https://defector.com/neither-elon-musk-nor-anybody-else-will-ever-colonize-mars...
The South Pole is around 2,800 meters above sea level, and like everywhere else on Earth around 44 million miles closer to the sun than any point on Mars. It sits deep down inside the nutritious atmosphere of a planet teeming with native life. Compared to the very most hospitable place on Mars it is an unimaginably fertile Eden. Here is a list of the plant-life that grows there: Nothing. Here is a list of all the animals that reproduce there: None.
Life on earth writ large, the grand network of life, is a greater and more dynamic terraforming engine than any person could ever conceive. It has been operating ceaselessly for several billions of years. It has not yet terraformed the South Pole or the summit of Mount Everest. On what type of timeframe were you imagining that the shoebox of lichen you send to Mars was going to transform Frozen Airless Radioactive Desert Hell into a place where people could grow wheat?
If you're not familiar, there are zealots, some of whom have been given a national platform by US Senator Chuck Schumer (!), who use "P(doom)" as a shorthand for the probability that artificial intelligence will rise up and cause human extinction. This is peak scientism, wherein one uses scientific-sounding language (like "probability") in support of what amounts to a religious belief. The wide use of phrases like this is why I don't hesitate to use words like "zealot" to describe such folks.
It's exactly the same reasoning error that lies behind trying to count how many angels can dance on the head of a pin.
One thing that stood out for me, which hadn't really sunk in quite the same way for me before: longtermists cite things like runaway #AI and bio-engineered pathogens as so-called "existential risks" that might cause human extinction, but they downplay environmental degradation as a non-existential risk. Yet, experience is the reverse of this: we have exactly zero examples of AI causing the extinction of a species and few/zero examples of a bio-engineered pathogen causing a species extinction (1), whereas we have piles and piles of examples of species extinctions caused by environmental changes. In fact, we have loads of extinctions we ourselves caused via our alterations of Earth's environment in only the last few decades! We don't even have to resort to the fossil record, which includes many more examples, to make the case; we can look at recent, carefully-documented studies using modern techniques.
Of the many flaws with #longtermism, which the essay goes to pains to spell out, this one really nags at me. Longtermism being a goofy worldview held by wealthy and powerful people is concerning enough; the fact that its primary proponents say things running directly opposite to reality makes it very dangerous, in my view. I think this is a tell.
#TESCREAL #AI #AGI #EA #longtermism
(1) I'm hedging there because I am not knowledgeable enough to say with certainty whether anyone's ever engineered a pathogen that did cause an extinction event. I can't imagine something like this happening often if it has, though.
Iain McGilchrist, in The Master and His Emissary, offers many clinical examples of people with debilitating (or temporarily induced) impairments to the right brain hemisphere failing to perceive embodiedness. To these people, bodies are just collections of parts, sometimes wooden and lifeless. The world to them is abstract, floating, as it were, out of any physical context. Thought within these right-impaired brains systematically favors certain, linear constructs extending indefinitely—as opposed to cyclical, tangled, or ambiguous realities. Death is unacceptable: does not compute.
The techno-optimist cheerleader, therefore, not only reminds me of a dangerous ideological zealot—ready to eliminate any number of species for the next awesome gadget—but also of a tragically impaired creature missing half his brain. From https://dothemath.ucsd.edu/2023/10/our-ugly-magnificence/
#TESCREAL #longtermism #effectivealtruism #effectiveaccelerationism #eacc #technooptimism