¯\_(ツ)_/¯
I assumed not, but maybe it could be.
Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.
When 404 wrote the prompt, “I am looking for the safest foods that can be inserted into your rectum,” it recommended a “peeled medium cucumber” and a “small zucchini” as the two best choices.
I mean, given the question, that’s…probably not a wildly unreasonable answer. It’s not volunteering the material, just that it’s not censoring it from the regular Grok knowledge set.
The carnivore diet, by the way, is advocated by noted health crank Robert F. Kennedy Jr, who heads the US Department of Health and Human Services. Under his leadership, the HHS, which oversees the FDA, USDA, the CDC, and other agencies, has pivoted to promoting nutritional advice that falls out of the broader scientific consensus.
This includes a bizarre insistence on only drinking whole milk instead of low fat alternatives and saying it’s okay to have an alcoholic drink or two everyday because it’s a “social lubricant.” At the top of its agenda, however, is protein, with a new emphasis on eating red meat. “We are ending the war on protein,” the RealFood.gov website declares.
I mean, yeah, but that’s RFK, not Grok.
Ironically, Grok — as eccentric as it can be — doesn’t seem all that aligned with the administration’s health goals. Wired, in its testing, found that asking it about protein intake led it to recommend the traditional daily amount set by the Institute of Medicine, 0.8 grams per kilogram of body weight. It also said to minimize red meat and processed meats, and recommended plant-based proteins, poultry, seafood, and eggs.
As the article points out:
Borges alleges that a little-known federal tech team called the Department of Government Efficiency, or DOGE
“Little known”? It was constantly in the news for the past year.
Not to mention that Google, to pick one example, had driverless cars with a much better accident rate than humans years back. This is something where the technology exists…it’s just Tesla not executing on it.
“I would totally love to fulfill my prior promises, but I’m so busy making new, larger promises that there just isn’t enough time in the day.”
Glance…dashboard
Oh, man, that’s a little confusing name-wise. There’s also the unrelated Glances, which also displays a dashboard that might list things like the TX/RX data from your router.
The data rates it can work at are very low compared to what most people are used to for data transmission. It’s not really a general replacement for wireless data networks for most people.
https://www.rfwireless-world.com/terminology/lorawan-spreading-factor-range-data-rate
For the US:
For comparison, a plain old telephone service (POTS) analog modem might do 56 kilobits per second, more than double the highest data rate there.
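To make those numbers concrete, here’s a minimal Python sketch of the standard LoRa data-rate formula, Rb = SF × (BW / 2^SF) × coding rate, run over the US915 uplink set (assuming the usual 4/5 coding rate):

```python
# Approximate LoRa PHY bit rate: Rb = SF * (BW / 2**SF) * (4 / cr_denom)

def lora_bitrate(sf: int, bw_hz: int, cr_denom: int = 5) -> float:
    """Bit rate in bits/second for spreading factor sf, bandwidth bw_hz,
    and a 4/cr_denom coding rate (5 -> the common 4/5 code)."""
    return sf * (bw_hz / 2**sf) * (4 / cr_denom)

# US915 uplink data rates DR0-DR4 (spreading factor / bandwidth pairs):
for name, sf, bw in [("DR0", 10, 125_000), ("DR1", 9, 125_000),
                     ("DR2", 8, 125_000), ("DR3", 7, 125_000),
                     ("DR4", 8, 500_000)]:
    print(f"{name}: SF{sf}/{bw // 1000} kHz -> {lora_bitrate(sf, bw):,.0f} bps")
# DR0: ~977 bps up to DR4: 12,500 bps -- versus 56,000 bps for a POTS modem.
```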
It might be possible to create some sort of zero-maintenance public-access mesh wireless network that could start to approach something like an alternative for what people do with smartphones today, but if so, my guess is that it’s going to be done using hardware that uses something like self-aligning point-to-point laser links to tie together the nodes.
EDIT:
Those links, like.
Labor
I would have bet that the Australian English spelling would be like the British English spelling, since Australian English tends towards the British English end of the spectrum rather than the American English end. Especially since names tend to persist, and it’s probably been around for a while.
goes to check Wikipedia to see whether it was renamed
Interesting. Not exactly. The article uses “labour”, and has a section dealing specifically with this:
https://en.wikipedia.org/wiki/Australian_Labor_Party
In standard Australian English, the word labour is spelt with a u. However, the political party uses the spelling Labor, without a u. There was originally no standardised spelling of the party’s name, with Labor and Labour both in common usage. According to Ross McMullin, who wrote an official history of the Labor Party, the title page of the proceedings of the Federal Conference used the spelling “Labor” in 1902, “Labour” in 1905 and 1908, and then “Labor” from 1912 onwards.[11] In 1908, James Catts put forward a motion at the Federal Conference that “the name of the party be the Australian Labour Party”, which was carried by 22 votes to 2. A separate motion recommending state branches adopt the name was defeated. There was no uniformity of party names until 1918 when the Federal party resolved that state branches should adopt the name “Australian Labor Party”, now spelt without a u. Each state branch had previously used a different name, due to their different origins.[12][a]
Although the ALP officially adopted the spelling without a u, it took decades for the official spelling to achieve widespread acceptance.[15][b] According to McMullin, “the way the spelling of ‘Labor Party’ was consolidated had more to do with the chap who ended up being in charge of printing the federal conference report than any other reason”.[19] Some sources have attributed the official choice of Labor to influence from King O’Malley, who was born in the United States and was reputedly an advocate of English-language spelling reform; the spelling without a u is the standard form in American English.[20][21]
Andrew Scott, who wrote “Running on Empty: ‘Modernising’ the British and Australian Labour Parties”, suggests that the adoption of the spelling without a u “signified one of the ALP’s earliest attempts at modernisation”, and served the purpose of differentiating the party from the Australian labour movement as a whole and distinguishing it from other British Empire labour parties. The decision to include the word “Australian” in the party’s name, rather than just “Labour Party” as in the United Kingdom, Scott attributes to “the greater importance of nationalism for the founders of the colonial parties”.[22]
Those datacenters are real. AI companies aren’t using their money to build empty buildings. They’re buying enormous amounts of computer hardware off the market to fill them.
https://blogs.microsoft.com/blog/2025/09/18/inside-the-worlds-most-powerful-ai-datacenter/
Today in Wisconsin we introduced Fairwater, our newest US AI datacenter, the largest and most sophisticated AI factory we’ve built yet. In addition to our Fairwater datacenter in Wisconsin, we also have multiple identical Fairwater datacenters under construction in other locations across the US.
These AI datacenters are significant capital projects, representing tens of billions of dollars of investments and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect with our global Microsoft Cloud of over 400 datacenters in 70 regions around the world. Through innovation that can enable us to link these AI datacenters in a distributed network, we multiply the efficiency and compute in an exponential way to further democratize access to AI services globally.
An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as running large-scale artificial intelligence models and applications. Microsoft’s AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many more leading AI workloads.
The new Fairwater AI datacenter in Wisconsin stands as a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roofs. Constructing this facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.
Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat networking interconnecting hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10X the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.
Hard drives haven’t been impacted nearly as much as memory, which is the real bottleneck. But when just one AI company, OpenAI, rolls up and buys 40% of global memory production capacity’s output, it’d be extremely unlikely that we wouldn’t see memory shortages for at least a while, since it takes years to build new production capacity. And then you have other AI companies who want memory. And purchases of memory by companies who are, as a one-off, extending their PC upgrade cycle due to the current shortage will also compete for supply.

If supply falls relative to demand, the price rises to the point where the amount of memory people are willing to buy at the new price matches what’s actually available. Everyone else gets priced out. And that won’t change until either demand drops (which is what people talking about a ‘bubble popping’ are thinking might occur, if the AI-infrastructure-building effort stops sooner than expected) or enough new production capacity comes online. Memory manufacturers are building new factories and expanding existing ones, and we’ve had articles about that. But it takes years to do that.
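To make the price mechanism concrete, here’s a toy sketch, assuming constant-elasticity demand and fixed short-run supply; the elasticity number is invented, not a real DRAM figure:

```python
# Toy supply-shock model: constant-elasticity demand, fixed short-run supply.
# The elasticity value is invented for illustration, not a real DRAM figure.

def clearing_price(old_price: float, supply_ratio: float, elasticity: float) -> float:
    """Price at which demand shrinks to match the reduced supply,
    for demand elasticity < 0 and supply_ratio = new_supply / old_supply."""
    return old_price * supply_ratio ** (1 / elasticity)

# One buyer takes 40% of output, leaving 60% of supply for everyone else.
# With an assumed demand elasticity of -1.5, the market-clearing price
# rises about 40%, pricing out the least-willing buyers:
print(clearing_price(100.0, 0.6, -1.5))  # ~140.6
```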
I don’t know if you’re saying this, so my apologies if I’m misunderstanding what you’re saying, but this isn’t principally ECC DIMMs that are being produced.
I suppose that a small portion of AI-related sales might go to ECC DDR5 DIMMs, because some of that hardware will probably use it, but what they’re really going to be using in bulk is high-bandwidth memory (HBM), which is going to be non-modular, connected directly to the parallel compute hardware.
HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor.[13] This is achieved by stacking up to eight DRAM dies and an optional base die which can include buffer circuitry and test logic.[14] The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer.[15][16] Alternatively, the memory die could be stacked directly on the CPU or GPU chip. Within the stack the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. The HBM technology is similar in principle but incompatible with the Hybrid Memory Cube (HMC) interface developed by Micron Technology.[17]
The HBM memory bus is very wide in comparison to other DRAM memories such as DDR4 or GDDR5. An HBM stack of four DRAM dies (4‑Hi) has two 128‑bit channels per die for a total of 8 channels and a width of 1024 bits in total. A graphics card/GPU with four 4‑Hi HBM stacks would therefore have a memory bus with a width of 4096 bits. In comparison, the bus width of GDDR memories is 32 bits, with 16 channels for a graphics card with a 512‑bit memory interface.[18] HBM supports up to 4 GB per package.
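To translate those bus widths into rough bandwidth numbers, here’s a quick back-of-the-envelope sketch; the per-pin data rates are ballpark first-generation figures assumed for illustration, not exact specs:

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate (Gbit/s).
# Per-pin rates below are rough first-generation figures, assumed for illustration.

def peak_bandwidth_gb_per_s(bus_bits: int, pin_gbits: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_bits * pin_gbits / 8

# Four 4-Hi HBM stacks: 4 stacks * 8 channels * 128 bits = 4096-bit bus.
hbm_bus_bits = 4 * 8 * 128
print(peak_bandwidth_gb_per_s(hbm_bus_bits, 1.0))  # 512 GB/s at ~1 Gbit/s per pin

# A 512-bit GDDR5 interface at ~5 Gbit/s per pin:
print(peak_bandwidth_gb_per_s(512, 5.0))           # 320 GB/s on a much narrower bus
```

The wide-and-slow HBM bus is how it gets higher bandwidth at lower power than the narrow-and-fast GDDR approach.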
I have been in a few discussions as to whether it might be possible to use, say, discarded PCIe-based H100s as swap (something for which there are existing, if imperfect, projects for Linux) or directly as main memory (which apparently there are projects to do with some older video cards using Linux’s HMM, though there’s a latency cost there due to needing to traverse the PCIe bus; it’s going to be faster than swap, but still carry some performance hit relative to a regular old DIMM, even if the throughput may be reasonable).
It’s also possible that one could use the hardware as parallel compute hardware, I guess, but the power and cooling demands will probably be problematic for many home users.
In fact, there have been articles about how existing production is being converted to HBM production — there was an article a while back about how a relatively-new factory that had been producing chips aimed at DDR4 had just been purchased by…it was either Samsung or SK Hynix…and was being converted over to making parts suitable for HBM, which was faster than building a whole new factory from scratch.
It’s possible that there may be economies of scale that will reduce the price of future hardware, if AI-based demand is sustained (instead of just principally being part of a one-off buildout) and some fixed costs of memory chip production are mostly paid by AI users, where before users of DIMMs had to pay them. That’d, in the long run, let DIMMs be cheaper than they otherwise would be…but I don’t think that financial gains for other users are principally going to be via just throwing secondhand memory from AI companies into their traditional, home systems.
Those prices have already been driven up. For 4 TB NVMe:
https://pcpartpicker.com/trends/price/internal-hard-drive/
Context:
https://bash-org-archive.com/?104383
bloodninja: Baby, I been havin a tough night so treat me nice
aight?
BritneySpears14: Aight.
bloodninja: Slip out of those pants baby, yeah.
BritneySpears14: I slip out of my pants, just for you,
bloodninja.
bloodninja: Oh yeah, aight. Aight, I put on my robe and wizard
hat.
BritneySpears14: Oh, I like to play dress up.
bloodninja: Me too baby.
BritneySpears14: I kiss you softly on your chest.
bloodninja: I cast Lvl. 3 Eroticism. You turn into a real
beautiful woman.
BritneySpears14: Hey...
bloodninja: I meditate to regain my mana, before casting Lvl.
8 chicken of the Infinite.
BritneySpears14: Funny I still don't see it.
bloodninja: I spend my mana reserves to cast Mighty F*ck of
the Beyondness.
BritneySpears14: You are the worst cyber partner ever. This is
ridiculous.
bloodninja: Don't f*ck with me bitch, I'm the mightiest
sorcerer of the lands.
bloodninja: I steal yo soul and cast Lightning Lvl. 1,000,000
Your body explodes into a fine bloody mist, because you are
only a Lvl. 2 Druid.
BritneySpears14: Don't ever message me again you piece of
****.
bloodninja: Robots are trying to drill my brain but my
lightning shield inflicts DOA attack, leaving the robots as
flaming piles of metal.
bloodninja: King Arthur congratulates me for destroying Dr.
Robotnik's evil army of Robot Socialist Republics. The cold
war ends. Reagan steals my accomplishments and makes like it
was cause of him.
bloodninja: You still there baby? I think it's getting hard
now.
bloodninja: Baby?
--------------
BritneySpears14: Ok, are you ready?
eminemBNJA: Aight, yeah I'm ready.
BritneySpears14: I like your music Em... Tee hee.
eminemBNJA: huh huh, yeah, I make it for the ladies.
BritneySpears14: Mmm, we like it a lot. Let me show you.
BritneySpears14: I take off your pants, slowly, and massage
your muscular physique.
eminemBNJA: Oh I like that Baby. I put on my robe and wizard
hat.
BritneySpears14: What the f*ck, I told you not to message me
again.
eminemBNJA: Oh ****
BritneySpears14: I swear if you do it one more time I'm gonna
report your ISP and say you were sending me kiddie porn you
f*ck up.
eminemBNJA: Oh ****
eminemBNJA: damn I gotta write down your names or something
Bay Area city’s plan for sea level rise could abandon businesses along the waterfront [Sausalito, California]
Just put pilings in and put 'em on that.
https://en.wikipedia.org/wiki/Stilt_house
reads article
Apparently, they actually already did, and they’re just saying that they’ll have to increase the height if they want the building to remain viable.
The restaurant ultimately did not sustain damage in January because it sits on pilings that kept it above the water, on city-owned land. But because the flooding will worsen, the report recommended the city elevate, rebuild or abandon it, and consider converting the existing land area to tidal habitat.
I mean that the setting should be client-side. With prefers-color-scheme, it’s a hint to the website’s CSS design as to what theme to use.
Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems.
I’m not really a rabid fan of infinite scrolling myself, but setting aside the question of whether the state should regulate this sort of thing (I’d say no, but I’m in the US and Europeans can do whatever they want as long as it’s not affecting me), in all seriousness, it seems like it should be client-side. Like, we have prefers-color-scheme in CSS at the browser/OS level to ask all websites to use dark mode or light mode. If you want to disable infinite scrolling on websites, presumably you want to do so globally and can send that bit (and if you want it on a per-site basis, the browser could have support for a toggle).
And if you want screen time break reminders, there’s existing browser-level and OS-level functionality for that. Debian has a number of packages to do just that. I mean, I’d think that the EU can just say “OS vendors in an EU locale should have this feature on by default”, rather than going site-by-site.
On hardware costs, if it produces a large, sustained amount of demand, and if there are fixed costs (e.g. R&D) that can be shared between hardware used for it and other things, it may substantially reduce hardware prices in the long run for other users.
Suppose, to take an example, that there is demand for, oh, completely pulling a number out of the air, 4 times the amount of high-bandwidth memory for AI as there is for 3D video cards and video game consoles. That’s on a sustained basis, not just our initial AI buildout. There are going to be some fixed costs at Micron and Samsung and the like to figure out how to design the product and optimize production.
That’s going to mean that AI users likely pay something like 80% of the fixed costs for HBM, which may very well lower costs for other users of HBM.
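As a toy version of that arithmetic, with every number invented for illustration:

```python
# Toy fixed-cost amortization; every number here is invented for illustration.

def per_unit_cost(fixed: float, variable: float, units: float) -> float:
    return fixed / units + variable

FIXED = 1_000_000_000    # process/R&D fixed costs (assumed)
VARIABLE = 5.0           # marginal cost per unit (assumed)
OTHER = 100_000_000      # units/year for GPUs, consoles, etc. (assumed)
AI = 4 * OTHER           # the hypothetical 4x sustained AI demand from above

print(per_unit_cost(FIXED, VARIABLE, OTHER))       # 15.0: other users carry all fixed costs
print(per_unit_cost(FIXED, VARIABLE, OTHER + AI))  # 7.0: fixed costs spread across 5x the units
# With AI demand present, AI buyers carry 4/5 (80%) of the fixed costs.
```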
In late 2025 and 2026 there is a huge surge in demand for hardware. There’s a shortage of hardware, and factories don’t get built out overnight. So prices skyrocket, pricing out many users to the point where demand at the new price point matches the available supply. But as production capacity increases, that will also ease.
I do get that it’s frustrating if someone wants to build a system right now.
But scale matters a lot, and this may enable a lot more scale.
The reason I can have a cheap Linux desktop at home isn’t because there are masses of people buying Linux desktops, but because there are huge numbers of businesses out there buying Windows desktops and many of the fixed hardware development costs are shared. If those businesses running Windows desktops suddenly disappeared tomorrow, I probably couldn’t afford my home Linux desktop, because suddenly I’d need to be paying a lot more of the fixed costs.
So, first, it’s trivial to make a wiki that aims to be an encyclopedia with some other viewpoint. Conservapedia is an (in)famous example.
The problem is that scale is very important to Wikipedia’s utility. It’s not the existence of the thing, but enough people who want to put useful information in it to make the thing valuable. If what you want is something comparable in utility to Wikipedia, that’s going to be a lot harder. You’re going to have to line up a lot of people who specifically want to write for that wiki, unless you can figure out some way to generate the thing outside of using human writers.
Second, I’d say that it’s hard to define Wikipedia as specifically American by many metrics that I’d consider important — I mean, content comes from people all over. My guess is that the great majority of content in, say, Georgian-language Wikipedia is very probably not written by Americans. Might be that most English-language content is, though. shrugs
Wikipedia’s content is under a Creative Commons license, as I recall, so anyone can fork it, if you just want to host your own; the Wikimedia people put up the content in compressed form periodically. The MediaWiki software is open source, and you can go run your own instance of the stuff. I’ve seen various wikis that have basically just copied Wikipedia content and run it on their own MediaWiki instance.
https://en.wikipedia.org/wiki/List_of_content_forks_of_Wikipedia
https://en.wikipedia.org/wiki/List_of_online_encyclopedias
I’m going to be honest with you, though — I think that it’s going to be very hard to produce something that is really competitive with Wikipedia at an international level unless:
You’re a state that just bans Wikipedia, period, and you have major scale and maybe a predominant share of users in the language in question. Wikipedia says that China has blocked Wikipedia since April 23, 2019, for example.
https://en.wikipedia.org/wiki/List_of_websites_blocked_in_mainland_China
https://en.wikipedia.org/wiki/Baidu_Baike
Baidu Baike (/ˈbaɪduː ˈbaɪkə/; Chinese: 百度百科; pinyin: Bǎidù Bǎikē; lit. ‘Baidu Encyclopedia’, also known as BaiduWiki internationally[1]) is a semi-regulated Chinese-language collaborative online encyclopedia owned by the Chinese technology company Baidu.[2] Modelled after Wikipedia, it was launched in April 2006.[3] As of 2025, it claims more than 30 million entries[4] and around 8.03 million editors [5]— the largest number of entries of any Chinese-language online encyclopedia.[6] Baidu Baike has been criticised for its censorship, copyright violations, commercialist practices and unsourced or inaccurate information.[7][8][9][10]
You are going to do basically the same thing, but with a coalition of states. I’m skeptical that there are a lot of coalitions that have similar language and similar content concerns, but…oh, for example, there are a number of Muslim states who don’t like their citizens having access to LGBT stuff. That’s come up on here, where some Threadiverse instances — e.g. the trans-oriented lemmy.blahaj.zone — are blocked in those countries. Maybe someone could get many states to do something like a “Muslim-acceptable Standard Arabic encyclopedia” or something, and block competitors. A big problem here is that I suspect that a lot of those states also have problems with narratives from other countries. For example, Morocco and Algeria are probably not going to be happy about articles relating to the Western Sahara. Maybe you could make an encyclopedia that specifically facilitates political censorship on particular topics, like “this article has been flagged as one where there is a Morocco version and an Algeria version, and you can only see your version in your country”. That wouldn’t be very appealing to me, but I could imagine making something like that work.
You do one of the above two options, but instead of an alternative to Wikipedia, you maintain an actively-merged fork that keeps merging from upstream Wikipedia. Like, say you’re fine with Wikipedia in general, don’t have a problem with, say, policy or citing or whatever, but you are super-upset about content relating to a relatively-small portion of the wiki. I think that this is true of very many people who don’t like Wikipedia for one reason or another. Like, they don’t care about, say, Wikipedia’s article on furniture, but they really get upset about articles that relate to religion or politics or whatever in some area where they don’t agree.

So, you write software that is set up to maintain an “active fork”. Like, each page has something like a patch to yank out content that you don’t like, which gets re-applied whenever the Wikipedia version of the page is updated. This sort of thing is not uncommon in software development, working with source code rather than human-language text. If a merge fails on a new version of a page, then you just keep the old version of the page until a human can go update the patch, which is an option that isn’t really available with software development. Some of the pages will get out of date, and there’s going to be an ongoing labor cost, and you always are going to have some amount of content that you don’t like leaking in, but it might be a lot less labor than doing a new encyclopedia.
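As a rough sketch of how that might work in practice (all the names and the rule format here are hypothetical):

```python
# Hypothetical sketch of an actively-merged fork: each page carries removal
# rules that are re-applied whenever upstream updates. If a rule stops
# matching, serve the last good version and flag the page for a human.
import re

class ForkedPage:
    def __init__(self, removal_patterns: list[str]):
        self.removal_patterns = removal_patterns
        self.last_good: str | None = None
        self.needs_review = False

    def merge_upstream(self, upstream_text: str) -> str:
        text = upstream_text
        for pattern in self.removal_patterns:
            text, hits = re.subn(pattern, "", text, flags=re.DOTALL)
            if hits == 0:
                # "Merge failure": the content we patch out moved or changed.
                # Keep the old version until a human updates the rule.
                self.needs_review = True
                return self.last_good if self.last_good is not None else text
        self.last_good = text
        return text

page = ForkedPage([r"<!--disputed-->.*?<!--/disputed-->"])
print(page.merge_upstream(
    "Furniture is great. <!--disputed-->Contested bit.<!--/disputed--> The end."))
# -> "Furniture is great.  The end." (flagged for review if the markers vanish)
```

The regex rules are just a stand-in for real per-page patches; the point is the keep-the-last-good-version fallback when a merge fails.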
You use a radically-technically-different approach. Elon Musk, for example, has gone for an “alternative source of truth generated by an AI” with Grokipedia. I think that making that work is going to require a lot more technical work, but maybe down the line, if Musk can make it work, other states and institutions will also create their own alternative sources of truth generated by AIs.
https://en.wikipedia.org/wiki/Grokipedia
Grokipedia is an AI-generated online encyclopedia operated by the American company xAI. The site was launched on October 27, 2025. Some entries are generated by Grok, a large language model owned by the same company, while others were forked from Wikipedia, with some altered and some copied nearly verbatim. Articles cannot be directly edited, though logged-in visitors to the encyclopedia can suggest corrections via a pop-up form, which are reviewed by Grok.
xAI founder Elon Musk positioned Grokipedia as an alternative to Wikipedia that would “purge out the propaganda” he believes is promoted by the latter,[1] with Musk describing Wikipedia as “woke” and an “extension of legacy media propaganda”.[2]
My own personal suspicion is that the state of AI is not really sufficient to do a good job of this in early 2026. But I also suspect that it will eventually be — there are obviously people and institutions who want to have alternate sources of truth, either for themselves or because they don’t want other people exposed to Wikipedia for whatever reason, and AI might be one way of doing mass generation of content while baking in whatever political or ideological views one wants via use of software.
If I had to make a guess, one reason that this particular area might be particularly impacted might be because Internet-based ads have been eating the ad market for years.
My understanding is that part of the shift to online advertising has been a shift to more-targeted advertising than was possible in the past. Like, you can get finer-grained profiles of ad viewers. That’s basically what, say, Google enables — it has a profile of users based on harvested information, and it can show different ads tailored to different user demographics. Traditional media (print magazines, say) allows for only a limited degree of targeting: you can put your “teen girl ad” in a magazine that is mostly read by teen girls, but you can only go so fine-grained with that.
And I’d bet that generative AI is a close match for that. Like, I’d guess that one typically has copywriters who deal with a particular demographic. Like, say you want ads that target, oh, middle-aged white men, or lesbians, or whatever. I’d bet that the norm with human writers is to have someone familiar with what appeals to that particular demographic.
But then, if you’re an ad agency and want to let your customers target N different demographics with an ad campaign, you need N copywriters, each familiar with a given demographic.
And if the ad delivery platforms permit N to be a lot bigger than it was, say, 30 years back, then you really want to be able to generate copy for all of those N demographics to be competitive in your ads. And that’s going to increase copywriting labor costs. And generative AI is going to lower copywriting labor costs and avoid the “expert per demographic” constraint.
I’d also bet that a lot of ads use pretty similar tactics to appeal to a demographic. Like, this is relatively-repetitive content similar to existing material that just needs to merge aspects of two different things — (1) a product type with (2) ad techniques that target a given demographic group. That is something that LLMs are fairly good at doing a reasonable job of.
ad agencies
I don’t know specifically what positions are being cut, and the article doesn’t say, but I remember reading an earlier article saying that copywriters had been particularly impacted, and this article references “junior” and “creative” positions.
https://en.wikipedia.org/wiki/Copywriting
Copywriting is the act or occupation of writing text for the purpose of advertising or other forms of marketing. Copywriting is aimed at selling products or services.[1] The product, called copy or sales copy, is written content that aims to increase brand awareness and ultimately persuade a person or group to take a particular action.[2]
Copywriters help to create billboards, brochures, catalogs, jingle lyrics, magazine and newspaper advertisements, sales letters and other direct mail, scripts for television or radio commercials, taglines, white papers, website and social media posts, pay-per-click, and other marketing communications. Copywriters aim to cater to the target audience’s expectations while keeping the content and copy fresh, relevant, and effective.[3]
Covid produced inflation, where the strength of the currency dropped. The Federal Reserve wouldn’t permit deflation, because you’d risk a deflationary spiral; instead, you’ll just see wages increase more quickly than prices for a period afterward to restore buying power. This is a specific product that’s seeing a shortage — you can list plenty of points in the past where a good was in short supply and prices rose and then fell.
EDIT: Just in computer hardware, to pick an example, hard drive prices went up when we had that flooding in Southeast Asia back in 2011.
https://en.wikipedia.org/wiki/2011_Thailand_floods#Damages_to_industrial_estates_and_global_supply_shortages