24 Jan 26

This blog post basically comes in two parts: the role LLM assistants are trained for is horribly underdefined and ontologically hella strange, and AI safety "researchers" are basically goading LLMs into misalignment without any concern for the consequences.

by kawcco 15 days ago

22 Aug 25

Yay, LLM-induced psychosis! Man, the posts and articles of this genre do not stop flowing…

by kawcco 5 months ago

21 Jun 25

Really heartfelt piece that cuts through the noise imo. It treats the subject quite fairly.

by linkraven 7 months ago saved 2 times