17 Aug 25
15 Aug 25
14 Aug 25
A new library in Python designed specifically for working with LLMs. It does seem quite refined in its thinking, and goes beyond a simple adapter or collection thereof.
Why is it that we craft our work so AI can understand it, but we do not do the same for people?
It’s as though we expect people to put in the effort to understand, while accepting that AI has hard limitations, so we take extra steps for the AI but can be lazy with everyone else.
But what they cannot do is maintain clear mental models.
LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing as to whether to fix the code or the tests; and when they get frustrated, they just delete the whole lot and start over.
This is exactly the opposite of what I am looking for.
13 Aug 25
Or “How to recognize speech using common sense”.
I believe that language models aren’t world models. It’s a weak claim — I’m not saying they’re useless, or that we’re done milking them. It’s also a fuzzy-sounding claim — with its trillion weights, who can prove that there’s something an LLM isn’t a model of?
I’m Alice, a technical AI safety writer. I write the ML Safety Newsletter and my personal writing is on LessWrong. I have a background in technical ML, but pivoted to communications because I think this is where I can do the most good.
12 Aug 25
It’s a good idea to use AI to review the security of your code, although you may get surprising results!
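For illustration, here is a minimal sketch of what such a review might look like. It assumes the OpenAI Python SDK and an API key in the environment; the model name is a placeholder, and this is not the specific tool or workflow from the linked post.

```python
# Minimal sketch: asking an LLM to review a snippet for security issues.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

code_under_review = """
import sqlite3

def get_user(conn, username):
    # String formatting in SQL is a classic injection risk.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchone()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List concrete vulnerabilities "
                    "with severity and a suggested fix. Say so if you are unsure."},
        {"role": "user", "content": code_under_review},
    ],
)

print(response.choices[0].message.content)
```

The surprising results mentioned above cut both ways: the model will usually flag the SQL injection here, but it may also report issues that do not exist, so treat its findings as leads to verify rather than a verdict.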