31 Oct 25
29 Sep 25
- https://antropia.studio/blog/to-ai-or-not-to-ai/
16 Sep 25
Insightful notes on the use of AI in coding; in a nutshell: own the code and look for (high-gradient) opportunities.
23 Aug 25
22 Aug 25
14 Aug 25
Why is it that we craft our work so AI can understand but we do not do so for people?
It’s as if we expect people to put forth the effort to understand us, but because AI has hard limitations, we take extra steps for AI while staying lazy with other people.
But what they cannot do is maintain clear mental models.
LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing as to whether to fix the code or the tests; and when it gets frustrating, they just delete the whole lot and start over.
This is exactly the opposite of what I am looking for.
13 Aug 25
I believe that language models aren’t world models. It’s a weak claim — I’m not saying they’re useless, or that we’re done milking them. It’s also a fuzzy-sounding claim — with its trillion weights, who can prove that there’s something an LLM isn’t a model of?
01 Aug 25
22 Jul 25
A great write-up on the “My AI Skeptic Friends Are All Nuts” article, with some really solid thinking on the linguistic & logical shenanigans in AI hype talk.
21 Jun 25
On the very likely outcomes of the impacts LLMs are having on software engineers and their craft.
16 May 25
05 May 25
Arguments against the use of LLMs for homework.
25 Apr 25
25 Mar 25
27 Feb 25
07 Jan 23
Gary Marcus expresses his concern that current AI research is no longer focused on building machines that solve problems in ways that have anything to do with human intelligence, but merely on using massive amounts of data – often derived from human behavior – as a substitute for intelligence.