16 Sep 25

Insightful notes on the use of AI in coding; in a nutshell: own the code and look for high-gradient opportunities.

by sebastien 3 months ago

14 Aug 25

Why is it that we craft our work so AI can understand but we do not do so for people?

It’s as if we expect people to put forth the effort to understand, whereas AI has hard limitations, so we take extra steps for AI but can be lazy with people.

by ciwchris 4 months ago

But what they cannot do is maintain clear mental models.

LLMs get endlessly confused: they assume the code they wrote actually works; when tests fail, they are left guessing whether to fix the code or the tests; and when things get frustrating, they just delete the whole lot and start over.

This is exactly the opposite of what I am looking for.

by ciwchris 4 months ago

13 Aug 25

I believe that language models aren’t world models. It’s a weak claim — I’m not saying they’re useless, or that we’re done milking them. It’s also a fuzzy-sounding claim — with its trillion weights, who can prove that there’s something an LLM isn’t a model of?

by mlb 4 months ago saved 2 times

22 Jul 25

A great write-up on the “My AI Skeptic Friends Are All Nuts” article, with some really solid thinking on the linguistic & logical shenanigans in AI hype talk.

by mlb 5 months ago saved 2 times

21 Jun 25

On the very likely outcomes of the impacts LLMs are having on software engineers and their craft.

by mlb 6 months ago saved 2 times

05 May 25

Arguments against the use of LLMs for homework.

by mlb 7 months ago saved 3 times

07 Jan 23

Gary Marcus expresses his concern that current AI research is no longer focused on building machines that solve problems in ways that have to do with human intelligence, but merely on using massive amounts of data – often derived from human behavior – as a substitute for intelligence.

by mlb 3 years ago