22 Aug 25
The paper (arXiv 2020, also AI review 2023) opens by discussing recent high-profile AI debates: the Montréal AI Debate and the AAAI 2020 fireside chat with Kahneman, Hinton, LeCun, and Bengio. A consensus seems to be emerging: for AI to be robust and trustworthy, it must combine learning with reasoning. Kahneman’s “System 1 vs. System 2” dual framing of cognition maps well onto deep learning and symbolic reasoning, respectively. And AI needs both.
The paper (and post) introduced me to the idea of the logic tensor network, which genuinely looks quite different and interesting. I hope to see the architecture adopted more widely in the future.
The paper (2023) argues for integrating two historically divergent traditions in artificial intelligence (neural networks and symbolic reasoning) into a unified paradigm called Neurosymbolic AI. The claim is that the path to capable, explainable, and trustworthy artificial intelligence lies in marrying perception-driven neural systems with structure-aware symbolic models.
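To make the logic tensor network idea concrete for myself: a minimal sketch, assuming PyTorch, of how a predicate becomes a small network outputting truth degrees, how connectives become fuzzy-logic operations, and how an axiom turns into a differentiable loss. The Smokes/Cancer predicates, the toy rule, and the random embeddings are my own illustration, not code from the paper.

```python
# Minimal logic-tensor-network sketch (my illustration, assuming PyTorch).
import torch
import torch.nn as nn

class Predicate(nn.Module):
    """Grounds a symbolic predicate P(x) as a differentiable truth degree in [0, 1]."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Fuzzy connective and a soft universal quantifier.
def Implies(a, b):
    return 1 - a + a * b  # Reichenbach implication

def Forall(truths):
    return truths.mean()  # LTNs use a generalized mean; plain mean keeps it simple

# Toy axiom: forall x. Smokes(x) -> Cancer(x)
Smokes, Cancer = Predicate(4), Predicate(4)
people = torch.randn(32, 4)  # embeddings standing in for individuals

opt = torch.optim.Adam(list(Smokes.parameters()) + list(Cancer.parameters()), lr=1e-2)
for _ in range(200):
    sat = Forall(Implies(Smokes(people), Cancer(people)))
    loss = 1 - sat  # maximize satisfaction of the knowledge base
    opt.zero_grad(); loss.backward(); opt.step()
```

In isolation the rule is trivially satisfiable (drive Smokes toward 0 everywhere), so a real LTN trains rules like this jointly against grounded facts and data; the point is just that logic lands in the loss function and gradients flow through it.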
20 Aug 25
Another idiot with a trillion souls in his back pocket.
19 Aug 25
AI isn’t just impacting how we write — it’s changing how we speak and interact with others. And there’s only more to come.
via: https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/
Indeed, it seems that one of the many offerings of generative AI is a kind of psychosis-as-a-service. If you are genuinely AGI-pilled—a term for those who believe that machine-born superintelligence is coming, and soon—the rational response probably involves some combination of building a bunker, quitting your job, and joining the cause. […] It’s hard to care about tariffs or authoritarian encroachment or getting a degree if you believe that the world as we know it is about to change forever.
via: https://lobste.rs/s/y7gw78/ai_is_mass_delusion_event
15 Aug 25
13 Aug 25
I’m Alice, a technical AI safety writer. I write the ML Safety Newsletter and my personal writing is on LessWrong. I have a background in technical ML, but pivoted to communications because I think this is where I can do the most good.
07 Aug 25
Features an MIT student from my class.
04 Aug 25
Soon, we’ll feed requirements to AI and get working software without writing code. But when cars replaced horses, the horses never had to debug the cars.
28 Jul 25
A computer can never be held accountable
Therefore a computer must never make a management decision
Mark Weiser gets it. He understands exactly why the current gen AI/LLM moment is so frustrating to me. We should be building better software, not creating more software to control our existing mid software for us.
For example: being alerted to a potential collision (the two models are sketched in code after this list)
- agent: “collision, collision, go right and down”
- ubicomp: background presentation of airspace information for continuous spatial awareness, as in everyday life. You’ll no more run into another airplane than you would try to walk through a wall.
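A toy sketch of the contrast, in Python; everything here (the 2-D airspace, the function names, the threshold) is my own hypothetical, just to make the two interaction models concrete:

```python
import math

Plane = tuple[float, float]  # (x, y) position in a toy 2-D airspace

def distance(a: Plane, b: Plane) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Agent model: silent until a threshold trips, then a modal interruption.
def agent_alert(own: Plane, other: Plane, threshold: float = 5.0) -> None:
    if distance(own, other) < threshold:
        print("collision, collision, go right and down")

# HUD / ubicomp model: render all nearby traffic every frame as ambient
# context; awareness comes from the continuous display, not interruptions.
def hud_frame(own: Plane, traffic: list[Plane]) -> None:
    for p in traffic:
        bearing = math.degrees(math.atan2(p[1] - own[1], p[0] - own[0]))
        print(f"traffic at bearing {bearing:6.1f} deg, range {distance(own, p):4.1f}")

own, traffic = (0.0, 0.0), [(2.0, 3.0), (20.0, -7.0)]
agent_alert(own, traffic[0])  # fires only once things are already urgent
hud_frame(own, traffic)       # always on, urgent or not
```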
Damn; he’s even anti-racist:
It obsessively fascinates
- the human-like machine to which we give life
- the perfect, all-powerful, slave
- be careful when appealing to ancient prejudice
via: http://geoffreylitt.com/2025/07/27/enough-ai-copilots-we-need-ai-huds.html
Funny how Litt takes away a totally different message from this slide deck than I do.
10 Jun 25
Today we have thousands of apps to choose from, but it’s difficult to craft our own custom tools that do exactly what we need. Geoffrey Litt’s research explores malleable software: an approach in which software feels more like a Lego set that anyone can recombine into their own tools, without programming. This talk will feature demonstrations of malleable software tools developed in contexts from travel planning to collaborative writing. It will also discuss how AI might enable a Cambrian explosion of custom tools created by non-programmers, and what kinds of new software environments will be needed to take advantage of that new power.
Great overview of Geoffrey’s work over the past few years and how LLMs could fit into the future of end-user programming.