24 Oct 25
Psst, kid, want some cheap and small LLMs? This blog post is a comprehensive guide to setting up llama.cpp, a C/C++ inference library, and using it to run large language models (LLMs) efficiently on consumer hardware.
by tmfnk
2 months ago
05 Jun 25
This is a really interesting look at how much of a pain in the ass it is to add features to C and C++.
It reminds me a lot of similar frustrations in the Linux kernel.
by linkraven
6 months ago
05 Apr 25
Great read about the peculiarities of bytecode VM design in Golang.
by qiu
8 months ago