- Cologne, Germany
- https://pschen.de
Starred repositories
Blazing-fast iOS SSH client with Metal GPU rendering, powered by libghostty-vt.
Resume AI coding sessions across providers: converts Codex, Claude, Gemini, and other session formats through a canonical IR so you can pick up where you left off in any tool
A single CLAUDE.md file to improve Claude Code behavior, derived from Andrej Karpathy's observations on LLM coding pitfalls.
Multi-backend code-first parametric CAD in JavaScript/TypeScript with live params, constraints, assemblies, and exact STEP/BREP export.
The 100-line AI agent that solves GitHub issues or helps you in your command line. Radically simple: no huge configs, no giant monorepo, yet it scores >74% on SWE-bench Verified!
Method for Long Context RLMs using verifiable Lambda Calculus
Local context manager for Claude Code and Codex with workstreams, transcript binding, and branching.
General plug-and-play inference library for Recursive Language Models (RLMs), supporting various sandboxes.
Fine-tune LLMs on your Mac with Apple Silicon. SFT, DPO, GRPO, Vision, TTS, STT, Embedding, and OCR fine-tuning — natively on MLX. Unsloth-compatible API.
KV cache compression via block-diagonal rotation. Beats TurboQuant: better PPL (6.91 vs 7.07), 28% faster decode, 5.3x faster prefill, 44x fewer params. Drop-in llama.cpp integration.
DFlash: Block Diffusion for Flash Speculative Decoding
Fast and accurate AI-powered file content type detection
Lossless DFlash speculative decoding for MLX on Apple Silicon
Exact speculative decoding on Apple Silicon, powered by MLX.
🌰 Terminal-based automated file organizer inspired by Hazel. Watch folders and organize files with rules.
Library for reducing tail latency in RAM reads
Gin is a high-performance HTTP web framework written in Go. It provides a Martini-like API but with significantly better performance—up to 40 times faster—thanks to httprouter. Gin is designed for …
Using BFS & DFS like techniques over LLMs for iteratively exploring the solution search space at scale.
Fast, accurate & comprehensive text measurement & layout
Create stunning demos for free. Open-source, no subscriptions, no watermarks, and free for commercial use. An alternative to Screen Studio.
The best-benchmarked open-source AI memory system. And it's free.
Anthropic-compatible HTTP facade over claude-agent-acp
Join Discord: https://discord.gg/5TUQKqFWd / claw-code. Rust port parity work; temporary while the claw-code repo completes its migration.
Lightweight, cross-platform process sandboxing powered by OpenAI Codex's runtime. Sandbox any command with file, network, and credential controls.
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.
Vane is an AI-powered answering engine.