- Regensburg, Germany
Lists (32)
- AI
- Android
- Apple
- Architecture
- Audio
- Big Data
- CI/CD
- Dev Tools
- 🔮 Future ideas
- Health
- IoT
- Keyboards
- Kubernetes
- Meditation
- Network
- NLP
- PKM
- Productivity
- Programming Languages
- Python
- Resume / CV
- Rust
- Scala
- Security
- Smart Home
- Spark
- SQL
- Testing
- Toniebox
- UI
- Web3
- Windows
Starred repositories
From vibe coding to agentic engineering - practice makes Claude perfect
vLLM Metal plugin powered by mlx-swift — high-performance LLM inference on Apple Silicon
35 domain-expert LoRAs on Qwen3.6-35B-A3B (MoE, 256 experts, 3B active). Cognitive layer: Aeon memory, CAMP negotiator, KnowBias. MLX on Mac Studio, Q4_K_M inference. Apache-2.0.
WhatsApp in pure Rust. Single binary, 5 MB, 15 MB RAM. 54 API endpoints, 30 MCP tools, full protocol.
AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries, Slack bot, vLLM pods
Orchestrate an entire AI dev team on as little as 5GB VRAM. An AI coding agent built like a systems engineer. Ephemeral context, zero token bloat, exact-match diffs. Stop wasting money on 10k token…
Open-source platform for creating, distributing and running sovereign EU-compliant LLMs. Verticalize any model for your domain, language and brand. AI Act ready.
Tree-based speculative decoding for Apple Silicon (MLX). ~10-15% faster than DFlash on code, ~1.5x over autoregressive. First MLX port with custom Metal kernels for hybrid model support.
Lossless DFlash speculative decoding for MLX on Apple Silicon
MCP server for interacting with the iOS simulator
ekryski / mlx-swift-lm
Forked from ml-explore/mlx-swift-lm
LLMs and VLMs with MLX Swift
Apple Silicon (MLX) port of Karpathy's autoresearch — autonomous AI research loops on Mac, no PyTorch required.
Native macOS App Store Connect tool with MCP. Submit iOS apps to App Store with AI agents
Ultra-Sparse Adaptation of 1-Bit LLMs via XOR Patches
VS Code rebuilt on Tauri. Same architecture, 96% smaller. Early release.
Docker-compatible container CLI built on Apple's Containerization framework. Same commands, same flags — mocker run, ps, stop, build, compose, stats — all working on macOS 26.
vMLX - Home of JANG_Q - Cont Batch, Prefix, Paged, KV Cache Quant, VL - Powers MLX Studio. Image gen/edit, OpenAI/Anth
Powdered Metal — High performance LLM fine-tuning framework for Apple Silicon
Apple Neural Engine (ANE) LLM inference engine — reverse-engineered private APIs, Metal GPU shaders, hybrid ANE+GPU+CPU on Apple Silicon. 32 tok/s matching llama.cpp, 3.6 TFLOPS fused ANE mega-kern…
Train and run transformers directly on Apple's Neural Engine in Swift, bypassing Core ML entirely
LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar
HuggingFace transformer compiler for optimised native inference binaries
Multi-agent orchestration for Claude Code. Persistent memory, tasks, rules, and skills that make AI agents actually coordinate.
AI agents running research on single-GPU nanochat training automatically
Beautiful UIs in Rust. Cross-platform. Dead simple.
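Several of the starred repos above center on speculative decoding (the tree-based and DFlash MLX ports for Apple Silicon). A minimal greedy sketch of the core loop, with hypothetical toy functions standing in for the draft and target LLMs (the real implementations batch step 2 into a single verified forward pass):

```python
def draft_next(token):
    # Hypothetical cheap draft model: proposes the next token.
    return (token * 2 + 1) % 97

def target_next(token):
    # Hypothetical expensive target model: the ground truth.
    # Disagrees with the draft whenever the token is divisible by 5.
    if token % 5 == 0:
        return (token + 3) % 97
    return (token * 2 + 1) % 97

def speculative_decode(prompt_token, num_tokens, draft_len=4):
    """Greedy speculative decoding: the draft proposes draft_len tokens,
    the target verifies them and keeps the longest agreeing prefix, then
    contributes one corrected (or bonus) token of its own."""
    out = [prompt_token]
    while len(out) - 1 < num_tokens:
        # 1. Draft proposes a short continuation.
        proposals = []
        t = out[-1]
        for _ in range(draft_len):
            t = draft_next(t)
            proposals.append(t)
        # 2. Target verifies each proposal against its own prediction.
        t = out[-1]
        for p in proposals:
            expected = target_next(t)
            if p == expected:
                out.append(p)          # accepted: draft matches target
                t = p
            else:
                out.append(expected)   # rejected: take the target's token
                break                  # later proposals are now invalid
        else:
            # All proposals accepted; target emits one bonus token.
            out.append(target_next(t))
    return out[1:num_tokens + 1]

# The output is identical to plain autoregressive decoding with the
# target model alone -- speculation only changes how fast we get there.
baseline, t = [], 7
for _ in range(10):
    t = target_next(t)
    baseline.append(t)
assert speculative_decode(7, 10) == baseline
```

Lossless variants (as in the DFlash port) guarantee exactly this equivalence: every emitted token is one the target model would have produced, so the draft model only affects latency, never output.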