Germany (UTC +01:00)
Stars
CLI proxy that reduces LLM token consumption by 60-90% on common dev commands. Single Rust binary, zero dependencies
🚀 2.3x faster than MinIO for 4KB object payloads. RustFS is an open-source, S3-compatible high-performance object storage system supporting migration and coexistence with other S3-compatible platfor…
GitNexus: The Zero-Server Code Intelligence Engine - GitNexus is a client-side knowledge graph creator that runs entirely in your browser. Drop in a GitHub repo or ZIP file, and get an interactive …
State-of-the-Art Text Embeddings
Privacy-first AI meeting assistant with 4x faster Parakeet/Whisper live transcription, speaker diarization, and Ollama summarization, built on Rust. 100% local processing; no cloud required. Meetil…
ripgrep recursively searches directories for a regex pattern while respecting your gitignore
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible.
incubator repo for CUDA-TileIR backend
🚀 The fast, Pythonic way to build MCP servers and clients.
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflo…
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate.
#1 PDF Application on GitHub that lets you edit PDFs on any device anywhere
Mirage Persistent Kernel: Compiling LLMs into a MegaKernel
ArcticInference: vLLM plugin for high-throughput, low-latency inference
CUDA Python: Performance meets Productivity
Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
Firmware replacement for Growatt ShineWiFi-S
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthr…
A generative world for general-purpose robotics & embodied AI learning.
An extremely fast Python package and project manager, written in Rust.
Get your documents ready for gen AI
PyTorch native quantization and sparsity for training and inference
Efficient Triton Kernels for LLM Training
Machine Learning Engineering Open Book
FlashInfer: Kernel Library for LLM Serving