- Cambridge, MA
Stars
HTTP routing and request-handling library for Rust that focuses on ergonomics and modularity
High-availability network proxy / VPN server, powered by WireGuard
Fast, accurate & comprehensive text measurement & layout
Fast sandboxed code execution via pre-warmed gVisor pools. Maintains pools of checkpoint-restored sentries with per-execution kernel isolation, cgroup limits, and automatic recycling.
Secure Node.js Execution Without a Sandbox. A lightweight library for secure Node.js execution. No containers, no VMs: just npm-compatible sandboxing out of the box.
Sub-millisecond VM sandboxes for AI agents via copy-on-write forking
Deploy Claude agents as production APIs — with sessions, streaming, sandboxing, and persistence handled for you.
A Bulletproof Way to Generate Structured JSON from Language Models
Lightpanda: the headless browser designed for AI and automation
A Virtual Machine Monitor for modern Cloud workloads. Features include CPU, memory and device hotplug, support for running Windows and Linux guests, device offload with vhost-user and a minimal com…
nCPU: model-native and tensor-optimized CPU research runtimes with organized workloads, tools, and docs
Secure, Fast, and Extensible Sandbox runtime for AI agents.
Library for reading and writing large multi-dimensional arrays.
A simple L7 proxy for vLLM that manages LoRA adapter storage via NVMe, routes requests, and pins workloads to nodes
A curated list of the best AI-related newsletters, including AI agent newsletters
A Datacenter Scale Distributed Inference Serving Framework
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
Our library for RL environments + evals
Build userspace NVMe drivers and storage applications with CUDA support
High-efficiency LLM inference engine in C++/CUDA. Run Llama 70B on RTX 3090.
A high-throughput and memory-efficient inference and serving engine for LLMs
verl: Volcano Engine Reinforcement Learning for LLMs