- Anyang, Korea
Starred repositories
An Open Source Machine Learning Framework for Everyone
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluation, and experimentation.
SGLang is a fast serving framework for large language models and vision language models.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
A high-throughput and memory-efficient inference and serving engine for LLMs
An extremely fast Python package and project manager, written in Rust.
You like pytorch? You like micrograd? You love tinygrad! ❤️
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
Extremely fast Query Engine for DataFrames, written in Rust
OpenAI Triton backend for Intel® GPUs
Open-source simulator for autonomous driving research.
WPF UI brings the Fluent experience to the WPF framework you know and love: intuitive design, themes, navigation, and new immersive controls, all natively and effortlessly.
This is the documentation repository for SGLang. It is auto-generated from https://github.com/sgl-project/sglang/tree/main/docs.
Development repository for the Triton language and compiler
Autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, using the browser, and more with your permission every step of the way.
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
A framework for building native Windows apps with React.
Kortix – build, manage and train AI Agents. Fully Open Source.
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support…
Supercharge Your LLM with the Fastest KV Cache Layer
Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels
A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations
Train transformer language models with reinforcement learning.
mickqian / sglang
Forked from sgl-project/sglang. SGLang is a fast serving framework for large language models and vision language models.
High-performance automatic differentiation of LLVM and MLIR.