Buildin'
llama.cpp-gfx906 (Public)
Forked from ggml-org/llama.cpp: LLM inference in C/C++, but for GFX906
C++ · MIT License · Updated Nov 25, 2025
wiki-gfx906 (Public)
A database of knowledge around inference & training on GFX906 GPUs. https://skyne98.github.io/wiki-gfx906/
2 · Updated Nov 11, 2025
vllm (Public)
Forked from vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs
Python · Apache License 2.0 · Updated Mar 29, 2025
cranefuck (Public)
An optimizing Brainfuck compiler
candle (Public)
Forked from huggingface/candle: Minimalist ML framework for Rust
mistral.rs (Public)
Forked from EricLBuehler/mistral.rs: Blazingly fast LLM inference.
Rust · MIT License · Updated Nov 17, 2024
soap (Public)
A simple GOAP (Goal-Oriented Action Planning) library with extensible state and requirements