# Starred repositories
7 stars, written in C
- Distribute and run LLMs with a single file.
- Samples for CUDA developers demonstrating features of the CUDA Toolkit.
- Chinese-Vicuna: A Chinese instruction-following LLaMA-based model (a low-resource Chinese LLaMA + LoRA approach, with a structure modeled on Alpaca).
- Runs llama and other large language models offline on iOS and macOS using the GGML library.
- togethercomputer / redpajama.cpp (forked from ggml-org/llama.cpp): Extends the original llama.cpp repo to support the RedPajama model.
- Dart wrapper via dart:ffi for https://github.com/twain/twain-dsm.
- Bip-Rep / llama.cpp (forked from ggml-org/llama.cpp): Port of Facebook's LLaMA model in C/C++.