Stars
1 result for sponsorable starred repositories written in C++
Distributed LLM inference. Connects home devices into a cluster to accelerate LLM inference; adding more devices yields faster inference.