Stars: 4 repositories, filtered to those written in C++
Empower the Web community and invite more to build across platforms.
Kernels & AI inference engine for mobile devices.
Distributed LLM inference: connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.