llama3
Here are 12 public repositories matching this topic...
llama.cpp 🦙 LLM inference in C/C++
Updated Sep 26, 2024 - C++
Explore LLM model deployment based on AXera's AI chips
Updated Nov 12, 2025 - C++
Run generative AI models on Sophgo BM1684X/BM1688
Updated Nov 11, 2025 - C++
A hands-on project well suited to campus recruitment (autumn/spring hiring) and internship preparation: build from scratch an LLM inference framework supporting LLama2/3 and Qwen2.5.
Updated Oct 28, 2025 - C++
A high-performance inference system for large language models, designed for production environments.
Updated Nov 6, 2025 - C++
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate inference; more devices mean faster results.
Updated Nov 2, 2025 - C++