Lists (18)
🔥 Agent & MCP
✊ AI & LLMOps
About all AI and LLM projects.
🚀 CloudNative
Kubernetes, Istio, Microservice, AnyOps
🧑🏻💻 Develop
🧑🏻💻 Develop.rust
📖 easy-website
About docs.
🧑🏻💻 golang pkg
🌟 Help This Grow
🎱 n8n.workflow
Networks
🍚 New for Golang
🔥 Python
🤓 Rust
SD Infra & Tools
Share AI apps
Show all coding with AI.
🎉 Sharing Awesome
About fun projects from my life.
🌟 Star-dao follow ME
Open source projects contributed by DaoCloud.
X-Claw & Lobster
Starred repositories
An Open Source Machine Learning Framework for Everyone
Build cross-platform desktop apps with JavaScript, HTML, and CSS
Port of OpenAI's Whisper model in C/C++
Cloud-native high-performance edge/middle/service proxy
Distribute and run LLMs with a single file.
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)
Utility to convert between various subscription formats
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI.
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
A fast multi-producer, multi-consumer lock-free concurrent queue for C++11
Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!
A high-performance distributed file system designed to address the challenges of AI training and inference workloads.
OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
High-speed Large Language Model Serving for Local Deployment
SEX IS ZERO (0), so, who wanna be the ONE (1), aha?
Multipass orchestrates virtual Ubuntu instances
Hippy is designed to easily build cross-platform dynamic apps. 👏
Instant Kubernetes-Native Application Observability
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
fastllm is a high-performance LLM inference library with no backend dependencies. It supports both tensor-parallel inference of dense models and mixed-mode inference of MoE models; any GPU with 10 GB+ of memory can run the full DeepSeek model. A dual-socket 9004/9005 server with a single GPU can serve the original full-precision DeepSeek model at 20 tps for a single request; the INT4-quantized model reaches 30 tps for a single request and 60+ tps under concurrency.
Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https://discord.gg/5xXzkMu8Zk
ByConity is an open source cloud data warehouse
⚡ Fastest SQL ETL pipeline in a single C++ binary, built for stream processing, observability, analytics and AI/ML
GPGPU-Sim provides a detailed simulation model of contemporary NVIDIA GPUs running CUDA and/or OpenCL workloads. It includes support for features such as TensorCores and CUDA Dynamic Parallelism as…
A high-performance file client for mounting an OSS bucket as a local filesystem.