Stars
A compact implementation of SGLang, designed to demystify the complexities of modern LLM serving systems.
Source code and demo for MemoryBank and SiliconFriend
Open-source implementation of AlphaEvolve
intel / sycl-tla
Forked from NVIDIA/cutlass. SYCL* Templates for Linear Algebra (SYCL*TLA): a SYCL-based CUTLASS implementation for Intel GPUs
🎨 ComfyUI standalone pack with 40+ custom nodes. | ComfyUI all-in-one bundle preloaded with many custom nodes (SD models not included)
12 Lessons to Get Started Building AI Agents
Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc.
🤗 smolagents: a barebones library for agents that think in code.
Library for building powerful interactive command line applications in Python
Implement llama3 inference step by step: grasp the core concepts, master the derivation of the process, and write the code.
A high-throughput and memory-efficient inference and serving engine for LLMs
A simple technical explainer project focused on interesting, cutting-edge technology concepts and principles. Each article is designed to be readable in under 5 minutes.
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…
Machine Learning Engineering Open Book
SGLang is a fast serving framework for large language models and vision language models.
PyTorch native quantization and sparsity for training and inference
Markdown syntax supports emoji: entering different emoji codes (a keyword wrapped in two colons) renders different emoji
A markdown version emoji cheat sheet
Datasets, Transforms and Models specific to Computer Vision
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
Intel staging area for llvm.org contribution. Home for Intel LLVM-based projects.
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver