AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
A high-throughput and memory-efficient inference and serving engine for LLMs
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga…
CLI platform to experiment with codegen. Precursor to: https://lovable.dev
A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows
OpenMMLab Detection Toolbox and Benchmark
Generative Models by Stability AI
SGLang is a high-performance serving framework for large language models and multimodal models.
An open-source RAG-based tool for chatting with your documents.
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Ongoing research training transformer models at scale
An open source implementation of CLIP.
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Llava, Phi4, ...)…
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
This project aims to reproduce Sora (OpenAI's text-to-video model); we hope the open-source community will contribute to it.
Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
Implementation of Denoising Diffusion Probabilistic Models in PyTorch
High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
Implementation of Imagen, Google's Text-to-Image Neural Network, in PyTorch
PyTorch implementation of MAE https://arxiv.org/abs/2111.06377
PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
Reverse Engineering: Decompiling Binary Code with Large Language Models
[EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, this tool compresses the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
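Several entries above (loralib, PEFT-based fine-tuning) revolve around the LoRA idea: freeze the pretrained weight and train only a low-rank additive update. A minimal NumPy sketch of that forward pass, with illustrative shapes and the conventional `alpha/r` scaling (not the loralib API itself):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2   # r << d_in: the low-rank bottleneck

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def lora_linear(x, alpha=16):
    # y = W x + (alpha / r) * B A x ; only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted layer exactly matches the frozen one,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_linear(x), W @ x)
```

The zero initialization of `B` is what makes the adapter a no-op at step 0; only `d_in * r + r * d_out` parameters are trained instead of `d_in * d_out`.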