The Hong Kong Polytechnic University
Hong Kong
www4.comp.polyu.edu.hk/~csjwang/
Stars
The official gpt4free repository | a collection of powerful language models | o4, o3, deepseek r1, gpt-4.1, gemini 2.5
A high-throughput and memory-efficient inference and serving engine for LLMs
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
No fortress, purely open ground. OpenManus is Coming.
Universal memory layer for AI Agents; Announcing OpenMemory MCP - local and secure memory management.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
[Supports 0.49.x] Reset Cursor AI machine ID & bypass the higher token limit. Cursor AI: automatically reset the machine ID and use Pro features with a free upgrade, for errors such as: You've reached your trial request limit. / Too many free trial accounts used on this machi…
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Multi-agent framework, runtime and control plane. Built for speed, privacy, and scale.
Code and documentation to train Stanford's Alpaca models, and generate the data.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
SWE-agent takes a GitHub issue and tries to automatically fix it, using your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challenges. [NeurIPS 2024]
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
verl: Volcano Engine Reinforcement Learning for LLMs
Minimal reproduction of DeepSeek R1-Zero
This project aims to reproduce Sora (OpenAI's T2V model); we hope the open source community will contribute to this project.
A framework for few-shot evaluation of language models.
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
A Python implementation of global optimization with Gaussian processes.
High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
A Collection of Variational Autoencoders (VAE) in PyTorch.