- Xidian Univ.
- Xi'an, China
- https://njuhugn.github.io/
- @StudentGu
Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
🦜🔗 The platform for reliable agents.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Models and examples built with TensorFlow
RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs.
A high-throughput and memory-efficient inference and serving engine for LLMs
scikit-learn: machine learning in Python
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
ChatGLM-6B: An Open Bilingual Dialogue Language Model
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
💫 Industrial-strength Natural Language Processing (NLP) in Python
OpenMMLab Detection Toolbox and Benchmark
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
Code and documentation to train Stanford's Alpaca models, and generate the data.
The official Python library for the OpenAI API
State-of-the-art 2D and 3D Face Analysis Project
Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud.
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
Code for the paper "Language Models are Unsupervised Multitask Learners"
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
SGLang is a fast serving framework for large language models and vision language models.
Fast and memory-efficient exact attention
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Magenta: Music and Art Generation with Machine Intelligence