Starred repositories
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training.
Command-line program to download videos from YouTube.com and other video sites
openpilot is an operating system for robotics. Currently, it upgrades the driver assistance system on 300+ supported cars.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Making large AI models cheaper, faster and more accessible
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
A toolkit for developing and comparing reinforcement learning algorithms.
Qlib is an AI-oriented quant investment platform that aims to use AI tech to empower quant research, from exploring ideas to implementing them in production. Qlib supports diverse ML modeling paradigms, including supervised learning, market dynamics modeling, and reinforcement learning.
A complete and graceful API for WeChat. Provides a WeChat personal-account interface, WeChat bots, and a command-line WeChat client; build a custom personal-account bot in about thirty lines of code.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single Transformer encoder, in PyTorch
PyTorch implementations of Generative Adversarial Networks.
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
Annotate better with CVAT, the industry-leading data engine for machine learning. Used and trusted by teams at any scale, for data of any scale.
Python Implementation of Reinforcement Learning: An Introduction
AKShare is an elegant and simple financial data interface library for Python, built for human beings! An open-source financial data API library.
TuShare is a utility for crawling historical data of Chinese stocks
An educational resource to help anyone learn deep reinforcement learning.
A PyTorch implementation of the Transformer model in "Attention is All You Need".
OctoPrint is the snappy web interface for your 3D printer!
Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
Easy-to-use, modular, and extensible package of deep-learning-based CTR models.
Chinese version of GPT-2 training code, using the BERT tokenizer.
Model parallel transformers in JAX and Haiku
The GitHub repository for the AAAI 2021 paper "Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting".
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.