- Microsoft Research
- China
- https://addf400.github.io/
- @HangboBao
Stars
All Algorithms implemented in Python
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
Magnificent app which corrects your previous console command.
Robust Speech Recognition via Large-Scale Weak Supervision
Models and examples built with TensorFlow
Scrapy, a fast high-level web crawling & scraping framework for Python.
Making large AI models cheaper, faster and more accessible
TensorFlow code and pre-trained models for BERT
Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Chinese word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, constituency parsing, semantic dependency parsing, semantic role labeling, coreference resolution, style transfer, semantic similarity, new word discovery, keyphrase extraction, automatic summarization, text classification and clustering, pinyin and simplified/traditional conversion, natural language processing
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
PyTorch Tutorial for Deep Learning Researchers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
Official inference framework for 1-bit LLMs
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
Code for the paper "Language Models are Unsupervised Multitask Learners"
Graph Neural Network Library for PyTorch
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Fast and memory-efficient exact attention
Magenta: Music and Art Generation with Machine Intelligence