Starred repositories
A latent text-to-image diffusion model
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
Jupyter notebooks for the code samples of the book "Deep Learning with Python"
This project converts the original MXNet implementations in Dive into Deep Learning (《动手学深度学习》) to PyTorch.
High-Resolution Image Synthesis with Latent Diffusion Models
PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation (《深度学习框架PyTorch:入门与实战》)
Public facing notes page
Best Practices, code samples, and documentation for Computer Vision.
Using Low-rank adaptation to quickly fine-tune diffusion models.
An image prompt adapter that enables a pretrained text-to-image diffusion model to generate images conditioned on an image prompt.
COCO API - Dataset @ http://cocodataset.org/
Projects and exercises for the latest Deep Learning ND program https://www.udacity.com/course/deep-learning-nanodegree--nd101
A PyTorch re-implementation of the official EfficientDet, with real-time SOTA performance and pretrained weights.
Reference models and tools for Cloud TPUs.
Code repo for realtime multi-person pose estimation in CVPR'17 (Oral)
Links to conference publications in graph-based deep learning
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Homepage for STAT 157 at UC Berkeley
Notes for the course Dive into Deep Learning by Mu Li
A small package to create visualizations of PyTorch execution graphs
DeepFashion2 Dataset https://arxiv.org/pdf/1901.07973.pdf
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Jupyter Notebook tutorials on solving real-world problems with Machine Learning & Deep Learning using PyTorch. Topics: Face detection with Detectron 2, Time Series anomaly detection with LSTM Autoe…
PyTorch re-implementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale)
Mathematical derivation and pure Python code implementation of machine learning algorithms.
Improving Convolutional Networks via Attention Transfer (ICLR 2017)
Efficient computing methods developed by Huawei Noah's Ark Lab