Stars
A latent text-to-image diffusion model
Google Research
⛔️ DEPRECATED – See https://github.com/ageron/handson-ml3 instead.
A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API (a minimal sketch of the idea follows this list)
The "Python Machine Learning (1st edition)" book code repository and info resource
Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.
Public-facing notes page
Free online textbook of Jupyter notebooks for fast.ai Computational Linear Algebra course
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
TensorFlow Tutorials with YouTube Videos
Image restoration with neural networks but without learning.
Efficient image captioning code in Torch; runs on the GPU
"Probabilistic Machine Learning" - a book series by Kevin Murphy
Financial portfolio optimisation in Python, including the classical efficient frontier, Black-Litterman, and Hierarchical Risk Parity
Acceptance rates for the major AI conferences
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Single Shot MultiBox Detector in TensorFlow
A fault-tolerant, highly scalable GPU orchestration and machine learning framework designed for training models with billions to trillions of parameters
InferSent sentence embeddings
Demo of running NNs across different frameworks
Dense image captioning in Torch
Tutorials and implementations for "Self-normalizing networks"
A PyTorch library entirely dedicated to neural differential equations, implicit models and related numerical methods
Resources for semantic segmentation based on deep learning models
TensorFlow code for the Bayesian GAN (https://arxiv.org/abs/1705.09558, NIPS 2017)
This repository contains notebook implementations of the following Neural Process variants: Conditional Neural Processes (CNPs), Neural Processes (NPs), and Attentive Neural Processes (ANPs).
Two time-scale update rule (TTUR) for training GANs (see the second sketch after this list)
Training Very Deep Neural Networks Without Skip-Connections
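
For the scalar-valued autograd engine entry above, here is a minimal sketch of the core idea: each scalar remembers its parents and a local backward rule, and `backward()` applies the chain rule in reverse topological order. The class name `Value`, the operator set, and the example expression are illustrative assumptions, not the repo's actual API.

```python
class Value:
    """A scalar that records the ops applied to it so gradients can flow back."""
    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None  # local chain-rule step, set by each op

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then run the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# Usage: d(x*y + x)/dx = y + 1 = 4 at x = y = 3
x, y = Value(3.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad)  # 4.0
```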
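
For the two time-scale update rule (TTUR) entry, the rule itself is simply that the discriminator and generator train with their own learning rates instead of a shared one. The sketch below assumes PyTorch; the placeholder networks `G` and `D`, the concrete rates (4e-4 vs. 1e-4), and the non-saturating GAN loss are illustrative choices, not the repo's code.

```python
import torch
import torch.nn.functional as F

G = torch.nn.Linear(16, 32)  # stand-in generator
D = torch.nn.Linear(32, 1)   # stand-in discriminator

# Two time scales: the discriminator commonly gets the larger step size.
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))

for step in range(100):
    z = torch.randn(8, 16)
    real = torch.randn(8, 32)  # stand-in data batch

    # Discriminator step (generator output detached so only D updates)
    d_loss = F.softplus(-D(real)).mean() + F.softplus(D(G(z).detach())).mean()
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step at its own (slower) time scale
    g_loss = F.softplus(-D(G(z))).mean()
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```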