Stars
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
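To make that description concrete, here is a minimal sketch composing the three transformations it names (grad to differentiate, vmap to vectorize, jit to compile); the loss function and shapes are made up for illustration:

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    # Plain NumPy-style code: squared error of a linear model.
    return jnp.sum((x @ w - 1.0) ** 2)

grad_fn = jax.grad(loss)                        # differentiate w.r.t. w
batched = jax.vmap(grad_fn, in_axes=(None, 0))  # vectorize over a batch of x
fast = jax.jit(batched)                         # JIT-compile for CPU/GPU/TPU

w = jnp.ones(3)
xs = jnp.ones((8, 4, 3))
print(fast(w, xs).shape)  # (8, 3): one gradient of the loss per batch element
```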
A machine learning compiler for GPUs, CPUs, and ML accelerators
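JAX reaches this compiler through jit. A small sketch of inspecting what gets handed to XLA, assuming a recent JAX version with the ahead-of-time lowering API:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) ** 2 + jnp.cos(x) ** 2

# Stage the function out to compiler IR without executing it.
lowered = jax.jit(f).lower(jnp.float32(1.0))
print(lowered.as_text())           # the IR that the XLA compiler receives
compiled = lowered.compile()       # XLA compiles it for the current backend
print(compiled(jnp.float32(1.0)))  # ~1.0
```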
Flax is a neural network library for JAX that is designed for flexibility.
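A minimal sketch of what that flexibility looks like, using Flax's Linen API (the module and its sizes are arbitrary examples): parameters are an explicit pytree rather than hidden state, so a model is a pure function of (params, inputs).

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int
    out: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(self.out)(x)

model = MLP(hidden=32, out=10)
x = jnp.ones((4, 8))
params = model.init(jax.random.PRNGKey(0), x)  # parameters as an explicit pytree
y = model.apply(params, x)                     # pure function of (params, x)
print(y.shape)  # (4, 10)
```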
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
Differentiable, Hardware Accelerated Molecular Dynamics
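The core idea is that forces fall out of automatic differentiation of an energy function. A hypothetical sketch in plain JAX (not the jax-md API) with a toy Lennard-Jones-style potential:

```python
import jax
import jax.numpy as jnp

def energy(positions):
    # Toy pairwise Lennard-Jones-style potential over all unique pairs.
    n = positions.shape[0]
    diffs = positions[:, None, :] - positions[None, :, :]
    r2 = jnp.sum(diffs ** 2, axis=-1) + jnp.eye(n)  # eye avoids divide-by-zero
    inv6 = 1.0 / r2 ** 3
    pair = 4.0 * (inv6 ** 2 - inv6)
    return 0.5 * jnp.sum(pair * (1.0 - jnp.eye(n)))  # count each pair once

# Forces are the negative gradient of the energy, which keeps the whole
# simulation differentiable end to end.
force_fn = jax.jit(lambda r: -jax.grad(energy)(r))

r = jax.random.uniform(jax.random.PRNGKey(0), (16, 3)) * 5.0
print(force_fn(r).shape)  # (16, 3)
```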
Hardware accelerated, batchable and differentiable optimizers in JAX.
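A minimal sketch, assuming the jaxopt solver API (a GradientDescent solver with a fun objective and a run entry point); solvers like this can themselves be jit-compiled, vmapped over problem instances, and differentiated through:

```python
import jax.numpy as jnp
from jaxopt import GradientDescent

def f(w):
    # Toy convex objective with its minimum at w = 3.
    return jnp.sum((w - 3.0) ** 2)

solver = GradientDescent(fun=f, maxiter=100)
result = solver.run(jnp.zeros(5))
print(result.params)  # approximately [3. 3. 3. 3. 3.]
```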
Research language for array processing in the Haskell/ML family
A playbook for systematically maximizing the performance of deep learning models.
georgedahl/flax
Forked from google/flax. Flax is a neural network library for JAX that is designed for flexibility.
Robust Bi-Tempered Logistic Loss Based on Bregman Divergences. https://arxiv.org/pdf/1906.03361.pdf
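The loss is built from tempered analogues of log and exp. A sketch of those building blocks in JAX for t ≠ 1; the full bi-tempered loss additionally needs a tempered-softmax normalization constant, which the paper computes iteratively:

```python
import jax.numpy as jnp

def log_t(x, t):
    # Tempered logarithm: reduces to log(x) in the limit t -> 1.
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    # Tempered exponential, the inverse of log_t on its domain: for t < 1 it
    # reaches zero at finite x; for t > 1 it decays polynomially (heavy tail).
    return jnp.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

x = jnp.array([0.25, 1.0, 4.0])
t = 0.7
print(exp_t(log_t(x, t), t))  # recovers x on the valid domain
```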
georgedahl/jax
Forked from jax-ml/jax. Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more