Enabling PyTorch on XLA Devices (e.g. Google TPU); a minimal usage sketch follows this list.
A curated list of resources for JAX (https://github.com/google/jax); a minimal JAX example appears after this list.
The purpose of this repo is to make it easy to get started with JAX, Flax, and Haiku. It contains my "Machine Learning with JAX" series of tutorials (YouTube videos and Jupyter Notebooks) as well as the content I found useful while learning about the JAX ecosystem.
Elegant and Performant Deep Learning
GoMLX: An Accelerated Machine Learning Framework For Go
ALBERT model pretraining and fine-tuning using TensorFlow 2.0
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python ⚡ (see the mpi4jax sketch after this list)
S + Autograd + XLA :: S-parameter based frequency-domain circuit simulations and optimizations using JAX.
Julia on TPUs
Official pool repository for Scala (XLA), the cryptocurrency
Pretrained-model wrappers based on TensorFlow 1.x, supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision; flexible training, validation, and prediction.
TensorFlow wheels built against the latest CUDA/cuDNN with performance flags enabled: SSE, AVX, FMA, and XLA (see the tf.function sketch after this list)
XLA integration of Open Neural Network Exchange (ONNX)
PyTorch distributed training acceleration framework
Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offloading
A fast transfer-matrix method written in JAX for modelling optical multilayer thin films
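
A minimal PyTorch/XLA sketch, assuming the torch_xla package is installed and an XLA device (e.g. a TPU VM) is available; the model and tensor shapes are illustrative only:

    import torch
    import torch_xla.core.xla_model as xm

    # Acquire the XLA device (a TPU core when running on a TPU VM).
    device = xm.xla_device()

    # Build an ordinary PyTorch model and move it to the XLA device.
    model = torch.nn.Linear(10, 2).to(device)
    x = torch.randn(4, 10, device=device)
    y = model(x)

    # XLA tensors are lazy; mark_step() cuts the traced graph and executes it.
    xm.mark_step()
    print(y.cpu())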
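
A minimal JAX example of the XLA-compiled jit/grad workflow that the curated lists above cover; the quadratic loss is a stand-in objective:

    import jax
    import jax.numpy as jnp

    def loss(w, x, y):
        # Mean squared error of a linear model; placeholder objective.
        return jnp.mean((x @ w - y) ** 2)

    # grad builds the gradient function; jit compiles it through XLA.
    grad_fn = jax.jit(jax.grad(loss))

    w = jnp.zeros(3)
    x = jnp.ones((5, 3))
    y = jnp.ones(5)
    print(grad_fn(w, x, y))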
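
A sketch of zero-copy MPI communication with mpi4jax, assuming mpi4py and mpi4jax are installed and the script is launched under mpirun; the (result, token) unpacking reflects mpi4jax's token-based API as I know it, so treat it as an assumption against your installed version:

    from mpi4py import MPI
    import jax
    import jax.numpy as jnp
    import mpi4jax

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    @jax.jit
    def rank_sum(arr):
        # allreduce is traceable, so it can live inside a jitted function.
        # The (result, token) unpacking is assumed from mpi4jax's token API.
        summed, _ = mpi4jax.allreduce(arr + rank, op=MPI.SUM, comm=comm)
        return summed

    # Run with e.g.: mpirun -n 4 python this_script.py
    print(rank_sum(jnp.zeros(3)))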
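
For the TensorFlow entries, XLA compilation is opt-in per function via jit_compile; a minimal sketch, with an illustrative dense layer:

    import tensorflow as tf

    @tf.function(jit_compile=True)  # ask TensorFlow to compile this function with XLA
    def dense_relu(x, w, b):
        return tf.nn.relu(tf.matmul(x, w) + b)

    x = tf.random.normal((4, 10))
    w = tf.random.normal((10, 2))
    b = tf.zeros((2,))
    print(dense_relu(x, w, b))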