```julia
import Pkg
Pkg.add("Lux")
```

> **Tip:** To use Lux online, use Google Colab. The Julia runtime comes pre-installed with Lux and Reactant!
The Lux ecosystem is split across several packages:

- 📦 Lux.jl
  - 📦 LuxLib.jl
  - 📦 LuxCore.jl
  - 📦 MLDataDevices.jl
  - 📦 WeightInitializers.jl
  - 📦 LuxTestUtils.jl
  - 📦 LuxCUDA.jl
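Most of these ship as dependencies of Lux.jl itself; LuxCUDA.jl is the main exception and is only needed for NVIDIA GPU support (AMDGPU.jl, Metal.jl, and oneAPI.jl play the same role for other vendors, as the quickstart below also notes). A minimal sketch of enabling CUDA support:

```julia
import Pkg
Pkg.add("LuxCUDA")  # only needed for NVIDIA GPUs

using Lux, LuxCUDA

dev = gpu_device()  # picks up the CUDA device once LuxCUDA is loaded and functional
```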
Currently, benchmarks are scattered across a few places:
- For comparison with other Julia packages like CUDA.jl, take a look at Lux.jl/perf.
- https://enzymead.github.io/Enzyme-JAX/benchmarks/ highlights performance of EnzymeJAX (backend for Reactant.jl) against JAX.
- https://enzymead.github.io/Reactant.jl/benchmarks/ highlights performance of Reactant.jl against default XLA and base Julia compilation.
A quick example using Reactant and Enzyme:

```julia
using Lux, Random, Optimisers, Reactant, Enzyme

rng = Random.default_rng()
Random.seed!(rng, 0)

model = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))

dev = reactant_device()

ps, st = Lux.setup(rng, model) |> dev
x = rand(rng, Float32, 128, 2) |> dev

# We need to compile the model before we can use it.
model_forward = @compile model(x, ps, Lux.testmode(st))
model_forward(x, ps, Lux.testmode(st))

# Gradients can be computed using Enzyme
@jit Enzyme.gradient(Reverse, sum ∘ first ∘ Lux.apply, Const(model), x, ps, Const(st))

# All of this can be automated using the TrainState API
train_state = Training.TrainState(model, ps, st, Adam(0.001f0))

gs, loss, stats, train_state = Training.single_train_step!(
    AutoEnzyme(), MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))), train_state
)
```

The same model can also be run without Reactant, using Zygote for automatic differentiation and the standard GPU packages:

```julia
using Lux, Random, Optimisers, Zygote
# using LuxCUDA, AMDGPU, Metal, oneAPI # Optional packages for GPU support
# Seeding
rng = Random.default_rng()
Random.seed!(rng, 0)
# Construct the layer
model = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))
# Get the device determined by Lux
dev = gpu_device()
# Parameter and State Variables
ps, st = Lux.setup(rng, model) |> dev
# Dummy Input
x = rand(rng, Float32, 128, 2) |> dev
# Run the model
y, st = Lux.apply(model, x, ps, st)
# Gradients
## First construct a TrainState
train_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))
## We can compute the gradients using Training.compute_gradients
gs, loss, stats, train_state = Lux.Training.compute_gradients(AutoZygote(), MSELoss(),
(x, dev(rand(rng, Float32, 10, 2))), train_state)
## Optimization
train_state = Training.apply_gradients!(train_state, gs) # or Training.apply_gradients (no `!` at the end)
# Both these steps can be combined into a single call
gs, loss, stats, train_state = Training.single_train_step!(AutoZygote(), MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))), train_state)
```

Look in the examples directory for self-contained usage examples. The documentation has examples sorted into proper categories.
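As a further illustration of the Training API shown above, the `single_train_step!` call extends naturally to a full training loop. A minimal sketch, reusing `model`, `train_state`, `x`, and `dev` from the snippet above; the epoch count, logging, and synthetic targets `y` are illustrative assumptions, not part of the original example:

```julia
# Synthetic 10-dimensional targets matching the model output; for illustration only.
y = dev(rand(rng, Float32, 10, 2))

function train_loop(train_state, x, y; epochs=100)
    for epoch in 1:epochs
        _, loss, _, train_state = Training.single_train_step!(
            AutoZygote(), MSELoss(), (x, y), train_state
        )
        epoch % 25 == 0 && @info "Epoch $epoch" loss
    end
    return train_state
end

train_state = train_loop(train_state, x, y)
```

Wrapping the loop in a function keeps the reassignment of `train_state` in a local scope and gives the compiler a chance to specialize the hot loop.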
For usage-related questions, please use GitHub Discussions, which allows questions and answers to be indexed. To report bugs, use GitHub Issues, or better yet, send in a pull request.
If you found this library to be useful in academic work, then please cite:
```bibtex
@software{pal2023lux,
  author    = {Pal, Avik},
  title     = {{Lux: Explicit Parameterization of Deep Neural Networks in Julia}},
  month     = apr,
  year      = 2023,
  note      = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version   = {v1.4.2},
  doi       = {10.5281/zenodo.7808903},
  url       = {https://doi.org/10.5281/zenodo.7808903},
  swhid     = {swh:1:dir:1a304ec3243961314a1cc7c1481a31c4386c4a34;origin=https://doi.org/10.5281/zenodo.7808903;visit=swh:1:snp:e2bbe43b14bde47c4ddf7e637eb7fc7bd10db8c7;anchor=swh:1:rel:2c0c0ff927e7bfe8fc8bc43fd553ab392a6eb403;path=/}
}

@thesis{pal2023efficient,
  title  = {{On Efficient Training \& Inference of Neural Differential Equations}},
  author = {Pal, Avik},
  year   = {2023},
  school = {Massachusetts Institute of Technology}
}
```

Also consider starring our GitHub repo.
This section is somewhat incomplete; you can contribute by helping us finish it 🙂.
> **Note:** Pin JuliaFormatter to v1 until upstream issues with v2 are resolved.
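One way to do this is sketched below; it installs a 1.x release into the active environment and pins it, so adapt it to however you manage your environments:

```julia
using Pkg

# Install the latest JuliaFormatter release in the 1.x series and pin it so
# that `Pkg.update()` does not move it to v2.
Pkg.add(name="JuliaFormatter", version="1")
Pkg.pin("JuliaFormatter")
```

Alternatively, a `[compat]` entry of `JuliaFormatter = "1"` in the relevant Project.toml keeps the resolver on the 1.x series.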
```julia
using JuliaFormatter
format(".")
```

Running the full Lux.jl test suite takes a long time; here's how to test only a portion of the code.
Tests are organized into directories, where each directory contains test files with `@testset`
blocks. For example, tests for `SkipConnection` are in `test/core_layers/containers_tests.jl`.
The easiest way to run a specific test is to directly activate the test directory and include the test file:
```julia
# From the Lux.jl root directory
using Pkg
Pkg.activate("test")

# Run a specific test file
include("test/core_layers/containers_tests.jl")
```

This approach allows you to quickly iterate on specific tests without running the entire test suite.
See ParallelTestRunners.jl for details on executing specific groups of tests.
To run a specific group of tests via the test runner, you can pass the directory name as a positional argument:
```bash
julia --project -e 'using Pkg; Pkg.test(test_args=["core_layers"])'
```

To run the full test suite:
```bash
julia --project -e 'using Pkg; Pkg.test()'
```

Lux builds a number of tutorials as part of its documentation. This can be time-consuming and
requires a lot of compute. To speed up the build, you can set the environment variable
`LUX_DOCS_DRAFT_BUILD=true`:

```bash
LUX_DOCS_DRAFT_BUILD=true julia --threads=auto --startup=no --project=docs docs/make.jl
```

When writing tutorials (anything under `examples/`), include the tutorial in
`docs/tutorials.jl`. If the tutorial is time-consuming, set `should_run` to `false`.
Additionally, for a new page to be included in the navigation and sidebar, it needs to be
added to `docs/src/.vitepress/config.mts`, specifically under `sidebar` and/or `nav`,
depending on the type of page.
To preview the docs locally with LiveServer, check out the DocumenterVitepress.jl documentation.