NetKet is an open-source project delivering cutting-edge methods for the study of many-body quantum systems with artificial neural networks and machine learning techniques. It is a Python library built on C++ primitives.
- Homepage: https://netket.org
- Citing: https://www.netket.org/citing
- Documentation: https://netket.org/documentation
- Tutorials: https://www.netket.org/tutorials
- Examples: https://github.com/netket/netket/tree/master/Examples
- Source code: https://github.com/netket/netket
NetKet supports macOS and Linux. The recommended way to install it in a non-conda Python environment is:

```bash
pip install netket[all]
```
The `[all]` after `netket` will install all of NetKet's extra dependencies, which are documented below. We recommend installing NetKet with all its extras.
However, if you do not have a working MPI compiler in your PATH, this installation will most likely fail, because it will attempt to install `mpi4py`, which enables MPI support in NetKet.
If you are only starting to discover NetKet and won't be running extended simulations, you can forgo MPI by installing NetKet with the command:

```bash
pip install netket  # or pip install netket[jax]
```
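To verify that the installation succeeded, you can try importing the package and printing its version. This is a minimal check, assuming NetKet exposes the conventional `__version__` attribute:

```bash
python -c "import netket; print(netket.__version__)"
```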
NetKet is also available on conda-forge. To install NetKet in a conda environment you can use:

```bash
conda install conda-forge::netket
```
Conda ships pre-built binaries for recent versions of Python by default. The default BLAS library is OpenBLAS, but MKL can be enforced. The conda package is linked against Anaconda's `mpi4py`, so we do not recommend this installation method on computer clusters with a custom MPI distribution.
When installing NetKet with pip, you can pass the following extra variants in square brackets. You can install several of them by separating them with a comma.
- `[dev]`: installs development-related dependencies such as black, pytest and testing dependencies
- `[mpi]`: installs `mpi4py` to enable multi-process parallelism. Requires a working MPI compiler in your PATH.
- `[jax]`: installs `jax` to enable jax-based neural networks
- `[all]`: installs `mpi`, `jax` and `mpi4jax`, which is required to use MPI with jax machines
Since version 3, in addition to the built-in machines, you can also use Jax and PyTorch to define your custom neural networks. Depending on the library you use to define your machines, distributed computing through MPI may or may not be supported. Please see below:
- netket: distributed computing through MPI can be enabled by installing the package `mpi4py` through pip or conda.
- jax: distributed computing through MPI is supported natively only if you don't use Stochastic Reconfiguration (SR). If you need SR, you must install `mpi4jax`. Please note that we advise installing `mpi4jax` with the same tool (conda or pip) with which you installed NetKet.
- pytorch: distributed computing through MPI is enabled if the package `mpi4py` is installed. Stochastic Reconfiguration (SR) cannot be used when MPI is enabled.
To check whether MPI support is enabled, check the following flags:

```python
# For standard MPI support
>>> netket.utils.mpi_available
True

# For faster MPI support with jax and to enable SR + MPI with Jax machines
>>> netket.utils.mpi4jax_available
True
```
- Graphs
  - Built-in Graphs
    - Hypercube
    - General Lattice with arbitrary number of atoms per unit cell
  - Custom Graphs
    - Any Graph With Given Adjacency Matrix
    - Any Graph With Given Edges
  - Symmetries
    - Automorphisms: pre-computed in built-in graphs, available through iGraph for custom graphs
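For instance, a built-in hypercube graph can be constructed in a couple of lines. This is a minimal sketch following the NetKet 2.x Python API; the names shown here may differ in version 3:

```python
import netket as nk

# A 20-site chain (a 1D hypercube) with periodic boundary conditions
g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True)
print(g.n_sites)  # number of vertices in the graph
```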
- Quantum Operators
  - Built-in Hamiltonians
    - Transverse-field Ising
    - Heisenberg
    - Bose-Hubbard
  - Custom Operators
    - Any k-local Hamiltonian
    - General k-local Operator defined on Graphs
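As an example, the transverse-field Ising Hamiltonian on the graph above can be built as follows (again a sketch against the NetKet 2.x API):

```python
import netket as nk

g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True)
hi = nk.hilbert.Spin(s=0.5, graph=g)       # spin-1/2 degrees of freedom on the graph
ha = nk.operator.Ising(h=1.0, hilbert=hi)  # transverse-field Ising with field strength h
```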
- Variational Monte Carlo
  - Stochastic Learning Methods for Ground-State Problems
    - Gradient Descent
    - Stochastic Reconfiguration Method
      - Direct Solver
      - Iterative Solver for Large Number of Parameters
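Putting the pieces together, a ground-state optimization with SR looks roughly like the following. This is a sketch based on the NetKet 2.x tutorial interface; argument names such as `diag_shift`, `method='Sr'` and `output_prefix` are taken from that version and may have changed since:

```python
import netket as nk

g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True)
hi = nk.hilbert.Spin(s=0.5, graph=g)
ha = nk.operator.Ising(h=1.0, hilbert=hi)

ma = nk.machine.RbmSpin(hilbert=hi, alpha=1)      # RBM ansatz
ma.init_random_parameters(seed=1234, sigma=0.01)

sa = nk.sampler.MetropolisLocal(machine=ma)       # local Metropolis sampling
op = nk.optimizer.Sgd(learning_rate=0.1)          # stochastic gradient descent

# Stochastic Reconfiguration with a regularization shift on the S-matrix diagonal
gs = nk.variational.Vmc(hamiltonian=ha, sampler=sa, optimizer=op,
                        n_samples=1000, diag_shift=0.1, method='Sr')
gs.run(output_prefix='test', n_iter=300)
```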
- Exact Diagonalization
  - Full Solver
  - Lanczos Solver
  - Imaginary-Time Dynamics
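For small systems, the same Hamiltonian can be diagonalized exactly; a sketch assuming the NetKet 2.x `nk.exact.lanczos_ed` helper:

```python
import netket as nk

g = nk.graph.Hypercube(length=10, n_dim=1, pbc=True)
hi = nk.hilbert.Spin(s=0.5, graph=g)
ha = nk.operator.Ising(h=1.0, hilbert=hi)

# Lanczos solver: finds the lowest eigenvalue(s) without building the dense matrix
res = nk.exact.lanczos_ed(ha, first_n=1, compute_eigenvectors=False)
print(res.eigenvalues[0])  # ground-state energy
```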
- Supervised Learning
  - Supervised overlap optimization from given data
- Neural-Network Quantum State Tomography
  - Using arbitrary k-local measurement basis
- Optimizers
  - Stochastic Gradient Descent
  - AdaMax, AdaDelta, AdaGrad, AMSGrad
  - RMSProp
  - Momentum
- Machines
  - Restricted Boltzmann Machines
    - Standard
    - For Custom Local Hilbert Spaces
    - With Permutation Symmetry Using Graph Isomorphisms
  - Feed-Forward Networks
    - For Custom Local Hilbert Spaces
    - Fully connected layer
    - Convnet layer for arbitrary underlying graph
    - Any Layer Satisfying Prototypes in AbstractLayer [extending C++ code]
  - Jastrow States
    - Standard
    - With Permutation Symmetry Using Graph Isomorphisms
  - Matrix Product States
    - MPS
    - Periodic MPS
  - Custom Machines
    - Any Machine Satisfying Prototypes in AbstractMachine [extending C++ code]
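For instance, the standard and symmetric RBMs for the spin system above can be instantiated as follows (a sketch using the NetKet 2.x `machine` module, where `alpha` is the hidden-to-visible unit density):

```python
import netket as nk

g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True)
hi = nk.hilbert.Spin(s=0.5, graph=g)

ma = nk.machine.RbmSpin(hilbert=hi, alpha=1)      # standard RBM
ma.init_random_parameters(seed=1234, sigma=0.01)  # Gaussian-initialized weights
print(ma.n_par)                                   # number of variational parameters

ma_symm = nk.machine.RbmSpinSymm(hilbert=hi, alpha=1)  # RBM with permutation symmetry
```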
- Observables
  - Custom Observables
    - Any k-local Operator
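A custom k-local observable can be assembled from dense matrices acting on chosen sites; a sketch assuming the NetKet 2.x `LocalOperator` constructor, which takes a list of matrices and the sites they act on:

```python
import netket as nk

g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True)
hi = nk.hilbert.Spin(s=0.5, graph=g)

# Pauli-X measured on site 0, expressed as a 1-local operator
sigma_x = [[0, 1], [1, 0]]
obs = nk.operator.LocalOperator(hi, [sigma_x], [[0]])
```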
- Sampling
  - Local Metropolis Moves
    - Local Hilbert Space Sampling
  - Hamiltonian Moves
    - Automatic Moves with Hamiltonian Symmetry
  - Custom Sampling
    - Any k-local Stochastic Operator can be used to do Metropolis Sampling
  - Exact Sampler for small systems
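Two of the built-in samplers, sketched against the NetKet 2.x API (the Hamiltonian-move sampler proposes transitions generated by the off-diagonal elements of the operator it is given):

```python
import netket as nk

g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True)
hi = nk.hilbert.Spin(s=0.5, graph=g)
ha = nk.operator.Ising(h=1.0, hilbert=hi)
ma = nk.machine.RbmSpin(hilbert=hi, alpha=1)
ma.init_random_parameters(seed=1234, sigma=0.01)

sa_local = nk.sampler.MetropolisLocal(machine=ma)                        # single-spin local moves
sa_hamil = nk.sampler.MetropolisHamiltonian(machine=ma, hamiltonian=ha)  # Hamiltonian moves
```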
- Statistics
  - Automatic Estimate of Correlation Times
- Interface
  - Python Library
  - JSON output
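Because the drivers log their output as JSON, results can be post-processed with the standard library alone. A minimal sketch, assuming the `output_prefix='test'` run above produced a `test.log` file in the NetKet 2.x format, where each entry of "Output" holds one iteration's estimates:

```python
import json

# Load the optimization log written by gs.run(output_prefix='test', ...)
with open("test.log") as f:
    data = json.load(f)

# Extract the mean energy estimate at every iteration
energies = [step["Energy"]["Mean"] for step in data["Output"]]
print(energies[-1])  # final energy estimate
```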