Results can be consulted at https://benchopt.github.io/results/benchmark_bilevel.html
BenchOpt is a package to simplify, and make more transparent and reproducible, the comparison of optimization algorithms. This benchmark is dedicated to solvers for bilevel optimization:

$$\min_{x} f(x, z^*(x)) \quad \text{with} \quad z^*(x) \in \operatorname*{arg\,min}_{z} g(x, z),$$

where $g$ is the inner function and $f$ is the outer function.
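As a toy illustration of this nested structure (not one of the benchmark's problems), the sketch below minimizes the value function $h(x) = f(x, z^*(x))$ of a one-dimensional problem whose inner solution is known in closed form:

```python
# Toy bilevel problem (illustration only, not part of the benchmark):
#   inner:  g(x, z) = 0.5 * (z - x)**2          -> z*(x) = x
#   outer:  f(x, z) = 0.5 * (z - 1)**2 + 0.1 * x**2
# Value function: h(x) = f(x, z*(x)) = 0.5 * (x - 1)**2 + 0.1 * x**2

def z_star(x):
    # closed-form solution of the inner problem
    return x

def value_function(x):
    z = z_star(x)
    return 0.5 * (z - 1.0) ** 2 + 0.1 * x ** 2

def hypergradient(x):
    # chain rule through z*(x):  h'(x) = (x - 1) + 0.2 * x
    return (x - 1.0) + 0.2 * x

# plain gradient descent on the value function
x = 0.0
for _ in range(200):
    x -= 0.5 * hypergradient(x)

print(x)  # converges to the minimizer 1 / 1.2 of h
```

Bilevel solvers differ mainly in how they estimate this hypergradient when, unlike here, $z^*(x)$ has no closed form.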
This benchmark implements three bilevel optimization problems: quadratic problem, regularization selection, and data cleaning.
In this problem, the inner and the outer functions are quadratic functions defined on $\mathbb{R}^d \times \mathbb{R}^p$: the inner function $g(x, z)$ is an average of $n$ random quadratic forms in $(x, z)$, and the outer function $f(x, z)$ is an average of $m$ such forms, where $x$ is the outer variable and $z$ the inner one.

The quadratic terms are generated randomly such that the eigenvalues of the averaged Hessian $\nabla^2_{zz} g$ lie between mu_inner and L_inner_inner, the eigenvalues of $\nabla^2_{xx} g$ lie between mu_inner and L_inner_outer, the eigenvalues of $\nabla^2_{zz} f$ lie between mu_inner and L_outer_inner, and the eigenvalues of $\nabla^2_{xx} f$ lie between mu_inner and L_outer_outer.

The cross terms are generated such that the spectral norm of the averaged cross derivative $\nabla^2_{xz} g$ is at most L_cross_inner, and the spectral norm of $\nabla^2_{xz} f$ is at most L_cross_outer.
Note that in this setting, the solution of the inner problem is a linear system. As the full batch inner and outer functions can be computed efficiently with the average Hessian matrices, the value function is evaluated in closed form.
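A minimal numpy sketch of this remark, with illustrative matrix names and dimensions (not the benchmark's): the inner solution is obtained by a single linear solve, after which the value function is evaluated directly.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 3, 4  # illustrative dimensions of x and z

# Averaged inner quadratic in z: grad_z g(x, z) = A z + C^T x + a
M = rng.standard_normal((p, p))
A = M @ M.T + np.eye(p)          # symmetric positive definite Hessian in z
C = rng.standard_normal((d, p))  # averaged cross-derivative matrix
a = rng.standard_normal(p)

def z_star(x):
    # the inner problem reduces to the linear system A z = -(C^T x + a)
    return np.linalg.solve(A, -(C.T @ x + a))

# Averaged outer quadratic, keeping only the terms in z for brevity
N = rng.standard_normal((p, p))
F = N @ N.T                      # Hessian of f in z
f_lin = rng.standard_normal(p)

def value_function(x):
    # closed-form evaluation of h(x) = f(x, z*(x))
    z = z_star(x)
    return 0.5 * z @ F @ z + f_lin @ z

x = rng.standard_normal(d)
print(value_function(x))
```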
In this problem, the inner function $g$ is defined by

$$g(x, z) = \frac{1}{n}\sum_{i=1}^{n} \ell(d_i; z) + \mathcal{R}(x, z),$$

where the $d_i$ are training samples, $z$ are the parameters of a machine learning model, $\ell$ is the loss of this model, and $\mathcal{R}$ is a regularization term whose strength is parametrized by $x$. The outer function $f$ is the unregularized loss on unseen data,

$$f(x, z) = \frac{1}{m}\sum_{j=1}^{m} \ell(d'_j; z),$$

where the $d'_j$ are validation samples.
There are currently two datasets for this regularization selection problem.
Covtype - Homepage
This is a logistic regression problem, where the data have the form $(d_i, y_i)$ with $d_i \in \mathbb{R}^p$ the features and $y_i \in \{-1, 1\}$ the binary target.
Ijcnn1 - Homepage
This is a multiclass logistic regression problem, where the data is of the form $(d_i, y_i)$ with $d_i \in \mathbb{R}^p$ the features and $y_i \in \{1, \dots, k\}$ the class label.
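A hedged numpy sketch of this inner/outer pair for the binary case, assuming a per-coefficient ridge penalty with strengths exp(x_j) (a common choice; the benchmark's exact loss and regularization live in the datasets folder):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 50, 20, 5  # illustrative sizes
D_train = rng.standard_normal((n, p))
y_train = rng.choice([-1.0, 1.0], size=n)
D_val = rng.standard_normal((m, p))
y_val = rng.choice([-1.0, 1.0], size=m)

def logistic_loss(D, y, z):
    # mean of ell(d_i; z) = log(1 + exp(-y_i d_i^T z))
    return np.mean(np.log1p(np.exp(-y * (D @ z))))

def inner(x, z):
    # training loss plus per-coefficient penalty of strength exp(x_j)
    return logistic_loss(D_train, y_train, z) + 0.5 * np.sum(np.exp(x) * z ** 2)

def outer(x, z):
    # unregularized loss on held-out validation data
    return logistic_loss(D_val, y_val, z)

x = np.zeros(p)  # one regularization strength per coefficient
z = np.zeros(p)
print(inner(x, z), outer(x, z))  # both equal log(2) at z = 0
```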
This problem was first introduced by Franceschi et al. (2017).
In this problem, the data is the MNIST dataset.
The training set has been corrupted: with a probability $p$, the label of each training image is replaced by a random label, while the validation set is left uncorrupted. The inner function is the weighted, regularized training loss

$$g(x, z) = \frac{1}{n}\sum_{i=1}^{n} \sigma(x_i)\,\ell(d_i; z) + C\|z\|^2,$$

where $\sigma$ is the sigmoid function, $\ell$ is the loss of a multinomial logistic regression model with parameters $z$, and $C$ is a small regularization constant. The outer function is the unweighted loss on the validation set,

$$f(x, z) = \frac{1}{m}\sum_{j=1}^{m} \ell(d'_j; z),$$

where the $d'_j$ are the uncorrupted validation samples. The aim is thus to learn per-sample weights $\sigma(x_i)$ that discard the corrupted training samples.
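The corruption step and the per-sample weighting can be sketched as below; the per-sample losses are stand-ins and the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_classes = 100, 10
y_clean = rng.integers(0, n_classes, size=n)

# Corrupt the training labels: with probability p, replace the label
# by a uniformly random one (which may coincide with the original).
p = 0.5
mask = rng.random(n) < p
y_train = y_clean.copy()
y_train[mask] = rng.integers(0, n_classes, size=mask.sum())

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def weighted_loss(x, sample_losses):
    # inner objective (regularization omitted): sample i is weighted by
    # sigmoid(x_i), so the outer problem can learn to down-weight
    # corrupted training samples
    return np.mean(sigmoid(x) * sample_losses)

sample_losses = rng.random(n)  # stand-in for per-sample training losses
x = np.zeros(n)                # all weights start at sigmoid(0) = 0.5
print(weighted_loss(x, sample_losses))
```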
This benchmark can be run using the following commands:
$ pip install -U benchopt
$ git clone https://github.com/benchopt/benchmark_bilevel
$ benchopt run benchmark_bilevel
Apart from the problem, options can be passed to benchopt run to restrict the benchmark to some solvers or datasets, e.g.:
$ benchopt run benchmark_bilevel -s solver1 -d dataset2 --max-runs 10 --n-repetitions 10
You can also use config files to set the benchmark run:
$ benchopt run benchmark_bilevel --config config/X.yml
where X.yml is a config file. See https://benchopt.github.io/index.html#run-a-benchmark for an example of a config file. Note that this may launch a huge grid search. When available, you can instead use the file X_best_params.yml to launch an experiment with a single set of parameters for each solver.
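For illustration, such a config file could look like the sketch below. The solver and dataset names are hypothetical, and the exact set of supported YAML keys should be checked against the benchopt documentation:

```yaml
# hypothetical config/X.yml; names below are placeholders
objective:
  - Bilevel Optimization
dataset:
  - simulated
solver:
  - solver1
n-repetitions: 10
max-runs: 10
```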
Use benchopt run -h for more details about these options, or visit https://benchopt.github.io/api.html.
If you want to add a solver or a new problem, you are welcome to open an issue or submit a pull request!
Each solver derives from the benchopt.BaseSolver class in the solvers folder. The solvers are split between the stochastic JAX solvers and the others:

- Stochastic JAX solvers: these inherit from the StochasticJaxSolver class; see the detailed explanations in the template stochastic solver.
- Other solvers: see the detailed explanation in the Benchopt documentation. An example is provided in the template solver.
In this benchmark, each problem is defined by a Dataset class in the datasets folder. A template is provided.
If you use this benchmark in your research project, please cite the following paper:
@inproceedings{dagreou2022,
title = {A Framework for Bilevel Optimization That Enables Stochastic and Global Variance Reduction Algorithms},
booktitle = {Advances in {{Neural Information Processing Systems}} ({{NeurIPS}})},
author = {Dagr{\'e}ou, Mathieu and Ablin, Pierre and Vaiter, Samuel and Moreau, Thomas},
year = {2022}
}