Fast, scalable, and flexible optimization for machine learning
ReHLine is a powerful solver for large-scale empirical risk minimization (ERM) problems with convex piecewise linear-quadratic (PLQ) loss functions and linear constraints. Whether you're training SVMs, quantile regression models, or robust Huber regressors, ReHLine delivers exceptional performance with provable linear convergence guarantees.
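Formally, ReHLine targets problems of the composite ReLU-ReHU form below. This is a lightly condensed transcription of the formulation in the NeurIPS 2023 paper cited at the end; signs and scalings may differ slightly from the interface docs:

```math
\min_{\boldsymbol{\beta} \in \mathbb{R}^d} \ \sum_{i=1}^n \sum_{l=1}^L \mathrm{ReLU}\big(u_{li}\,\mathbf{x}_i^\top \boldsymbol{\beta} + v_{li}\big) \;+\; \sum_{i=1}^n \sum_{h=1}^H \mathrm{ReHU}_{\tau_{hi}}\big(s_{hi}\,\mathbf{x}_i^\top \boldsymbol{\beta} + t_{hi}\big) \;+\; \tfrac{1}{2}\lVert \boldsymbol{\beta} \rVert_2^2 \quad \text{s.t.}\quad \mathbf{A}\boldsymbol{\beta} + \mathbf{b} \ge \mathbf{0},
```

where $\mathrm{ReLU}(z)=\max(z,0)$ and the rectified Huber $\mathrm{ReHU}_{\tau}(z)$ equals $0$ for $z\le 0$, $z^2/2$ for $0<z\le\tau$, and $\tau(z-\tau/2)$ for $z>\tau$. Any convex PLQ loss admits such a decomposition.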
ReHLine excels at solving a wide range of machine learning problems; here are some representative examples:
| Problem | Description | Applications |
|---|---|---|
| Support Vector Machines (SVM) | Binary and multi-class classification with hinge loss | Text classification, image recognition |
| Fair SVM | SVM with fairness constraints for equitable predictions | Bias-aware hiring, lending decisions |
| Quantile Regression | Estimate conditional quantiles with check loss | Risk assessment, forecasting |
| Huber Regression | Robust regression resistant to outliers | Noisy data analysis, anomaly detection |
| Elastic Net | Combined L1/L2 regularization for feature selection | High-dimensional genomics, finance |
💡 Beyond Pre-built Losses and Constraints: ReHLine is a general-purpose solver that can optimize any convex piecewise linear-quadratic (PLQ) loss with arbitrary linear constraints. The examples above are just common use cases; you can define custom loss functions and constraints for your specific problem. See the detailed documentation for each interface: Python | R | C++
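To make the "any PLQ loss" claim concrete, the NumPy snippet below (independent of ReHLine's API; the helper names are ours) verifies that the check loss decomposes into two ReLUs and the Huber loss into two ReHUs, the exact atoms the solver operates on per the cited paper:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def rehu(z, tau):
    # rectified Huber: 0 for z <= 0, z**2/2 for 0 < z <= tau, tau*(z - tau/2) above
    zc = np.clip(z, 0.0, tau)
    return zc * z - 0.5 * zc ** 2

r = np.linspace(-3.0, 3.0, 101)  # residuals y - x'beta

# check (pinball) loss at quantile q == ReLU(q*r) + ReLU((q-1)*r)
q = 0.3
check_direct = np.where(r >= 0, q * r, (q - 1.0) * r)
check_plq = relu(q * r) + relu((q - 1.0) * r)
assert np.allclose(check_direct, check_plq)

# Huber loss with threshold tau == ReHU_tau(r) + ReHU_tau(-r)
tau = 1.0
huber_direct = np.where(np.abs(r) <= tau, 0.5 * r ** 2, tau * (np.abs(r) - 0.5 * tau))
huber_plq = rehu(r, tau) + rehu(-r, tau)
assert np.allclose(huber_direct, huber_plq)
```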
- 🚀 Blazing Fast: Per-iteration cost is linear in the sample size, scaling efficiently to millions of samples
- 🎯 Versatile: Supports any convex piecewise linear-quadratic loss (hinge, check, Huber, and more)
- 🔒 Constrained Optimization: Handles linear equality and inequality constraints with ease
- 🔌 Scikit-Learn Compatible: Drop-in replacement for standard ML workflows; integrates seamlessly with GridSearchCV, Pipeline, and other scikit-learn tools (see the sketch after this list)
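For instance, the estimator used in the example later in this README can be dropped into a grid search. This is a sketch that assumes `plq_Ridge_Classifier` follows the standard scikit-learn estimator contract, as the compatibility claim implies; the grid key follows `make_pipeline`'s naming rule (lowercased class name):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from rehline import plq_Ridge_Classifier

X, y = make_classification(n_samples=500, random_state=0)

# scale features, then fit the ReHLine SVM inside a standard pipeline
pipe = make_pipeline(StandardScaler(),
                     plq_Ridge_Classifier(loss={'name': 'svm'}))

# tune the regularization strength C exactly as with any sklearn estimator
grid = GridSearchCV(pipe,
                    param_grid={'plq_ridge_classifier__C': [0.1, 1.0, 10.0]},
                    cv=3)
grid.fit(X, y)
print(grid.best_params_)
```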
ReHLine provides native implementations across multiple platforms:
| Interface | Repository | Installation |
|---|---|---|
| 🐍 Python | ReHLine-python | pip install rehline |
| 📊 R | ReHLine-r | install.packages("rehline") |
| ⚡ C++ | ReHLine-cpp | Build from source |
All interfaces leverage the same highly optimized C++ core for maximum performance.
## Example: SVM Classification with a Fairness Constraint
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from rehline import plq_Ridge_Classifier

# generate a synthetic binary classification dataset
X, y = make_classification(n_samples=2000, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

# SVM (hinge loss) with a fairness constraint on the first feature
fclf = plq_Ridge_Classifier(C=1.0,
                            loss={'name': 'svm'},
                            constraint=[{'name': 'fair',
                                         'sen_idx': [0],  # column index of the sensitive feature
                                         'tol_sen': 0.1,  # tolerance for the fairness constraint
                                         }],
                            max_iter=50000)
fclf.fit(X_train, y_train)

# evaluate on the held-out test set
y_pred = fclf.predict(X_test)
print("test accuracy:", accuracy_score(y_test, y_pred))
```
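As a quick sanity check on the constraint, one can compare positive-prediction rates across the two groups induced by the sensitive feature. This is an illustrative sketch only: it binarizes column 0 of `X_test` (the column passed as `sen_idx`) at its median, and the reported gap is a simple demographic-parity proxy, not the solver's internal constraint measure.

```python
# split test points by the (binarized) sensitive feature from `sen_idx`
sen = X_test[:, 0] > np.median(X_test[:, 0])

# positive-prediction rate within each sensitive group
rate_high = y_pred[sen].mean()
rate_low = y_pred[~sen].mean()

# demographic-parity gap; compare informally against `tol_sen`
print(f"positive-rate gap: {abs(rate_high - rate_low):.3f}")
```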
## Benchmarks

ReHLine delivers exceptional speed compared to state-of-the-art solvers. Here are speed-up factors on real-world datasets:

| Task | vs. ECOS | vs. MOSEK | vs. SCS | vs. Specialized Solvers |
|---|---|---|---|---|
| SVM | 415× faster | ✗ (failed) | 340× faster | 4.5× vs. LIBLINEAR |
| Fair SVM | 273× faster | 100× faster | 252× faster | ✗ vs. DCCP (failed) |
| Quantile Regression | 2843× faster | ✗ (failed) | ✗ (failed) | — |
| Huber Regression | ✗ (failed) | 452× faster | ✗ (failed) | 2.4× vs. hqreg |
| Smoothed SVM | — | — | — | 1.6-2.3× vs. SAGA/SAG/SDCA/SVRG |

Note: "✗" indicates the competing solver failed to produce a valid solution or exceeded time limits; "—" indicates no comparison was run. Results are from the NeurIPS 2023 paper.
All benchmarks are reproducible via benchopt at our ReHLine-benchmark repository.
| Problem | Benchmark Code | Interactive Results |
|---|---|---|
| SVM | Code | 📈 View |
| Smoothed SVM | Code | 📈 View |
| Fair SVM | Code | 📈 View |
| Quantile Regression | Code | 📈 View |
| Huber Regression | Code | 📈 View |
## Citation

If you use ReHLine in your research, please cite our NeurIPS 2023 paper:
```bibtex
@inproceedings{dai2023rehline,
  title={ReHLine: Regularized Composite ReLU-ReHU Loss Minimization with Linear Computation and Linear Convergence},
  author={Dai, Ben and Qiu, Yixuan},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023}
}
```