L1 Regression with Lewis Weights Subsampling
arXiv preprint arXiv:2105.09433, 2021
We consider the problem of finding an approximate solution to $\ell_1$ regression while only observing a small number of labels. Given an $n \times d$ unlabeled data matrix $X$, we must choose a small set of $m \ll n$ rows to observe the labels of, then output an estimate $\widehat{\beta}$ whose error on the original problem is within a $1+\varepsilon$ factor of optimal. We show that sampling from $X$ according to its Lewis weights and outputting the empirical minimizer succeeds with probability $1-\delta$ for $m > O(\frac{1}{\varepsilon^2} d \log \frac{d}{\varepsilon \delta})$. This is analogous to the performance of sampling according to leverage scores for $\ell_2$ regression, but with exponentially better dependence on $\delta$. We also give a corresponding lower bound of $\Omega(\frac{d}{\varepsilon^2} + (d + \frac{1}{\varepsilon^2}) \log \frac{1}{\delta})$.
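The sampling distribution is built from the $\ell_1$ Lewis weights of $X$, which satisfy the fixed point $w_i = \left(x_i^\top (X^\top W^{-1} X)^{-1} x_i\right)^{1/2}$ where $W = \mathrm{diag}(w)$. A minimal NumPy sketch of the standard fixed-point iteration of Cohen and Peng (which contracts for $\ell_p$, $p < 4$) is below; the iteration count and the clipping of numerical noise are illustrative choices, not details from the paper.

```python
import numpy as np

def l1_lewis_weights(X, num_iters=30):
    """Approximate the l1 Lewis weights of X via the fixed-point iteration
    w_i <- ( x_i^T (X^T diag(w)^{-1} X)^{-1} x_i )^{1/2}."""
    n, d = X.shape
    w = np.ones(n)
    for _ in range(num_iters):
        M = X.T @ (X / w[:, None])                      # X^T diag(1/w) X
        Minv = np.linalg.inv(M)
        scores = np.einsum("ij,jk,ik->i", X, Minv, X)   # x_i^T M^{-1} x_i
        w = np.sqrt(np.clip(scores, 0.0, None))         # guard tiny negatives
    return w
```

The weights sum to $d$, so in expectation each row's sampling probability $w_i / \sum_j w_j$ scales its Lewis weight by $1/d$.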
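To make the procedure concrete, here is a hedged end-to-end sketch: sample $m$ rows with probability proportional to their Lewis weights, query only those labels, and return the importance-weighted empirical $\ell_1$ minimizer, cast as a linear program. The `label_oracle` interface and the $1/(m p_i)$ reweighting convention are illustrative assumptions rather than notation from the paper; the sketch reuses `l1_lewis_weights` from above.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_regression(A, y, s):
    """Solve min_beta sum_i s_i |a_i^T beta - y_i| as an LP:
    introduce slacks t >= |A beta - y| and minimize s^T t."""
    m, d = A.shape
    c = np.concatenate([np.zeros(d), s])
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])   #  A b - t <= y,  -A b - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * d + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

def lewis_subsample_solve(X, label_oracle, m, rng=None):
    """Sample m rows of X proportionally to their l1 Lewis weights,
    observe only those labels, and output the reweighted minimizer."""
    rng = np.random.default_rng(rng)
    w = l1_lewis_weights(X)
    p = w / w.sum()
    idx = rng.choice(len(p), size=m, replace=True, p=p)
    s = 1.0 / (m * p[idx])                 # importance weights
    y = label_oracle(idx)                  # query only the sampled labels
    return weighted_l1_regression(X[idx], y, s)
```

For example, with a fully labeled vector `y_full` held by the oracle, `lewis_subsample_solve(X, lambda idx: y_full[idx], m=500)` observes only 500 labels; the reweighting makes the subsampled objective an unbiased estimate of the full $\ell_1$ loss.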