Compressed Sensing with Very Sparse Gaussian Random Projections
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:617-625, 2015.
Abstract
We study the use of \emph{very sparse random projections} \cite{Proc:Li_Hastie_Church_KDD06,Proc:Li_KDD07} for compressed sensing (sparse signal recovery) when the nonzero coordinates of signals can be either positive or negative. In our setting, the entries of a Gaussian design matrix are randomly sparsified so that only a very small fraction of the entries are nonzero. Our proposed decoding algorithm is simple and efficient in that its major cost is one linear scan of the coordinates. Using our proposed \textbf{\emph{tie estimator}}, we are able to recover a $K$-sparse signal of length $N$ using $1.551\, eK \log K/\delta$ measurements (where $\delta \leq 0.05$ is the confidence), in one scan. The practical performance of our method, however, can be substantially better than this bound. The Gaussian design assumption is not essential, although it simplifies the analysis.

Prior studies have shown that existing one-scan (or roughly one-scan) recovery algorithms using sparse matrices require substantially more measurements (e.g., an order of magnitude more) than L1 decoding by linear programming when the nonzero coordinates of signals can be either negative or positive. In this paper, following a well-known experimental setup \cite{Url:wiki_sparse}, we show that, at the same number of measurements, the recovery accuracies of our proposed method are similar to those of standard L1 decoding.
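To make the measurement setting concrete, the short Python sketch below builds a randomly sparsified Gaussian design matrix (each entry is nonzero with a small probability and, when nonzero, is drawn from a standard Gaussian) and evaluates the measurement count $1.551\, eK \log K/\delta$ stated above. The sparsity parameter `gamma`, the function names, and the example signal are illustrative assumptions, not the paper's exact construction, and the decoding step (the tie estimator) is not shown since the abstract does not describe it.

```python
import numpy as np

def very_sparse_gaussian_design(M, N, gamma, rng=None):
    """Gaussian design matrix whose entries are randomly sparsified:
    each entry is kept independently with probability gamma and drawn
    from N(0, 1); otherwise it is set to zero."""
    rng = np.random.default_rng(rng)
    mask = rng.random((M, N)) < gamma          # keep roughly a gamma fraction of entries
    return mask * rng.standard_normal((M, N))  # sparsified Gaussian entries

def measurement_bound(K, delta=0.05):
    """Number of measurements suggested by the abstract's bound,
    M = 1.551 * e * K * log(K / delta), rounded up."""
    return int(np.ceil(1.551 * np.e * K * np.log(K / delta)))

# Illustrative use: a K-sparse signal whose nonzero coordinates can be
# positive or negative, measured as y = S x.
N, K, gamma = 10_000, 20, 0.01
M = measurement_bound(K)                       # about 506 for K = 20, delta = 0.05
rng = np.random.default_rng(0)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=K) * (1.0 + rng.random(K))
S = very_sparse_gaussian_design(M, N, gamma, rng=rng)
y = S @ x                                      # one measurement per row of S
```

With `gamma = 0.01`, only about 1% of the design matrix entries are nonzero, which is the regime the abstract refers to as "very sparse"; the recovery algorithm itself operates on `y`, `S`, and the known sparsification pattern.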