
Sparse Detection via Multipath Matching

This document summarizes a research paper that proposes modifying multipath matching pursuit (MMP), a parallel greedy search algorithm, to effectively recover discrete and sparse input signals from an underdetermined system with integer constraints. It begins by introducing the system model and challenges of conventional detection methods. It then describes how sparse signal recovery algorithms can better handle this problem by converting it to an overdetermined system. Specifically, it discusses l1-minimization techniques and greedy pursuit algorithms like orthogonal matching pursuit (OMP) and MMP, the latter of which it modifies to incorporate integer constraints.



Sparse Detection with Integer Constraint Using Multipath Matching Pursuit

Byonghyo Shim, Suhyuk Kwon, Byungkwen Song

Abstract—In this paper, we consider a detection problem for the underdetermined system when the input vector is sparse and its elements are chosen from a set of finite alphabets. This scenario is common and embraces many current and future wireless communication systems. We show that a simple modification of multipath matching pursuit (MMP), a recently proposed parallel greedy search algorithm, is effective in recovering discrete and sparse input signals. We also show that the addition of cross validation (CV) to the MMP algorithm is effective in identifying the sparsity level of the input vector.

Index Terms—Compressed sensing, sparse signal recovery, greedy algorithms, multipath matching pursuit (MMP).

I. INTRODUCTION

A. System Model

The relationship between a transmit signal x and a received signal vector y in many wireless communication systems can be expressed as

  y = Hx + v    (1)

where H ∈ C^{m×n} is the system (channel) matrix, v ∼ N(0, σ_v^2 I) is the noise vector, and x is the input vector whose entries are chosen from a finite set of integers Ω. In this work, we are primarily concerned with the scenario where 1) the input signal x is sparse (i.e., the number of nonzero elements in the signal vector is small) and 2) the dimension of the observation vector y is smaller than that of the input vector (m < n). This system, which in essence is an underdetermined sparse system, is prevalent and embraces many modern wireless communication settings such as wireless sensor networks, source localization, multiuser detection, the downlink of massive MIMO, and relaying in ad hoc networks, to name just a few.

B. Conventional Detection

The traditional way of detecting the input signals is to use an estimation technique such as least squares (LS) or linear minimum mean square error (LMMSE) estimation followed by symbol slicing. Let x̃ and x̂ be the output of the LMMSE estimator and its sliced version, respectively; then

  x̃ = (H^T H + σ_v^2 I)^{-1} H^T y    (2)
  x̂ = Q_Ω(x̃)    (3)

where Q_Ω(·) is the integer quantizer (a.k.a. slicing function) which maps the input to the closest value in Ω (i.e., Q_Ω(z) = arg min_{ω∈Ω} ∥z − ω∥_2). Since the system is underdetermined, these approaches, which attempt to find a solution by inverting the entire covariance matrix, do not perform well in general. Since the nonzero elements of the signal vector are chosen from a set of finite integers, one can alternatively consider a detection strategy that exploits this structure (e.g., sphere decoding for maximum likelihood detection [1], [2]). However, even though it is desirable to exploit the discrete property of the transmit signals, the algorithm still operates on the underdetermined system model, so a fundamental performance gap from the overdetermined case is unavoidable.

C. Detection via Sparse Signal Recovery Algorithm

A better treatment of the problem at hand is to use a sparse recovery algorithm. In essence, the goal of a sparse recovery algorithm is to find the sparsest set of atoms (columns h_i of H) that best represents the observation y. In doing so, the system model is converted from underdetermined to overdetermined, and an accurate recovery of the original sparse signal becomes possible. While a dictionary (a collection of atoms) used in signal/image processing (e.g., Fourier, wavelet, or Haar dictionaries) can be tailored to the design purpose, the dictionary in communication systems mostly takes the form of the channel matrix and needs to be estimated using a known signal, the so-called pilot signal.

Over the years, many algorithms for finding the sparsest representation of the observation vector in an (overcomplete) dictionary have been proposed. Two popular approaches are the ℓ1-minimization technique and the greedy pursuit algorithm:

• ℓ1-minimization: The ℓ1-minimization technique solves the problem

  min ∥x∥_1  s.t.  ∥y − Hx∥_2 < ϵ,    (4)

  where ϵ > 0 (in the noiseless setting, ϵ = 0). In [3], it is shown that if the noise power is limited to ϵ and the number of observations is sufficiently large, the ℓ2-norm of the reconstruction error is within a constant multiple of ϵ (i.e., ∥x̂ − x∥_2 < c_0 ϵ). Basis pursuit de-noising (BPDN) [4], also called Lasso [5], relaxes the hard constraint on the reconstruction error by introducing a soft weight λ as

  x̂ = arg min_x ∥y − Hx∥_2 + λ∥x∥_1.    (5)

  Since the ℓ1-minimization problem is a convex optimization problem, efficient solvers based on linear programming (LP) exist (e.g., BP-interior [4]).

B. Shim and S. Kwon are with the Dept. of Electrical and Computer Engineering, Seoul National Univ., Seoul, Korea, 151-742 (email: bshim@snu.ac.kr), and B. Song is with the Dept. of Electronics Engineering, Seokyeong Univ., Seoul, Korea, 136-701 (email: bksong@skuniv.ac.kr). This research was supported by the research grant of Seokyeong University (2013) and the National Research Foundation of Korea grant funded by the Korea government (MEST), No. 2013056520.
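The conventional detector of (2) and (3) can be sketched numerically as follows. This is a minimal stand-in, not the authors' code: the real-valued channel, the dimensions, and the alphabet Ω = {−1, 0, +1} are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the setting of Section I (all sizes/values assumed):
m, n, K, sigma_v = 12, 24, 3, 0.05
omega = np.array([-1.0, 0.0, 1.0])   # finite alphabet, incl. 0 for the zero entries

H = rng.standard_normal((m, n))      # real-valued stand-in for the channel matrix
x = np.zeros(n)                      # K-sparse transmit vector with integer symbols
x[rng.choice(n, K, replace=False)] = rng.choice([-1.0, 1.0], K)
y = H @ x + sigma_v * rng.standard_normal(m)

# Eq. (2): regularized LS / LMMSE-style estimate; sigma_v^2 I makes H^T H invertible
x_tilde = np.linalg.solve(H.T @ H + sigma_v**2 * np.eye(n), H.T @ y)

# Eq. (3): the quantizer Q_Omega maps each entry to the closest alphabet value
x_hat = omega[np.argmin(np.abs(x_tilde[:, None] - omega[None, :]), axis=1)]
```

Because m < n, the regularized inverse in (2) spreads energy across all n entries, so the sliced output is frequently wrong; this is the weakness of conventional detection that motivates the sparse recovery view.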

[Fig. 1: candidate trees]
(a) OMP: ∅ → T^1 = {3} → T^2 = {3, 1} → T^3 = {3, 1, 5}
(b) MMP: ∅ → 1st iteration: T_1^1 = {3}, T_2^1 = {5} → 2nd iteration: T_1^2 = {3, 1}, T_2^2 = {3, 5}, T_3^2 = {5, 2} → 3rd iteration: T_1^3 = {3, 1, 5}, T_2^3 = {3, 1, 2}, T_3^3 = {3, 2, 5}, T_4^3 = {5, 2, 4}

Fig. 1. Comparison between the OMP and the MMP algorithm (L = 2 and K = 3). While OMP maintains a single candidate T^k in each iteration, MMP investigates multiple promising candidates T_j^k (the subscript j indexes the candidates in the k-th iteration).
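As a toy illustration of the candidate bookkeeping in Fig. 1, the sketch below (not the authors' code; the random data, the sizes, and the plain least-squares estimates without slicing are illustrative assumptions) expands each surviving candidate by its L most correlated columns and merges duplicate index sets:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, K, L = 12, 24, 3, 2            # dimensions and search width (illustrative)

H = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.choice([-1.0, 1.0], K)
y = H @ x + 0.01 * rng.standard_normal(m)

def residual(support):
    """Least-squares residual y - H_S x_S for a candidate support set."""
    Hs = H[:, sorted(support)]
    xs, *_ = np.linalg.lstsq(Hs, y, rcond=None)
    return y - Hs @ xs

candidates = [frozenset()]           # start from the empty path (root of the tree)
for k in range(K):
    children = set()
    for cand in candidates:
        r = y if not cand else residual(cand)
        scores = np.abs(H.T @ r)     # correlation of every column with the residual
        best = np.argsort(scores)[::-1]
        picked = [t for t in best if t not in cand][:L]
        for t in picked:             # each candidate brings forth L children
            children.add(cand | {t})  # duplicate index sets merge automatically
    candidates = list(children)
```

With L = 2 and K = 3 the tree can hold at most L^K = 8 leaves, and the merging of overlapping paths typically leaves fewer, which is why the actual number of candidates stays moderate.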

From our perspective, unfortunately, it is not easy to incorporate the integer constraint into such LP-based algorithms.

• Greedy Pursuit: Greedy pursuit attempts to find the support (the index set of nonzero entries) of the input vector^1 in an iterative fashion, obtaining a sequence of possible estimates (x̂^1, · · · , x̂^n). For example, orthogonal matching pursuit (OMP) picks one column of the channel matrix H at a time using a greedy strategy [6]–[8]. In the k-th iteration, the estimate x̂^k is generated by projecting the observation onto the subspace spanned by the submatrix constructed from the chosen columns.

^1 For example, if x = [1 0 0 0 2 0 4]^T, then the support S is S = {1, 5, 7}.

While the use of a sparse recovery algorithm is desirable in the sense that it exploits the sparsity of the signal to be recovered, by itself it does not exploit the property that the nonzero elements of x come from a set of finite alphabets. To exploit this information, one can use the sliced output Q_Ω(x̂^k) instead of x̂^k as the estimate in each iteration. However, simply using the slicer output Q_Ω(x̂^k) might not be effective, in particular for sequential greedy algorithms like OMP, due to error propagation. If an incorrect index is chosen in an iteration of OMP, the estimate will be incorrect; if this incorrect estimate is then sliced, additional quantization error is added on top of the estimation error, degrading the quality of all subsequent operations. For example, if x_k = 0 and x̂_k = 0.01, then Q_Ω(x̂_k) = 1 for Ω = {+1, −1} (and hence ∥x_k − x̂_k∥ = 0.01 while ∥x_k − Q_Ω(x̂_k)∥ = 1).

[Fig. 2: error magnitude (0 to 0.14) versus iteration number (5 to 30); the OMP path and other incorrect paths diverge from the true path.]
Fig. 2. Evolution of the error magnitude as a function of the iteration number of the proposed sMMP algorithm. For illustration purposes, we plot only a few paths. While the quantization error introduced by the integer slicer affects the incorrect paths (green colored paths in the figure), no such phenomenon happens to the true path. As a result, the performance difference between the true path and incorrect paths grows drastically as the iterations go on.

II. MULTIPATH MATCHING PURSUIT (MMP) WITH SLICING

A. MMP Algorithm

While many greedy recovery algorithms construct the K-sparse estimate through a sequential process, a recently proposed algorithm referred to as multipath matching pursuit (MMP) performs a parallel search to find multiple promising candidates and then chooses the best one, i.e., the one minimizing the residual magnitude in the final iteration [9]. As shown in Fig. 1, each candidate brings forth L child candidates in the MMP algorithm. In the k-th iteration, the L indices t_1, · · · , t_L whose columns are maximally correlated with the residual are chosen, and each of these indices, in conjunction with the previously selected indices, constructs a new candidate for the next iteration. Let T_j^{k−1} = {t_1, · · · , t_{k−1}} be the j-th candidate in the (k−1)-th iteration; then the set of L indices chosen for T_j^{k−1}, denoted T*, is expressed as

  T* = arg max_{|T|=L} ∥(H^T r_j^{k−1})_T∥_2^2

where the residual is defined as r_j^{k−1} = y − H_{k−1} x̂_j^{k−1} and

  x̂_j^{k−1} = H_{k−1}^† y = (H_{k−1}^T H_{k−1})^{-1} H_{k−1}^T y,
  H_{k−1} = [h_{t_1} h_{t_2} · · · h_{t_{k−1}}].    (6)

The corresponding child candidates become T_j^{k−1} ∪ {t_i} for i = 1, · · · , L. Although the number of candidates seems to increase by a factor of L in each iteration, many candidates overlap (see Fig. 1), so the actual number of child candidates is quite moderate. The main benefit of MMP, from the perspective of incorporating the integer slicer, is that slicing deteriorates the quality of incorrect candidates and at the same time improves the quality of the correct one. This is because incorrect candidates suffer additional quantization noise caused by the slicing, while no such phenomenon happens to the correct one. As a result, as shown in Fig. 2, the difference in estimation quality, measured by the estimation error ∥x − x̂∥_2, between the final output (which mostly corresponds to the true path) and the rest (the incorrect paths) increases as the iterations go on. The MMP algorithm with slicing, henceforth referred to as sMMP, is summarized in Table I.

TABLE I
THE sMMP ALGORITHM

Input: observation y, channel matrix H, modulation set Ω, sparsity K, number of paths L
Output: estimated signal x̂
Initialization: k := 0 (iteration index), r^0 := y (initial residual), T^0 := {∅}
while k < K do
    k := k + 1, u := 0, T^k := ∅
    for i = 1 to |T^{k−1}| do
        T* := arg max_{|T|=L} ∥(H^T r_i^{k−1})_T∥_2^2 (choose L best indices)
        for j = 1 to L do
            t^tmp := t_i^{k−1} ∪ {t_j} (construct a temporary path)
            if t^tmp ∉ T^k then (check whether the path already exists)
                u := u + 1 (candidate index update)
                t_u^k := t^tmp (path update)
                T^k := T^k ∪ {t_u^k} (update the set of paths)
                x̃_u^k := (H_{t_u^k}^T H_{t_u^k})^{-1} H_{t_u^k}^T y (perform estimation)
                x̂_u^k := Q_Ω(x̃_u^k) (perform slicing)
                r_u^k := y − H_{t_u^k} x̂_u^k (residual update)
            end if
        end for
    end for
end while
u* := arg min_u ∥r_u^K∥_2
return x̂ = x̂_{u*}^K

B. Multipath Matching Pursuit with Cross Validation

Thus far, we assumed that the sparsity of the input vector (K = ∥x∥_0 = #{x_i : x_i ≠ 0}) is known in advance. Indeed, many greedy pursuit algorithms implicitly assume that the sparsity of the input vector is known a priori. Since this assumption does not hold true in practice, the absence of sparsity information can lead to either early or late termination of the recovery algorithm. In the former case, the transmit signal will not be fully recovered (underfitting), while in the latter case part of the noise vector is treated as the transmit signal (overfitting). In both scenarios, the reconstruction quality will be affected considerably (in general, underfitting is more harmful).

Cross validation (CV) is a statistical technique to identify the model order (the sparsity level K in this work) and thus avoid overfitting and underfitting of the parameter [10]. In CV, the received vector y is divided into a training vector y^(t) and a validation vector y^(v), which are given respectively by

  y^(t) = H^(t) x + v^(t)    (7)
  y^(v) = H^(v) x + v^(v).    (8)

Using the training vector y^(t), a sequence of possible estimates (x̂^1, · · · , x̂^n) is generated. For each estimate x̂^k, the validation vector y^(v) is used to compute the estimation error ϵ_k = ∥y^(v) − H^(v) x̂^k∥_2. When the algorithm is finished, the iteration count corresponding to the minimum validation error becomes the sparsity estimate (K̂ = arg min_k ϵ_k) and x̂^K̂ is chosen as the final output.

It is worth mentioning that in many applications a residual-based stopping criterion is widely used to identify the sparsity level (or iteration number) of the greedy algorithm. Basically, this scheme terminates the algorithm when the residual power is smaller than a pre-specified threshold ϵ (i.e., ∥r^k∥_2 < ϵ). However, since the residual magnitude decreases monotonically and the rate of decay depends on the system parameters, it is not easy to identify the optimal stopping point. In contrast, the ℓ2-norm of the validation error ϵ_k usually attains its minimum when the iteration number equals the sparsity level, so one can easily estimate the sparsity using CV (see Fig. 3). Finally, we note that since multiple candidates are investigated in sMMP, the candidate with the minimum validation error among all candidates is chosen as the final output.

[Fig. 3: recovery error (10^-2 to 10^2) versus iteration number (1 to 11), showing the CV error, residual, and MSE curves.]
Fig. 3. Snapshot of the recovery error as a function of the iteration number (m = 12, n = 24, and K = 5).

III. SIMULATIONS AND DISCUSSIONS

This section describes the numerical experiments that illustrate the effectiveness of the proposed approach.

Other than the proposed sMMP algorithm, we test the original MMP algorithm, the OMP algorithm (with and without slicing^2), the CoSaMP algorithm [11], LMMSE estimation, and also Oracle LMMSE estimation. Note that the Oracle estimator knows the support information a priori and hence solves the overdetermined system y = H_T x_T + v, where H_T is the submatrix of H containing the columns indexed by T (x_T is defined in the same way). The performance of the Oracle estimator is popularly used as a lower bound for the recovery algorithms.

A. Simulation Setup

The simulation setup is based on a 12 × 24 channel matrix H whose entries are drawn independently from the complex Gaussian distribution CN(0, 1). Two sparsity levels (K = 3 and 5) are tested, so that 12.5% and about 20% of the elements in x are nonzero. The positions of the nonzero elements (i.e., the symbol positions) are randomly selected, and the symbols at the nonzero locations are chosen from 16-QAM modulation. We use the symbol error rate (SER) as the performance measure. For each point in the plot, we perform 24 trials for the CV operation and use the average value of these trials as the sparsity estimate K̂. Also, to measure the performance, we perform at least 5,000 trials for each point of each tested algorithm.

B. Simulation Results

Fig. 4 shows the SER performance as a function of the signal-to-noise ratio (SNR). Since the system is underdetermined, the LMMSE estimator, which exploits the whole channel matrix to estimate the sparse signal vector, does not perform well. In contrast, the performance of all tested sparse recovery algorithms improves with the SNR. Because multiple promising candidates are investigated, it is no surprise that the MMP algorithm exhibits the best performance among the sparse recovery algorithms under test. As mentioned, since the slicing is effective in improving the performance of MMP, the sMMP algorithm outperforms the original MMP by more than 1 dB. In Fig. 5, we plot the performance for K = 5 (i.e., about 20% of the signal vector is active). While the performance of the conventional sparse recovery algorithms is not so appealing in this less sparse scenario, the proposed sMMP algorithm remains effective and performs close to the Oracle estimator (around a 2 dB gap at 10^-2 SER).

[Fig. 4: SER versus SNR (24 to 38 dB) for Oracle MMSE, MMSE, OMP, OMP_slice, CoSaMP, MMP, and sMMP.]
Fig. 4. Performance of the proposed method for m = 12, n = 24, and K = 3.

[Fig. 5: SER versus SNR (24 to 38 dB) for the same set of algorithms.]
Fig. 5. Performance of the proposed method for m = 12, n = 24, and K = 5.
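For readers who want to reproduce the flavor of this experiment, the sketch below is a simplified stand-in rather than the authors' setup: it assumes a real-valued channel, the alphabet Ω = {−3, −1, +1, +3} in place of 16-QAM, a known sparsity K, and a single noise level, and it measures the SER of OMP with per-iteration slicing on the known support positions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, K = 12, 24, 3
omega = np.array([-3.0, -1.0, 1.0, 3.0])   # assumed real alphabet (paper uses 16-QAM)
sigma_v = 0.05

def slice_to(vals, alphabet):
    """Map each entry to the closest alphabet point (the quantizer Q_Omega)."""
    return alphabet[np.argmin(np.abs(vals[:, None] - alphabet[None, :]), axis=1)]

def omp_slice(y, H, K):
    """OMP with per-iteration slicing: greedy support search + integer quantization."""
    support, r = [], y.copy()
    for _ in range(K):
        scores = np.abs(H.T @ r)
        scores[support] = -np.inf              # do not pick a column twice
        support.append(int(np.argmax(scores)))
        Hs = H[:, support]
        xs, *_ = np.linalg.lstsq(Hs, y, rcond=None)
        xs = slice_to(xs, omega)               # slice the partial estimate
        r = y - Hs @ xs                        # residual from the sliced estimate
    x_hat = np.zeros(H.shape[1])
    x_hat[support] = xs
    return x_hat

errors = trials = 0
for _ in range(200):
    H = rng.standard_normal((m, n))
    x = np.zeros(n)
    pos = rng.choice(n, K, replace=False)
    x[pos] = rng.choice(omega, K)
    y = H @ x + sigma_v * rng.standard_normal(m)
    errors += np.count_nonzero(omp_slice(y, H, K)[pos] != x[pos])
    trials += K
ser = errors / trials
```

Swapping in the multipath expansion (several surviving supports per iteration instead of one) and averaging over an SNR sweep would bring the sketch closer to the curves in Figs. 4 and 5.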

REFERENCES

[1] T. Cui and C. Tellambura, "An efficient generalized sphere decoder for rank-deficient MIMO systems," IEEE Communications Letters, vol. 9, no. 5, pp. 423-425, May 2005.
[2] B. Shim and I. Kang, "Sphere decoding with a probabilistic tree pruning," IEEE Trans. Signal Process., vol. 56, pp. 4867-4878, Oct. 2008.
[3] E. Candes, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Comm. Pure Appl. Math., vol. 59, no. 8, pp. 1207-1223, Aug. 2006.
[4] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Scientif. Comput., vol. 20, no. 1, pp. 33-61, 1998.
[5] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society, Series B, vol. 58, no. 1, pp. 267-288, 1996.
[6] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. of Asilomar Conf., pp. 40-44, Nov. 1993.
[7] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inform. Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.
[8] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6202-6216, Dec. 2012.
[9] S. Kwon, J. Wang, and B. Shim, "Multipath matching pursuit," IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2986-3001, May 2014.
[10] R. Ward, "Compressed sensing with cross validation," IEEE Trans. Inf. Theory, vol. 55, no. 12, pp. 5773-5782, Dec. 2009.
[11] D. Needell and J. A. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Commun. ACM, vol. 53, no. 12, pp. 93-100, Dec. 2010.
^2 In the OMP algorithm with slicing, slicing is performed in each iteration.
