Peer Group Image Enhancement

C. Kenney, Y. Deng, B. S. Manjunath, and G. Hewer

Index Terms—Image enhancement, image smoothing, noise removal, nonlinear filtering.

Manuscript received March 31, 1999; revised July 6, 2000. This work was supported by the Office of Naval Research under ONR Grant N00014-96-1-0456. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Scott T. Acton.

C. Kenney, Y. Deng, and B. S. Manjunath are with the Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106 USA (e-mail: manj@ece.ucsb.edu). G. Hewer is with the Naval Air Warfare Center, China Lake, CA 93555 USA.

Publisher Item Identifier S 1057-7149(01)00100-2.
I. INTRODUCTION

Noise removal and image smoothing are useful pre-processing steps in many image processing applications. A general objective in these applications is to suppress noise while preserving the edge information. Typically, additive Gaussian or impulse noise models are assumed. Median filtering is a popular choice for impulse noise removal (see the tutorial paper [24] by Yin et al., which includes an extensive bibliography for this area). Median filtering forms an approximation u of an image g by passing a window of size d × d over g and taking the median intensity of the window at pixel location i as the value for u_i. The motivation for this type of filtering is that the median preserves edges (intensity discontinuities). Other related work can be found in [1], [10], [11]. Extensions to color and multidimensional signals include the vector median filter (VMF) [2], vector directional filtering (VDF) [22], and the directional distance filtering (DDF) [8]. The DDF is a combination of VMF and VDF. A common drawback of all of the above methods is that they are typically implemented uniformly across the image and tend to modify pixels that are not corrupted by noise. In [3] a Teager-like operator is used to first detect the outliers so that only the noisy pixels are replaced. The detection is performed in each individual color component, which may lead to errors in the overall color. For the case of mixed Gaussian and impulse noise, an adaptive nonlinear multivariate filtering method is proposed in [21]. It uses the mean value within a local neighborhood of pixels to estimate the original pixel value and hence may blur the edges and the details.
Other approaches to smoothing while preserving boundaries include variational methods and shock filtering. A standard variational approach [12, p. 24], [13], [14], [7] to segmenting and approximating an image g consists of finding an approximation u and a boundary set K that minimize an objective functional that has two components, one corresponding to the error between the approximation and the original, and the other related to the length of the boundary. Functionals of this type are often referred to as the Mumford–Shah functionals.
The weights associated with the two components play a significant role in the resulting enhancement quality, and an often encountered problem is the selection of these weights. It is not clear how to select good values for the weights associated with the smoothness and approximation terms.

In shock filtering [17]–[19], intensity values from the interior of regions move outward toward the region edges along gradient lines. The convexity of the intensity along the gradient direction determines the motion direction along the gradient, and this direction assignment means that when two regions meet at an edge, the image intensity will experience a jump. Thus the edges of the image correspond to stationary shock fronts for this type of image processing. Note that in shock filtering the maximum values of the image intensity and the minimum values move outward from the interiors of their regions to meet at the boundaries. This means that the contrast at the edges is maximized. This also means that shock filtering preserves the total variation of the original image. Shock filtering smoothes in the sense that each region assumes a constant value. However, shock filtering does not remove isolated noise such as salt-and-pepper noise, as discussed by Osher and Rudin in [17].

We propose a nonlinear algorithm for image smoothing and impulse noise removal that addresses the above mentioned drawbacks of the current methods. The proposed method is based on the idea that each pixel has a peer group of associated nearby pixels. The peer group is then used to modify the value of the pixel.

There are many ways to select the peer group for a given pixel. For example, see the earlier work by Yaroslavsky [23] presenting an abstract formulation of the group idea. In general, peer group members should share some common values. For a single image, the peer group may be nearby pixels with similar intensity values. For a sequence of images used in determining optical flow fields, the peer group can be nearby pixels (in time and space) with similar intensity values and similar velocity values. In another context, texture values may be assigned to each pixel and the peer group determined by nearness in texture space. In this paper we will use the following peer group definition.

Definition: For an image g, the peer group P(n, d) associated with a pixel i consists of the n pixels in a d × d window centered at i that are nearest in intensity to g(i). Peer group averaging (PGA) is the process of replacing g(i) with the average over its peer group P(n, d).

In the next section we discuss the stability and convergence of the PGA method. The interplay between the window size (the parameter d in the above definition), peer group size, and the characteristics of the image objects is discussed in Section III. This is followed by a procedure for automatically selecting the peer group size using the Fisher discriminant in Section IV. In Section V we extend the PGA to color images. Section VI presents a multilevel approach to PGA. We conclude in Section VII with discussions.
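To make the definition concrete, the following is a minimal sketch of one PGA step for a 2-D grayscale image. It is our own illustration, not the authors' implementation; the function name, the replicate edge padding, and tie-breaking by array order are assumptions not specified above.

import numpy as np

def pga_step(g, n, d):
    # One peer group averaging (PGA) pass: each pixel is replaced by the
    # average of the n pixels in its d x d window that are nearest in
    # intensity to the pixel itself (the pixel always belongs to its own peer group).
    r = d // 2
    gp = np.pad(g.astype(float), r, mode="edge")   # replicate the border
    out = np.empty(g.shape, dtype=float)
    for y in range(g.shape[0]):
        for x in range(g.shape[1]):
            win = gp[y:y + d, x:x + d].ravel()
            peers = win[np.argsort(np.abs(win - g[y, x]))[:n]]
            out[y, x] = peers.mean()
    return out

# Example use: five iterations with a 3 x 3 window and peer group size 6.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.where(np.arange(64) < 32, 50.0, 120.0)[None, :] * np.ones((64, 1))
    noisy = noisy + rng.normal(0.0, 3.0, noisy.shape)
    u = noisy
    for _ in range(5):
        u = pga_step(u, n=6, d=3)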
II. CONVERGENCE OF PGA

PGA is stable in the sense that the new pixel value at i must lie between the minimum and maximum intensity values of its peer group. To gain insight into convergence, first consider peer groups of size two with a window of size three applied to a 1-D signal; for a monotone signal the neighbor admitted to the peer group, and hence the side on which averaging occurs, is determined by the sign of the second difference g_{i-1} - 2g_i + g_{i+1}. This is the 1-D version of the idea that the zeros of the Laplacian serve as boundary points for images. While the peer group size two is too small to be of practical value, the primary motivation for the following analysis is that it establishes a connection between PGA and shock filtering (see Section III-A).

Assume that g is a monotonically increasing 1-D signal. The peer group for pixel i is determined by nearness of intensity values to g_i; in cases of ambiguity caused by equal intensity differences, we select the peer group to minimize the distance of the peer group members from pixel i, with preference to the right if necessary.

Lemma 1: Let g be monotonically increasing over [1, n] with a change in convexity at k: g_{i-1} - 2g_i + g_{i+1} > 0 for i ≤ k, and g_{i-1} - 2g_i + g_{i+1} < 0 for i > k. Then the PGA algorithm with peer groups of size 2 and windows of size 3 applied to g converges to a bimodal piecewise constant function u. The point where g changes convexity (x = k) is also the boundary point for the two constant regions of u: u_i = (g_1 + g_2)/2 for i ≤ k, and u_i = (g_{n-1} + g_n)/2 for i > k.

The proof is outlined in Appendix A. For general signals, if g is convex or concave in the interval I defined by k_0 ≤ i ≤ k_1, then the PGA algorithm with peer group size 2 and window size 3 converges to a constant value c in the interval I. Thus the PGA algorithm converges to a piecewise constant function with the regions of constancy determined by the convexity breakpoints (i.e., those points for which g_{i-1} - 2g_i + g_{i+1} changes sign) of the original function g. This is illustrated in Example 2, which compares PGA and median filtering for a Gaussian hump.

B. Convergence for Fixed PGA

In the course of testing numerical examples under the PGA algorithm we observed that the peer groups can change significantly during the first few iterations. This is then followed by a period during which there is little or no change in the peer group structure. The nonlinearity of the PGA algorithm is tied solely to the selection of the peer group members. For ease of analysis we consider a modified version of the PGA algorithm in which membership in the peer groups is fixed after a certain number of regular PGA steps. The following gives a convergence result for this "fixed" PGA approach and applies to both 1-D and 2-D signals (images).

Lemma 2: Let the sequence u_k be generated using the PGA algorithm, starting from u_0 = g, with fixed peer groups for each pixel after iteration k_0. Then the sequence u_k converges to a limiting image u.
Proof: After the peer groups are fixed, the PGA iteration can be written as x_{k+1} = A x_k, where the vector x_k is formed by stacking the columns of the image u_k and the matrix A does the peer group averaging. (Thus PGA is linear after the peer groups are fixed. This permits the following analysis.) The entries of each row of A sum to 1 and consist of either 0 or 1/n, where n is the peer group number. We say that a subset S of pixels is strongly connected or irreducible if 1) i ∈ S implies that the peer group for i is also in S, and 2) for any i and j in S there is a path in S from i to j and a path from j to i. (For such a set, the corresponding submatrix Ã of A is nonnegative and irreducible with positive main diagonal entries, so its dominant eigenvalue has multiplicity one; see [16, Ths. 6.1.4 and 6.1.5, p. 219].) Because the row sums are 1, we have λ_max(Ã) = 1 with associated eigenvector v = (1, ..., 1)^T. Writing x̃ for the restriction of x to S, we have x̃_{m+k} = Ã^m x̃_k, which is just the power method [20]. Since the dominant eigenspace has dimension one, the power method converges and x̃_k converges to x̃ = cv, where c is the projection of x̃_{k_0} onto v; i.e., c is the average of the entries of x̃_{k_0}.

Now repeat this for each irreducible subset of pixels. From this analysis we see that the limiting image u under the PGA algorithm is constant on each primary irreducible region of the image.

The remaining pixel intensities satisfy a recursion of the form x̂_{k+1} = Â x̂_k + b̂_k, where b̂_k represents the contribution to the averaging of the irreducible sets of pixels. Again determine the irreducible subsets of pixels with respect to Â. These subsets are the secondary irreducible subsets of the image. The PGA iteration on these subsets has the form x_{k+1} = A x_k + b_k, where A is irreducible with positive main diagonal entries equal to 1/n, where n is the peer group number. Since at least one row of A sums to less than 1 (this is true of any row with a nonzero entry in b_k), the spectral radius of A is less than 1 by the Perron–Frobenius theorem (see [16, Ths. 6.1.4 and 6.1.5, p. 219]). Let x = (I - A)^{-1} b. Then x_{k+1} - x = A(x_k - x). Since ρ(A) < 1, the difference x_k - x must converge to zero, i.e., x_k converges to x.

Thus the limiting image u on the secondary irreducible subsets of the image is determined by the constant values on the primary irreducible subsets. Repeating this process for tertiary and higher irreducible sets accounts for all the pixels and completes the proof.
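The linear-iteration viewpoint in the proof can be checked numerically. The sketch below is our own: it freezes the size-2 peer groups of a short increasing 1-D signal, builds the corresponding row-stochastic averaging matrix A, and applies x_{k+1} = A x_k repeatedly; the iterates settle to constant values on the irreducible pixel sets, in agreement with Lemma 1.

import numpy as np

def averaging_matrix(g, n=2, d=3):
    # Row-stochastic matrix for one fixed-peer-group PGA step on a 1-D signal:
    # row i has entries 1/n at the n window pixels nearest in intensity to g[i].
    N = len(g)
    r = d // 2
    A = np.zeros((N, N))
    for i in range(N):
        idx = np.arange(max(0, i - r), min(N, i + r + 1))
        peers = idx[np.argsort(np.abs(g[idx] - g[i]), kind="stable")[:n]]
        A[i, peers] = 1.0 / len(peers)
    return A

g = np.array([0.0, 1.0, 3.0, 6.0, 8.0, 9.0])   # increasing, convex then concave
A = averaging_matrix(g)
x = g.copy()
for _ in range(60):                             # x_{k+1} = A x_k
    x = A @ x
print(np.round(x, 4))    # -> [0.5 0.5 0.5 8.5 8.5 8.5], i.e. (g1+g2)/2 and (g5+g6)/2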
III. PROPERTIES OF PGA AND PARAMETER SELECTION

For PGA there are two parameters: the window size d × d and the peer group number n. We can make some general points about selecting the peer group size by considering a particular example. First suppose that in a window of size d, the central pixel is part of a group of N pixels of the same intensity. Let us refer to this group of pixels as O (for object) and define n(O) = N as the number of pixels in O. For example, O might be a line or corner or even a disconnected group of pixels in the window. If the rest of the pixels in the window have intensities that differ from the common intensity in O, then the result of peer group averaging depends critically on whether the peer group number n is greater than n(O) or not. If n ≤ n(O), then the peer group for the central pixel will consist of pixels in O and the average over the peer group is equal to the common intensity value over O. However, if n > n(O), then the peer group must include some pixels outside of O. Potentially the average might still equal the common value over O if high and low values cancel, but in the generic case the average over the peer group will differ from the common value over O. Thus a necessary condition for preserving the intensity of the central pixel is n ≤ n(O).

To illustrate, in a 3 × 3 window, if O is a straight line of width one passing through the central pixel, then n(O) = 3 and taking the peer group number n ≤ 3 preserves the intensity value for the central pixel under one step of PGA. On the other hand, taking n > 3 introduces some blurring to the central pixel. For this example, taking n ≤ 3 only guarantees invariance for the central pixel for one PGA step, since the other pixels on the object may suffer from a window occlusion effect and thus change their value. For example, if the line terminates at some point, then in a 3 × 3 window with the end of the line at the window's center, the line has only 2 pixels in the window. In this case n(O) has shifted to 2 and we would need n ≤ 2 to preserve the central pixel intensity.

If we consider the effect of peer group size relative to the window diameter, we see that for the smallest peer group size n = 1 the peer group is just the central pixel and thus the average over the peer group produces no change. The largest peer group size is n = d², in which case the entire window is the peer group. Averaging over the entire window blurs the image. Informally we can say then that as the peer group size increases so does the smoothing effect under PGA. This leads us to the rule of thumb: to obtain maximal smoothing while preserving an object of size n(O), use a peer group of size n = n(O). For more details, we refer to [5].
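A small numerical check of the preceding discussion (our own illustration): for a one-pixel-wide line through the center of a 3 × 3 window, the central pixel is preserved by one PGA step with n = 3 but not with n = 4.

import numpy as np

win = np.array([[0.0, 100.0, 0.0],
                [0.0, 100.0, 0.0],
                [0.0, 100.0, 0.0]])    # vertical line of width one: n(O) = 3
center = win[1, 1]
flat = win.ravel()
order = np.argsort(np.abs(flat - center))

for n in (3, 4):
    peer_avg = flat[order[:n]].mean()
    print(n, peer_avg)    # n = 3 -> 100.0 (preserved), n = 4 -> 75.0 (blurred)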
We now consider some examples illustrating the performance of PGA on 1-D and 2-D signals.

Example 1: This is a signal consisting of two steps of different heights and widths; see Fig. 1(a). Fig. 1(b) shows the original signal plus N(0, 3) noise. The result of using PGA on the noisy data with n = 9 and d = 17 is shown in Fig. 1(c). Since the peer group number is less than the object number (the intervals are of length 10 and 20, respectively), the steps are preserved. However, when we set the peer group number to n = 11, which is larger than the shorter of the step intervals, then as expected we lose the small step, as seen in Fig. 1(d). Here we used d = 2n - 1 = 21 for the window diameter.

Fig. 1(e) and (f) shows the results of applying adaptive PGA (described in Section IV) for windows of sizes d = 11 and d = 21, respectively, with the best peer group number selected at each pixel from the range n_lower = (d + 1)/2 to n_upper = d - 1. These choices of the lower range have the effect of eliminating objects of size less than n_lower, and hence when n_lower = 6 as in Fig. 1(e) the steps are preserved. However, when n_lower = 11 as in Fig. 1(f), the steps of size 10 are lost.

Fig. 1. PGA parameter choice effects for a noisy step function. The window size is d = 2n - 1 (see text for details), where n is the peer group size. (a) Original signal with two steps, (b) signal with additive Gaussian noise (zero mean and standard deviation three), (c) PGA result with n = 9, (d) PGA result for n = 11, and (e)–(f) the results with automatic parameter selection for window sizes d = 11 and d = 21, respectively.
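The 1-D experiment of Example 1 and Fig. 1 is easy to reproduce in outline. The sketch below is our own reconstruction: the text specifies the interval lengths (10 and 20), the N(0, 3) noise, and the parameters n = 9 and d = 17, while the baseline value, step heights, total signal length, and iteration count are illustrative guesses.

import numpy as np

def pga_step_1d(g, n, d):
    # One PGA pass on a 1-D signal with window diameter d (odd).
    r = d // 2
    out = np.empty_like(g, dtype=float)
    for i in range(len(g)):
        win = g[max(0, i - r):min(len(g), i + r + 1)]
        peers = win[np.argsort(np.abs(win - g[i]))[:n]]
        out[i] = peers.mean()
    return out

clean = np.zeros(80)
clean[20:30] = 20.0                     # short step, width 10
clean[45:65] = 40.0                     # tall step, width 20
noisy = clean + np.random.default_rng(1).normal(0.0, 3.0, clean.size)

u = noisy
for _ in range(10):
    u = pga_step_1d(u, n=9, d=17)       # n = 9 < 10: both steps survive
# Re-running with n = 11 and d = 2 * 11 - 1 = 21 removes the width-10 step.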
We also note that a similar example has been studied by Oman [15] using a variety of approximation methods including Sobolev H¹ reconstruction, total variation approximation, low-pass Fourier reconstruction, and wavelet methods (in which de-noising in the manner of Donoho and Johnstone [6] was used for Haar and Daubechies wavelets). The PGA results for n = 3 are superior to (or approximately the same as, in the case of the total variation method) the results reported by Oman. It can be further noted that the PGA results do not use any estimate of the variance of the noise in the data.

The next example compares median filtering with PGA for a Gaussian hump.

Example 2: Let g be the Gaussian function g(x) = e^{-x²/(2σ²)}. Fig. 2(a) shows g for σ = 10. Fig. 2(b) shows the PGA approximation u for n = 2 and d = 3. The break points in u occur at x = ±σ, which are also the zero-crossings of the Laplacian of g. In contrast, the median filter results do not depend on the variance of the Gaussian; instead the breakpoints are determined entirely by the window diameter, as seen in Fig. 2(c) for a window of size d = 11 and Fig. 2(d) for a window of size d = 51.

Example 3: Fig. 3(a) shows a lace image from the Brodatz texture album. This image gives a nice illustration of how to preserve lines while smoothing interior regions. The thin stems in the image are 2 pixels wide. Using n = wd to preserve lines of width w in a d × d window, we selected n = 6 for a 3 × 3 window (using 5 PGA iterations). The results are seen in Fig. 3(b). Note that the interiors of the leaf regions are smoothed without loss of boundary definition and the stems are preserved. However, where the stems neck down to lines that are just one pixel wide, PGA with n = 6 produces a break in the stem. This indicates the need for an adaptive scheme to select the peer group size at each pixel (see Section IV). For this image we restricted the peer group size to lie between 3 pixels (since this preserves lines of one pixel width in a 3 × 3 window) and 8 (for smoothing the interior leaf regions); as can be seen, the stems now have no breaks.

A. PGA and Shock Filtering

In its simplest form for signals, shock filtering uses the original signal g as initial data for a nonlinear convection equation: u_t = -sgn(u_xx) u_x with u(x, 0) = g(x). In this formulation we must be careful to form derivative approximations from the appropriate direction. Thus if intensity information is to move from right to left, then we want u_x to represent the right-hand derivative and we use a forward difference to approximate u_x. Similarly, we use a backward difference if we want intensity information to move from left to right.

Consider a simple Euler update scheme for the shock filter equation: let h be the time step and set u_i^new = u_i + h u_t. If u is monotone increasing at i and u_xx < 0 in the sense that u_{i+1} - 2u_i + u_{i-1} < 0, then the choice h = 1/2 leads to u_i^new = (u_i + u_{i+1})/2. This is the same result we would get with PGA for a peer group of size n = 2, because the convexity condition u_{i+1} - 2u_i + u_{i-1} < 0 is the same as |u_{i+1} - u_i| < |u_i - u_{i-1}| when u is monotone increasing. Similarly, if u_xx > 0, the choice h = 1/2 in the shock filter Euler update leads to the same result as the PGA update: u_i^new = (u_{i-1} + u_i)/2.

This intersection of shock filtering and PGA for particular parameter choices means that results for one method apply immediately to the other. For example, PGA with n = 2 for signals is total variation preserving because the same is true for shock filtering. However, the two methods are not the same for other choices of parameters. In particular, PGA with larger peer group sizes automatically incorporates smoothing over the peer group and is able to handle problems such as the isolated intensity spikes of salt-and-pepper noise.
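The equivalence just described can be checked on a single sample (our own illustration): for a monotone increasing, locally concave triple, an Euler step of u_t = -sgn(u_xx) u_x with h = 1/2 and an upwind forward difference coincides with the size-2 peer group average.

import numpy as np

u = np.array([1.0, 2.0, 4.0, 5.5])      # monotone increasing; concave at i = 2
i = 2
uxx = u[i + 1] - 2.0 * u[i] + u[i - 1]  # discrete Laplacian, here < 0
ux = u[i + 1] - u[i]                     # forward (upwind) difference
shock = u[i] + 0.5 * (-np.sign(uxx)) * ux

# PGA with n = 2, d = 3: u[i+1] is the neighbor nearer in intensity,
# so the peer group is {u[i], u[i+1]}.
pga = 0.5 * (u[i] + u[i + 1])

assert np.isclose(shock, pga)
print(shock, pga)                        # both 4.75 = (u[i] + u[i+1]) / 2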
Fig. 2. Comparing PGA with median filtering. (a) Gaussian hump with σ = 10, (b) PGA result with n = 2 and d = 3. The break points occur at x = ±σ. In contrast, the break points for median filtering are determined entirely by the window size d, as shown in (c) for d = 11, and (d) for d = 51.

IV. AUTOMATIC PARAMETER SELECTION

Although the preceding observations make it possible to predict in a general way how the peer group size affects the smoothing under PGA, it is still the case that in most images we want to vary the peer group size from point to point in order to enhance some features and smooth others. For example, if we use a 3 × 3 window, then a peer group of size 6 preserves straight edges but not corners. If we lower the peer group number to size 4, then corners are also preserved, but we do not achieve the smoothing that we see with n = 6.
PGA with n = 3 preserves the corner but the smoothing is not good, as shown in Fig. 4(c). To remedy this we use PGA with the Fisher discriminant maximized over the peer group sizes 4 ≤ n ≤ 8, since n = 4 preserves corners. The result is seen in Fig. 4(d). The corner is now preserved and the noise is well damped. Fig. 5 shows the values of the Fisher discriminant objective function for the corner pixel with the peer group number varying between 2 and 8. The clear maximum at n = 4 indicates the presence of a corner.
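A hedged sketch of the adaptive selection described here (our own reading, not the authors' code): for each pixel, sort the window's intensity differences, evaluate the Fisher discriminant objective F(n) of Appendix B for each candidate peer group size in an allowed range (e.g., 4 ≤ n ≤ 8 for a 3 × 3 window), and average over the peer group whose n maximizes F. The interpretation of the r_i as sorted differences and the small epsilon in the denominator are our assumptions.

import numpy as np

def fisher(r, n):
    # F(n) = |A1(n) - A2(n)| / (V1(n) + V2(n)) for the split r[:n] vs r[n:].
    a, b = r[:n], r[n:]
    return abs(a.mean() - b.mean()) / (a.var() + b.var() + 1e-12)

def adaptive_pga_step(g, d=3, n_lo=4, n_hi=8):
    rad = d // 2
    gp = np.pad(g.astype(float), rad, mode="edge")
    out = np.empty(g.shape, dtype=float)
    for y in range(g.shape[0]):
        for x in range(g.shape[1]):
            win = gp[y:y + d, x:x + d].ravel()
            diff = np.abs(win - g[y, x])
            order = np.argsort(diff)
            r = diff[order]                                   # sorted differences
            n_best = max(range(n_lo, n_hi + 1), key=lambda n: fisher(r, n))
            out[y, x] = win[order[:n_best]].mean()            # peer group average
    return out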
Fig. 4. Noisy step example. (a) Noisy step, (b) PGA result with n = 6 blurs the corner, (c) PGA with n = 3 preserves the corner but the smoothing is not good, and (d) the adaptive PGA method preserves the corner as well as smooths the noisy step.
TABLE II
SNR (dB) FOR “PEPPER” IMAGE
Fig. 6. (a) Small area of the original “baboon” image, (b) same area of the corrupted image, (c) result of the vector median filtering, (d) result of Teager-operator
method, and (e) result of peer group processing.
Fig. 7. (a) Part of the original “baboon” image, (b) result of PGA, and (c) result of Gaussian filtering.
A PGA iteration at level k uses windows whose pixels are spaced k apart; this would reduce the computational cost by a factor of 1/k² at the kth level while producing nearly the same smoothing effect over large distances when iterating over several levels.

Alternating the PGA iteration between levels results in speeding the passage of intensity information within regions. As a side note we observe that there is a simple way to implement the PGA iteration at level k without modifying the original PGA program. For example, to do a PGA iteration with a distance of 2 between pixels in each window, one simply has to subsample the image, skipping every other pixel, and then run regular PGA on the subsampled image. Subsampling this way transforms a large image into four smaller images, denoted by g11, g12, g21 and g22 depending on whether the first pixel in the subsample is (1, 1), (1, 2), (2, 1) or (2, 2). After doing one PGA iteration on each of the smaller images, they are then recombined into a larger image. Similarly, for level k we can break g into k² smaller images, run PGA on each of the smaller images, and then recombine these smaller images into one larger image.
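A sketch of the subsampling trick just described (our own code; the function names and the inlined 3 × 3 PGA pass are assumptions): level-k PGA is run by splitting the image into the k² subsampled images g[i::k, j::k], applying an ordinary PGA pass to each, and writing the results back. A driver for the cycle schedule described in the next paragraph is included as well.

import numpy as np

def pga_step(g, n=6, d=3):
    # Ordinary PGA pass, as in the earlier sketch.
    r = d // 2
    gp = np.pad(g.astype(float), r, mode="edge")
    out = np.empty(g.shape, dtype=float)
    for y in range(g.shape[0]):
        for x in range(g.shape[1]):
            win = gp[y:y + d, x:x + d].ravel()
            out[y, x] = win[np.argsort(np.abs(win - g[y, x]))[:n]].mean()
    return out

def pga_level(g, k, n=6, d=3):
    # Level-k PGA: run PGA on each of the k*k subsampled images and recombine.
    out = g.astype(float).copy()
    for i in range(k):
        for j in range(k):
            out[i::k, j::k] = pga_step(g[i::k, j::k], n=n, d=d)
    return out

def pga_cycle(g, m=4, n=6, d=3):
    # One PGA cycle through level 2**m: one iteration each at levels 1, 2, 4, ..., 2**m.
    u = g
    for k in [2 ** p for p in range(0, m + 1)]:
        u = pga_level(u, k, n=n, d=d)
    return u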
There are a variety of ways to implement a multilevel PGA scheme. In our experiments we used the following procedure: do one PGA iteration at level 1, then one iteration each at levels 2, 4, ..., 2^m. This is referred to as a PGA cycle through level 2^m. Repeat starting at level 1. The selection of the final level 2^m is determined by the amount of overall smoothing that we want. Extended objects in the image are preserved and smaller objects are eliminated. The extent to which this occurs can be partially analyzed by noting that an object O of radius r will necessarily change in intensity (in the generic case) when doing PGA at level k > r, because the peer group for any pixel in O will include pixels outside O.

In practice we found that this multilevel PGA method converged rather quickly and that little additional change occurred after three to five cycles for m = 4. More details and applications can be found in [5] and [9].

Example 7: Fig. 8(a) shows a satellite image of an agricultural area. In Fig. 8(b) we see the result of regular PGA with a 3 × 3 window and peer group n = 6. Fig. 8(c) shows the result of using multilevel PGA with 3 × 3 windows and peer group size n = 6 with m = 4. Note the enhanced smoothing within regions and the elimination of the small sensor artifacts, which show up as a series of white blobs running down the center of the original image. Since these blobs are rather small (r < 2^m), they are attenuated by the averaging process under subsampling. This suggests that multilevel PGA could be used in an object detection top-hat procedure.

VII. CONCLUSIONS

A peer group image processing method has been presented that has several natural advantages over competing methods. Automatic local parameter selection allows the adaptive form of PGA to preserve edges and corners while obtaining significant smoothing over uniform regions in the image. The computational effort of adaptive PGA is many times less than that of PDE-based procedures (such as variational methods) that achieve similar results. PGA also has the advantage of ease of implementation and extension to multilevel and vector applications. The simplicity of the underlying idea allows analytic results to be obtained concerning the convergence of the PGA iteration, relations to other methods such as shock filtering and median filtering, and how the choice of parameters affects the resulting approximation. The parameters of the PGA method are directly related to the characteristics of the image features that are to be enhanced. This is in contrast to most image enhancement methods, in which it is unclear how to select the method parameters or weights to achieve a desired result.
Fig. 8. Comparing (a) the original image, (b) after PGA, and (c) after multilevel PGA.

APPENDIX A
PROOF OF LEMMA 1

For i ≤ k, the convexity assumption can be written as g_{i+1} - g_i > g_i - g_{i-1}. Since g is monotonically increasing, both sides of this inequality are nonnegative. Consequently, for i ≤ k, the value g_i is closer to g_{i-1} than to g_{i+1}. This means that for 2 ≤ i ≤ k, the peer group for pixel i consists of pixels i - 1 and i; i.e., we are averaging to the left. For i = 1, the peer group consists of pixels 1 and 2. It is easy to show that both the monotonicity and the convexity of the function are preserved and the peer groups remain fixed. In particular, the peer groups for pixels 1 and 2 are the same. Thus, the intensity values for these pixels converge in one step to their average and then remain constant. Since the pixels 2 < i ≤ k are left averaging, they converge to the common value of pixels 1 and 2, i.e., to (g_1 + g_2)/2. Similar arguments show that the function values of pixels k < i ≤ n converge to the common value (g_{n-1} + g_n)/2, which completes the proof.
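The statement of Lemma 1 is easy to verify numerically; the following quick check (ours, reusing the size-2, window-3 PGA step) shows an increasing signal with one convexity change converging to the two constants (g_1 + g_2)/2 and (g_{n-1} + g_n)/2.

import numpy as np

def pga2(g):
    # Size-2 peer group, window-3 PGA step for a 1-D signal.
    out = g.astype(float).copy()
    for i in range(len(g)):
        win = g[max(0, i - 1):min(len(g), i + 2)]
        peers = win[np.argsort(np.abs(win - g[i]))[:2]]
        out[i] = peers.mean()
    return out

g = np.array([0.0, 1.0, 4.0, 9.0, 12.0, 13.0])   # increasing; convexity changes at k = 3
u = g
for _ in range(60):
    u = pga2(u)
print(np.round(u, 4))   # -> [0.5 0.5 0.5 12.5 12.5 12.5] = (g1+g2)/2 and (g5+g6)/2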
APPENDIX B
OPERATION COUNTS FOR PGA AND MULTILEVEL PGA

In order to find the PGA replacement value for pixel i from a window of size d (note: we use d to denote both 1-D and 2-D data; for 2-D, size d implies a window of d × d pixels centered at i) and a peer group of size n, we first must form the differences d_ij = |g(i) - g(j)|. Here, i and j index the pixel locations. We assume that the intensity may be a vector of dimension I_D, so forming the n_d differences in the window (n_d denotes the number of pixels in the window) costs C_diff = I_D n_d operations, and sorting them into increasing order r_1 ≤ r_2 ≤ ... ≤ r_{n_d} costs C_sort = O(n_d log n_d). For adaptive PGA we must also evaluate, for each candidate peer group size n, the Fisher discriminant

F(n) = |A_1(n) - A_2(n)| / (V_1(n) + V_2(n))

where A_1(n) and V_1(n) are respectively the average and variance over r_1, ..., r_n, and A_2(n) and V_2(n) are respectively the average and variance over r_{n+1}, ..., r_{n_d}. However, we can exploit update formulas for the averages and variances as n changes to n + 1 to avoid computational redundancy. In this case we find that the total cost of computing F(1), F(2), ..., F(n_d) is

C_Fisher = 11 n_d.

Selecting the peer group number with the maximal Fisher value requires another n_d comparisons. The average for this group has already been calculated, so we find that the total cost per pixel of one adaptive PGA iteration is

C_total = C_diff + C_sort + C_Fisher + n_d = I_D n_d + O(n_d log n_d) + 12 n_d.
group for pixels 1 and 2 are the same. Thus, the intensity values for
these pixels converge in one step to their average and then remain con- ACKNOWLEDGMENT
stant. Since the pixels 2 < i k are left averaging, they converge to C. Kenney would like to thank Prof. S. Osher for pointing out
the common value of pixels 1 and 2; i.e. to (g1 + g2 )=2. Similar argu- the connection between PGA and shock filtering. The authors also
ments show that the function values of pixels k < i n converge to thank the reviewers for their constructive criticisms and Prof. S.
the common value (gn01 + gn )=2 which completes the proof. Chandrasekaran for many fruitful discussions.
REFERENCES

[1] E. Abreu, M. Lightstone, S. K. Mitra, and K. Arakawa, "A new efficient approach for the removal of impulse noise from highly corrupted images," IEEE Trans. Image Processing, vol. 5, pp. 1012–1025, 1996.
[2] J. Astola, P. Haavisto, and Y. Neuvo, "Vector median filters," Proc. IEEE, vol. 78, pp. 678–689, Apr. 1990.
[3] F. A. Cheikh, R. Hamila, M. Gabbouj, and J. Astola, "Impulse noise removal in highly corrupted color images," in Proc. Int. Conf. Image Processing, vol. 1, 1996, pp. 997–1000.
[4] Y. Deng, C. Kenney, M. S. Moore, and B. S. Manjunath, "Peer group filtering and perceptual color image quantization," in Proc. IEEE Int. Symp. Circuits and Systems, vol. 4, 1999, pp. 21–24.
[5] Y. Deng, G. Hewer, C. Kenney, and B. Manjunath, "Peer group image processing," Univ. Calif., Santa Barbara, ECE Tech. Rep., Nov. 1999.
[6] D. L. Donoho and I. M. Johnstone, "Ideal de-noising in a basis chosen from a library of orthogonal bases," Comptes Rendus Acad. Sci. Paris A, vol. 319, pp. 1317–1322, 1994.
[7] G. Hewer, C. Kenney, and B. Manjunath, "Variational image segmentation using boundary functions," IEEE Trans. Image Processing, vol. 7, pp. 1269–1282, Sept. 1998.
[8] D. G. Krakos and P. E. Trahanias, "Combining vector median and vector directional filters: The directional distance filters," in Proc. ICIP, vol. 1, 1995, pp. 171–174.
[9] C. Kenney and G. Hewer, "Partial differential equations for multiscale image and video processing," in Applications of Mathematical Signal Processing Techniques to Mission Systems, RTO Lecture Series 216, Nov. 1999, pp. 6-1–6-13.
[10] H. G. Longbotham and D. Eberly, "Statistical properties, fixed points, and decomposition with WMMR filters," J. Math. Imag. Vis., vol. 2, pp. 99–116, 1992.
[11] H. G. Longbotham and D. Eberly, "WMMR filters: A class of robust edge enhancers," IEEE Trans. Signal Processing, vol. 41, pp. 1680–1684, 1992.
[12] J. Morel and S. Solimini, Variational Methods in Image Segmentation. Boston, MA: Birkhauser, 1995.
[13] D. Mumford and J. Shah, "Boundary detection by minimizing functionals," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Francisco, CA, 1985, pp. 22–26.
[14] D. Mumford and J. Shah, "Optimal approximation by piecewise smooth functions and associated variational problems," Commun. Pure Appl. Math., vol. XLII, no. 4, 1989.
[15] M. Oman, "Study of variational methods applied to noisy step data," 1996.
[16] J. M. Ortega, Matrix Theory: A Second Course. New York: Plenum, 1987.
[17] S. Osher and L. Rudin, "Feature-oriented image enhancement using shock filters," SIAM J. Numer. Anal., vol. 27, pp. 919–940, 1990.
[18] S. Osher and L. Rudin, "Shocks and other nonlinear filtering applied to image processing," in Proc. SPIE Appl. Dig. Image Proc. XIV, vol. 1567, 1991, pp. 414–430.
[19] L. Rudin, "Images, numerical analysis of singularities, and shock filters," Ph.D. dissertation, Computer Science Dept., Caltech, Pasadena, CA, 1987.
[20] G. W. Stewart, Introduction to Matrix Computations. New York: Academic, 1973.
[21] K. Tang, J. Astola, and Y. Neuvo, "Adaptive nonlinear multivariate image filtering for mixed noise removal," in Proc. ISCAS, vol. 1, 1993, pp. 427–430.
[22] P. E. Trahanias and A. N. Venetsanopoulos, "Vector directional filters—A new class of multichannel image processing filters," IEEE Trans. Image Processing, vol. 2, no. 4, pp. 528–534, 1993.
[23] L. P. Yaroslavsky, "Linear and rank adaptive filters for picture processing," in Digital Image Processing and Computer Graphics: Theory and Applications, L. Dimitrov and E. Wenger, Eds. Wien, München: R. Oldenbourg, 1991, p. 374.
[24] L. Yin, R. Yang, M. Gabbouj, and Y. Neuvo, "Weighted median filters: A tutorial," IEEE Trans. Circuits Syst. II, vol. 43, pp. 157–192, 1996.