
Blind motion deblurring using multiple images

Jian-Feng Cai (a), Hui Ji (b), Chaoqiang Liu (a), Zuowei Shen (b)

(a) Center for Wavelets, Approx. and Info. Proc., National University of Singapore, Singapore, 117542
(b) Department of Mathematics, National University of Singapore, Singapore, 117542
Abstract
Recovery of images degraded by motion blurring is a challenging problem in digital imaging. Most existing blind deblurring techniques are not capable of removing complex motion blurring from blurred images of complex structures. One promising approach is to recover the clear image using multiple images captured of the scene. However, it is observed in practice that such a multi-frame approach can recover a high-quality clear image of the scene only after the multiple blurred image frames are accurately aligned during pre-processing, which is a very challenging task even with user interaction. In this paper, by exploiting the sparsity of the motion blur kernel and of the clear image in certain domains, we propose an alternating iteration approach that simultaneously identifies the blur kernels of the given blurred images and restores a clear image. Our proposed approach is robust not only to image formation noise but also to alignment errors among the multiple images. A modified version of the linearized Bregman iteration is then developed to efficiently solve the resulting minimization problem. The experiments show that our proposed algorithm can accurately estimate the blur kernels of complex camera motions with minimal requirements on the accuracy of image alignment. As a result, our method is capable of automatically recovering a high-quality clear image from multiple blurred images.
Key words: blind deconvolution, tight frame, motion blur, image restoration
1. Introduction
Motion blurring caused by camera shake has been one of the prime causes of poor image quality in digital imaging, especially when using telephoto lenses or slow shutter speeds. In many imaging applications, there is simply not enough light to produce a clear image with a short shutter speed. As a result, the image appears blurry due to the relative motion between the camera and the

Corresponding author. Tel.: +65-65168845, Fax: +65-67795452


Email addresses: tslcaij@nus.edu.sg (Jian-Feng Cai), matjh@nus.edu.sg (Hui Ji),
tslcql@nus.edu.sg (Chaoqiang Liu), matzuows@nus.edu.sg (Zuowei Shen)
Preprint submitted to Elsevier March 4, 2010
scene. Motion blurring can significantly degrade the visual quality of images. Thus, how to restore motion-blurred images has long been a fundamental problem in digital imaging. Motion blurring due to camera shake is usually modeled as a spatially invariant convolution process:

    f = g * p + n,        (1)

where * is the convolution operator, g is the clear image to recover, f is the observed blurred image, p is the blur kernel (or so-called point spread function), and n is the noise. How to recover the clear image g from the blurred image f is the so-called image deconvolution problem.
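The convolution model (1) is easy to simulate. The following sketch (illustrative only; the kernel, image, and noise level are made-up examples, and circular convolution via the FFT stands in for the blurring process) synthesizes a blurred observation f = g * p + n:

```python
import numpy as np

def blur(g, p, noise_sigma=0.0, rng=None):
    """Simulate f = g * p + n with periodic boundary conditions.

    g : 2D clear image; p : blur kernel zero-padded to the shape of g;
    noise_sigma : standard deviation of the additive Gaussian noise n.
    """
    # Circular convolution is diagonalized by the 2D FFT.
    f = np.real(np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(p)))
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        f = f + noise_sigma * rng.standard_normal(g.shape)
    return f

# Example: a 9-tap horizontal motion kernel, embedded in a 64x64 array.
n = 64
g = np.zeros((n, n)); g[20:44, 20:44] = 1.0   # clear image: a bright square
p = np.zeros((n, n)); p[0, :9] = 1.0 / 9.0    # normalized blur kernel
f = blur(g, p, noise_sigma=0.01)
```

Because the kernel is normalized to sum to one, the (noise-free) blurring preserves the total image intensity.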
There are two cases in image deconvolution problems: non-blind deconvolution and blind deconvolution. In the non-blind case, the blur kernel p is assumed to be known or estimated elsewhere, and the task is to recover the clear image g by reversing the effect of convolution on the blurred image f. Such a deconvolution is known to be an ill-conditioned problem, as a small perturbation of f may cause the direct solution of (1) to be heavily distorted. In the past, there have been extensive studies of robust non-blind deconvolution algorithms (e.g. [1, 22, 8, 7, 18]). In the case of blind deconvolution, both the blur kernel p and the clear image g are unknown. Then the problem becomes under-constrained and there exist infinitely many solutions. In general, blind deconvolution is much more challenging than non-blind deconvolution. Motion deblurring is a typical blind deblurring problem, because the motion between the camera and the scene always varies between different images.
Some prior assumptions on both the kernel p and the image g have to be made in order to eliminate the ambiguities between the kernel and the image. In practice, the motion-blur kernel is very different from the kernels of other types of blurring (e.g. out-of-focus blurring, Gaussian-type optical blurring), as there exists no simple parametric form to represent motion-blur kernels. In general, the motion-blur kernel can be expressed as

    p = v(x, y)|_C,        (2)

where C is a continuous curve of finite length in R² which denotes the camera trajectory, and v(x, y) is the speed function, which varies along C. Briefly, the motion-blur kernel p is a smooth function supported on a continuous curve.
1.1. Previous work
Early works on motion deblurring usually use only a single blurred image. Most such methods (e.g. [10, 29, 21, 17]) require prior parametric knowledge of the blur kernel p, so that the blur kernel can be obtained by estimating only a few parameters. These methods usually are computationally efficient but only work on simple blurrings, such as symmetric optical blurring or simple motion blurring of constant velocity. To remove more complicated blurring from images, an alternative approach is to use a joint minimization model to simultaneously estimate both the blur kernel and the clear image. To overcome the inherent ambiguities between the blur kernel and the clear image, certain regularization terms have to be added to the minimization, e.g., the total variation (TV) regularization proposed in [9, 8, 16, 20]. These TV-based blind deconvolution techniques showed good performance in removing certain blurrings from specific types of images, such as out-of-focus blurring from medical and satellite images. However, TV regularization is not a good choice for the case of motion blurring, because TV regularization penalizes, e.g., the total length of the edges for piecewise constant functions (see [8]). As a result, the support of the resulting blur kernel tends to be a disk or several isolated disks, instead of a continuous curvy camera trajectory. Also, for many images of natural scenes, TV-based regularization does not preserve details and textures very well in regions of complex structure, due to staircasing effects (see [13, 23]).
In recent years, the concept of epsilon photography ([27]) has become popular in digital imaging: the digital camera records the scene via multiple images, captured with epsilon variations of the camera settings. These multiple images give a more complete description of the scene, which leads to an easier configuration for many traditionally challenging image processing tasks, including blind motion deblurring. Most such approaches (e.g. [3, 26, 28, 19]) actively control the capturing process using specific hardware to obtain multiple images of the scene, such that the blur kernel is easier to infer and the deblurring is also more robust to noise. Impressive results have been demonstrated by these approaches. However, the requirement of active acquisition of the input images limits the wider application of these techniques in practice.
1.2. Our work
The goal of this paper is to develop a robust numerical algorithm to recover a high-quality clear image of the scene from multiple motion-blurred images. In our setting, the input images are passively captured by a commodity digital camera without any specific hardware. It is noted that the proposed algorithm could also be applied, with little modification, to removing motion blurring from video. In other words, the input to our algorithm is M available blurred images with the following relationships:

    f_i = g(h_i(·)) * p_i + n_i,    i = 1, 2, …, M,

where h_i is the spatial geometric transform from the clear image g to the blurred image f_i, determined by the camera pose when the i-th image is taken, p_i is the blur kernel of the i-th image, and n_i is the noise.
In this paper, we assume the geometric transforms h_i among all images are estimated by some existing image alignment technique during pre-processing. However, we emphasize that no available image alignment technique is capable of accurately aligning blurred images. Therefore, we take the following f_i as the input in our mathematical formulation:

    f_i = g((I + δh_i)(·)) * p_i + n_i,    i = 1, 2, …, M,        (3)

where I is the identity operator, p_i and g are unknowns, and δh_i and n_i are the alignment errors and image formation noise, respectively. In summary, there are two types of undesired perturbations when taking a multi-image approach to remove motion blurring: one is the image formation noise n_i and the other is the image alignment error δh_i. The goal of this paper is to develop a numerical algorithm robust to both types of perturbations.
In this paper, we begin our study of blind motion deblurring by investigating how to measure the clearness of the recovered image and the soundness of the blur kernel. Our study shows that, given multiple images (≥ 2), the sparsity of the image under tight frame systems ([12, 30]) is a good measurement of the clearness of the recovered image, and the sparsity of the blur kernels in a weighted image space is a good measurement of the soundness of the blur kernels when combined with a smoothness regularization. In particular, it is shown empirically that impressive robustness to image alignment error is achieved by using these two sparsity regularizations for restoring the clear image and for estimating the blur kernels. In our proposed approach, the sparsity of an image under a tight frame system is the ℓ1-norm of its tight frame coefficients, and the sparsity of a blur kernel in a weighted image space is measured by its weighted ℓ1-norm.
The rest of the paper is organized as follows. In Section 2, we formalize the minimization strategy and explain the underlying motivation. In Section 3, we present the detailed numerical algorithm for solving the proposed minimization problem. Section 4 is devoted to the experimental evaluation and a discussion of future work.
2. Problem formulation and analysis
When taking an image with a digital camera, the image does not represent the scene at a single instant of time; instead it represents the scene over a period of time. As the camera does not keep still due to unavoidable camera shake, the image of the scene represents an integration over all camera viewpoints during the exposure time, which is determined by the shutter speed. The resulting image looks blurry along the relative motion path between the camera and the scene. As the relative motion between the scene and the camera is generally global, we usually assume that it is spatially invariant. Thus the motion blurring is simplified as a convolution process:

    f = g * p + n,        (4)

where p is the so-called blur kernel function, which represents the relative motion after normalization, f is the observed blurred image, and g is the desired clear image.
2.1. Benefits of using multiple images

The benefits of using multiple images to remove motion blurring are two-fold. First, it is known that non-blind deblurring is an ill-conditioned problem, as it is sensitive to noise. The noise sensitivity comes from the fact that reversing the image blurring process greatly amplifies the high-frequency noise contribution to the recovered image. Let ^ denote the Fourier transform; then (1) becomes

    p̂(ω) ĝ(ω) = f̂(ω) − n̂(ω).

The values of |p̂| are zero or very small at large ω, because the blur kernel p usually is a low-pass filter. Thus, ĝ(ω) is very sensitive to even small perturbations at large ω. Extensive studies have been carried out in the past to reduce such noise sensitivity by imposing extra regularizations.
However, suppose we have multiple perfectly aligned images f_i of the same scene with different motion-blur kernels:

    f_i = p_i * g + n_i,    i = 1, 2, …, M.        (5)

Because motion-blur kernels are not isotropic, the intersection of all the sets of ω with small |p̂_i(ω)| is very likely to be much smaller than the set of ω with small |p̂_i(ω)| for each individual p_i. Let us examine a simple example of two blurred images: one is blurred by a horizontal filter p₁ = (1/N)(1, 1, …, 1); the other is blurred by a vertical filter p₂ = (1/N)(1, 1, …, 1)ᵀ. Then we have

    p̂₁(ω_x, ω_y) = sinc(N ω_x / 2);    p̂₂(ω_x, ω_y) = sinc(N ω_y / 2).

p̂₁ has periodic lines of zero points (ω_x = 2kπ/N, ω_y), k ∈ Z \ {0}, and so does p̂₂. However, the combination of p̂₁ and p̂₂ vanishes only at the periodic points (ω_x = 2kπ/N, ω_y = 2k′π/N), k, k′ ∈ Z \ {0}, which can greatly improve the conditioning of the deconvolution process. In practice, we may not have such an ideal configuration. But it is very likely that different blurred images have different blur kernels, as the camera motion is random during capture. Heuristically, it is reasonable to assume that the combined spectrum of all blur kernels,

    ( Σ_{i=1}^{M} |p̂_i(ω)|² )^{1/2},

has much fewer small values than the spectrum of any individual blur kernel. Therefore, using multiple images can greatly improve the noise sensitivity of the deconvolution process.
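This conditioning argument can be checked numerically. In the sketch below (sizes are illustrative; a box length of 8 on a 64-point grid is chosen so that the spectra have exact zeros), the combined spectrum of a horizontal and a vertical box kernel has far fewer near-zero frequencies than either kernel alone:

```python
import numpy as np

n, N = 64, 8                                  # grid size and box-kernel length
p1 = np.zeros((n, n)); p1[0, :N] = 1.0 / N    # horizontal box kernel
p2 = np.zeros((n, n)); p2[:N, 0] = 1.0 / N    # vertical box kernel

P1 = np.abs(np.fft.fft2(p1))                  # vanishes along lines in omega_x
P2 = np.abs(np.fft.fft2(p2))                  # vanishes along lines in omega_y
combined = np.sqrt(P1**2 + P2**2)             # combined spectrum from the text

# Fraction of frequencies where each spectrum is nearly zero:
thresh = 1e-2
frac1 = np.mean(P1 < thresh)
frac_comb = np.mean(combined < thresh)
print(frac1, frac_comb)   # the combined spectrum has far fewer near-zeros
```

The lines of zeros of each individual kernel shrink to a sparse set of isolated points for the combination, which is exactly the improvement in conditioning described above.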
Secondly, for blind deblurring, the estimation of the blur kernels benefits even more from using multiple images. The main difficulty in blind deblurring is that the problem is under-constrained: there exist infinitely many solutions. For example, the famous degenerate solution to (4),

    p := δ,    g := f,

where δ is the Dirac delta, has been a headache for many blind deblurring techniques. Although the degenerate solution can be avoided by some ad-hoc processes, the inherent ambiguities between the kernel and the image still lead to the poor performance of most available techniques in removing complex motion blurring from images.
However, such ambiguities can be significantly reduced by using multiple images. Again, let us examine one example. In the case of a single image, let the blur kernel be decomposed into two parts: p = p₁ * p₂. Then besides the true solution (p, g), (p₁, p₂ * g) is also a solution to (4). As long as p₂ is a low-pass filter, even imposing the other available physical constraints on images or blur kernels, e.g.,

    p ≥ 0;    Σ_{j,k} p(j, k) = 1;    g ≥ 0,        (6)

will not eliminate such ambiguities. On the contrary, in the case of multiple images, it is very unlikely that such ambiguities occur. In order to have another solution to the system (5) when using multiple images, all M kernels p_i need to have a common factor p₂ such that

    p_i = p_{1i} * p₂.

Considering the fact that the support of each p_i is a curve in 2D, it is unlikely that all p_i will have a non-trivial common factor.
In summary, multiple blurred images of the same scene provide much more information than a single blurred image does, which leads to a better configuration for recovering a clear image of the scene. However, some new challenging computational problems also arise when taking a multi-frame approach.
2.2. Challenges of using multiple images
When we take multiple images of the same scene, the camera pose varies during capture. In other words, we have multiple observed blurred images f_i, each related to the clear picture g up to a spatial geometrical transform h_i:

    f_i = g(h_i(·)) * p_i + n_i,    i = 1, 2, …, M.
The estimation of h_i is known as the image registration problem, which has been extensively studied in the past (see [34] for more details). However, most image registration techniques need to assume a simple model (e.g. an affine transform) for the geometrical transform, which may not approximate the true geometrical transform well when the 3D structure of the scene is complicated. Furthermore, when images are seriously blurred, the appearances of the same scene blurred by different blur kernels are very different from each other. Currently there does not exist an alignment technique capable of accurately aligning seriously blurred images. Therefore, we have a new perturbation source when using multiple images: image alignment error.

It is observed in [19] that alignment errors seriously impact the performance of multi-image blind deblurring. In [19], a simple experiment is carried out to illustrate the high sensitivity of blind deblurring to alignment errors when estimating blur kernels. The experiment considered a simplified configuration by assuming that the clear image and the blurred image are known up to a small alignment perturbation; the only unknown is the blur kernel. Thus, the inputs of the experiment are a clear image g, shown in Fig. 1 (a), and its blurred version
Figure 1: (a) clear image; (b) synthesized blurred version of (a), with the blur kernel used shown at its top right. (c)-(g) are the blur kernels estimated from the blurred image and transformed versions of the clear image with different rotations and scales; the two numbers in brackets under each kernel are the rotation angle and the scale value (θ, s): (−1.4°, 0.98), (−0.7°, 0.99), (0°, 1.00), (0.7°, 0.99), (1.4°, 0.98). All images are taken from [19].
f, shown in Fig. 1 (b), up to an alignment perturbation. The corresponding blur kernel p is shown in the top right corner of the blurred image. The alignment perturbations are simulated by applying a similarity transform to Fig. 1 (b) with various pairs of rotations and scales (θ, s) and with the same translation (t_x, t_y):

    h : (x, y)ᵀ ↦ s [ cos θ  −sin θ ; sin θ  cos θ ] (x, y)ᵀ + (t_x, t_y)ᵀ.        (7)

Then the blur kernel is estimated based on the clear image g in Fig. 1 (a) and the blurred image in Fig. 1 (b) with a geometrical perturbation h defined in (7).
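The perturbation model (7) is an ordinary similarity transform of pixel coordinates. A minimal sketch (the 1.4° / 0.98 values mirror the magnitudes used in Fig. 1; the function name is our own) shows how far such a "small" perturbation moves a point far from the rotation center:

```python
import numpy as np

def similarity_transform(points, theta_deg, s, t=(0.0, 0.0)):
    """Apply h of Eq. (7): rotate by theta, scale by s, then translate.

    points : array of shape (2, K) holding (x, y) coordinates column-wise.
    """
    theta = np.deg2rad(theta_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * (R @ points) + np.asarray(t, dtype=float).reshape(2, 1)

# A 1.4-degree rotation with a 2% scale error moves a pixel that is 200
# pixels from the rotation center by several pixels -- enough, per the
# experiment of [19], to corrupt the estimated kernel.
corner = np.array([[200.0], [0.0]])
moved = similarity_transform(corner, 1.4, 0.98)
displacement = np.linalg.norm(moved - corner)
```

The displacement grows linearly with the distance from the rotation center, which is why even sub-degree alignment errors matter for kernel estimation.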
In other words, the estimated blur kernel p̃ is based on the following perturbed version of the original equation:

    p̃ * g(h(·)) = f = p * g(·).        (8)

In the experiment done by [19], p̃ is estimated by solving (8) using a least squares method with Tikhonov regularization. It is noted that multi-image blind deblurring is not sensitive to small translation errors, as a translation between two images only results in a spatial shift of the estimated kernel. Thus, the translation error is fixed as a constant in the experiment.

The estimated blur kernels p̃ are shown in Fig. 1 (c)-(g) with respect to small alignment perturbations of the form (7) for various s and θ. The results clearly show that the blur kernel is very sensitive to even a small alignment error in the simplified case where the true clear image is available. In practice, the problem is much more ill-conditioned, as we do not have the clear image in hand. This experiment clearly indicates the importance of robustness to alignment errors when developing multi-image blind motion deblurring techniques.
3. Formulation for blind motion deblurring with sparsity regularizations

3.1. Outline of our proposed algorithm

Given M blurred images f_i, i = 1, 2, …, M satisfying the relationship (3):

    f_i = g((I + δh_i)(·)) * p_i + n_i,    i = 1, 2, …, M,

we take a regularization-based approach to the blind motion deblurring problem, which requires the simultaneous estimation of both the clear image g and the M blur kernels p_i, i = 1, …, M. It is well known that regularization-based blind deconvolution approaches usually result in a challenging non-convex minimization problem. In our case, the number of unknowns is up to the order of millions. The most commonly used approach is an alternating iteration scheme; see [9] for instance. The alternating iteration scheme can be described as follows: let g^(0) be the initial guess of the clear image.
Algorithm 1 Outline of the alternating iterations

For k = 0, 1, …:
1. Given the clear image g^(k), compute the blur kernels p_i^(k+1), i = 1, 2, …, M.
2. Given the blur kernels p_i^(k+1), i = 1, 2, …, M, compute the clear image g^(k+1).
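In code, Algorithm 1 is a plain alternating loop. The sketch below fixes only the control flow; the two solvers are passed in as callables, since the actual updates are developed in the following sections (the function and parameter names are our own):

```python
def alternating_deblur(blurred, g0, estimate_kernels, deblur, n_outer=10):
    """Outline of Algorithm 1 as an alternating iteration.

    blurred          : list of M blurred images f_i
    g0               : initial guess of the clear image
    estimate_kernels : callable (g, blurred) -> list of M kernels   (Step 1)
    deblur           : callable (kernels, blurred) -> clear image   (Step 2)
    """
    g = g0
    kernels = None
    for _ in range(n_outer):
        kernels = estimate_kernels(g, blurred)   # Step 1: update p_i given g
        g = deblur(kernels, blurred)             # Step 2: update g given p_i
    return g, kernels
```

Each outer iteration solves two coupled subproblems; the non-convexity of the joint problem is handled by this alternation rather than by a single minimization.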
There are two steps in Algorithm 1. Step 2 is a non-blind image deblurring problem, which has been studied extensively in the literature; see, for instance, [1, 22, 8, 7, 18, 5]. However, Step 2 has one more error source than the traditional non-blind deblurring problem: the error in the intermediate blur kernels p_i^(k+1) used for deblurring. Inspired by recent non-blind deblurring techniques based on sparse approximation of the image under certain tight frame systems ([7, 4]), we also use a sparsity constraint on the clear image g under tight frame systems to regularize the non-blind deblurring. And we use a modified version of the so-called linearized Bregman iteration (see [24, 32, 31, 16, 20, 11, 25, 4, 5, 6, 15]) to achieve impressive robustness to image noise, alignment errors, and, more importantly, perturbations of the given intermediate blur kernels. In our implementation, we choose the tight framelet system constructed in [12, 30] as the tight frame system representing the clear image g.

For Step 1, it is observed in Fig. 1 that alignment error leads to a false estimation of the motion blur kernel. The support of the false kernel tends to be much larger than that of the true blur kernel. Based on this observation, we propose to overcome the sensitivity of blur kernel estimation to alignment errors by exploiting a sparsity constraint on the motion blur kernel in its spatial domain. Similarly, we also use a modified version of the linearized Bregman iteration to solve the resulting minimization problem. Before we present the detailed algorithm, we give a brief introduction to the framelet system used in our method in the remainder of this section.
3.2. Tight framelet system and image representation

A countable set X ⊂ L²(R) is called a tight frame of L²(R) if

    f = Σ_{h∈X} ⟨f, h⟩ h,    ∀ f ∈ L²(R),        (9)

where ⟨·, ·⟩ is the inner product of L²(R). An orthonormal basis is a tight frame; hence a tight frame is a generalization of an orthonormal basis. However, tight frames sacrifice the orthonormality and the linear independence of the system in order to gain more flexibility. Tight frames can be redundant.
For given Ψ := {ψ₁, …, ψ_r} ⊂ L²(R), the affine (or wavelet) system is defined by the collection of the dilations and the shifts of Ψ as

    X(Ψ) := {ψ_{ℓ,j,k} : 1 ≤ ℓ ≤ r; j, k ∈ Z}  with  ψ_{ℓ,j,k} := 2^{j/2} ψ_ℓ(2^j · − k).        (10)

When X(Ψ) forms a tight frame of L²(R), it is called a tight wavelet frame, and ψ_ℓ, ℓ = 1, …, r, are called the (tight) framelets.


To construct a set of framelets, one usually starts with a compactly supported refinable function φ ∈ L²(R) (a scaling function) with a refinement mask τ_φ satisfying

    φ̂(2ω) = τ_φ(ω) φ̂(ω).

Here φ̂ is the Fourier transform of φ, and τ_φ is a trigonometric polynomial with τ_φ(0) = 1; i.e., a refinement mask of a refinable function must be a lowpass filter. For a given compactly supported refinable function, the construction of tight framelet systems amounts to finding a finite set Ψ that can be represented in the Fourier domain as

    ψ̂_ℓ(2ω) = τ_ℓ(ω) φ̂(ω)

for some 2π-periodic τ_ℓ. The unitary extension principle (UEP) of [30] says that X(Ψ) in (10) generated by Ψ forms a tight frame in L²(R) provided that the masks τ_φ and τ_ℓ satisfy

    τ_φ(ω) τ̄_φ(ω + γπ) + Σ_{ℓ=1}^{r} τ_ℓ(ω) τ̄_ℓ(ω + γπ) = δ_{γ,0},    γ = 0, 1,        (11)

for almost all ω in R. Here τ_φ must correspond to a lowpass filter and the τ_ℓ must correspond to highpass filters. The sequences of Fourier coefficients of the τ_ℓ, as well as the τ_ℓ themselves, are called framelet masks. In our implementation, we adopt the piecewise linear B-spline framelet constructed in [12, 30]. The refinement mask is τ_φ(ω) = cos²(ω/2), whose corresponding lowpass filter is

    h₀ = (1/4) [1, 2, 1].
Figure 2: The piecewise linear refinable function φ and the framelets ψ₁, ψ₂.

The two framelet masks are τ₁(ω) = −(√2 i / 2) sin(ω) and τ₂(ω) = sin²(ω/2), whose corresponding highpass filters are

    h₁ = (√2/4) [1, 0, −1],    h₂ = (1/4) [−1, 2, −1].

The associated refinable function and framelets are shown in Fig. 2. From a 1D framelet system for L²(R), the 2D framelet system for L²(R²) is easily constructed by taking tensor products of the 1D framelets.
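The UEP identities (11) for these piecewise linear masks can be verified numerically; the sketch below checks both the γ = 0 and γ = 1 conditions on a frequency grid:

```python
import numpy as np

def masks(w):
    """Piecewise linear B-spline framelet masks in the Fourier domain."""
    tau0 = np.cos(w / 2) ** 2                   # refinement (lowpass) mask
    tau1 = -(np.sqrt(2) * 1j / 2) * np.sin(w)   # first framelet mask
    tau2 = np.sin(w / 2) ** 2                   # second framelet mask
    return tau0, tau1, tau2

w = np.linspace(-np.pi, np.pi, 257)
t = masks(w)
tp = masks(w + np.pi)

# gamma = 0: the sum of |tau_l(w)|^2 must equal 1.
gamma0 = sum(np.abs(m) ** 2 for m in t)
# gamma = 1: the sum of tau_l(w) * conj(tau_l(w + pi)) must vanish.
gamma1 = sum(a * np.conj(b) for a, b in zip(t, tp))

assert np.allclose(gamma0, 1.0)
assert np.allclose(gamma1, 0.0)
```

The γ = 0 identity reduces to cos⁴(ω/2) + (1/2)sin²(ω) + sin⁴(ω/2) = 1, which holds since sin²(ω) = 4 sin²(ω/2)cos²(ω/2).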
In the discrete case, an n×n image f is regarded as the coefficients f(i) = ⟨f, φ(· − i)⟩, i ∈ Z², up to a dilation, where φ is the refinable function associated with the framelet system and ⟨·, ·⟩ is the inner product in L²(R²). The L-level discrete framelet decomposition of f then consists of the coarse-level coefficients

    ⟨f, 2^{−L/2} φ(2^{−L} · − j)⟩,    j ∈ Z²,

at a prescribed coarsest level L, together with the framelet coefficients

    ⟨f, 2^{−l/2} ψ_i(2^{−l} · − j)⟩,    j ∈ Z², 1 ≤ i ≤ r² − 1,

for 0 ≤ l ≤ L.

If we denote f as a vector f ∈ R^N, N = n², by concatenating all columns of the image, the discrete framelet decomposition of f can be described by the vector Af, where A is a K × N matrix. By the UEP (11), AᵀA = I; thus the rows of A form a tight frame system in R^N. In other words, the framelet decomposition operator A can be viewed as a tight frame system in R^N: its rows form a tight frame in R^N such that the perfect reconstruction formula x = Σ_y ⟨x, y⟩ y, where y runs over the rows of A, holds for all x ∈ R^N. Unlike the orthonormal case, we emphasize that AAᵀ ≠ I in general. In our implementation, we use a multi-level tight framelet decomposition without down-sampling under the Neumann (symmetric) boundary condition. The detailed description can be found in [7].
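A one-level, undecimated 1D version of this decomposition can be sketched with FFTs. For brevity the sketch uses a periodic boundary instead of the Neumann boundary condition of [7], but it exhibits the same perfect reconstruction property AᵀA = I:

```python
import numpy as np

# Piecewise linear B-spline framelet filters (one lowpass, two highpass).
h = [np.array([1, 2, 1]) / 4.0,
     np.array([1, 0, -1]) * np.sqrt(2) / 4.0,
     np.array([-1, 2, -1]) / 4.0]

def decompose(x):
    """One-level undecimated framelet decomposition (periodic boundary).

    Returns three coefficient bands; together they play the role of A x.
    """
    n = len(x)
    X = np.fft.fft(x)
    H = [np.fft.fft(f, n) for f in h]
    return [np.fft.ifft(Hk * X) for Hk in H]

def reconstruct(bands):
    """Adjoint synthesis A^T; exact because sum_k |H_k(w)|^2 = 1 pointwise."""
    n = len(bands[0])
    H = [np.fft.fft(f, n) for f in h]
    out = sum(np.fft.ifft(np.conj(Hk) * np.fft.fft(c))
              for Hk, c in zip(H, bands))
    return np.real(out)

x = np.cos(np.linspace(0, 4 * np.pi, 128))
assert np.allclose(reconstruct(decompose(x)), x)   # A^T A = I
```

Note that A here is redundant (three bands of length n from a signal of length n), so AAᵀ ≠ I even though AᵀA = I, exactly as stated above.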
4. Numerical algorithm

This section is devoted to the detailed numerical algorithm of our blind deconvolution method outlined in Algorithm 1. Both steps in Algorithm 1 are based on the linearized Bregman iteration. The Bregman iteration was first introduced for the non-differentiable TV energy in [24], and was then successfully applied to ℓ1-norm minimization in compressed sensing in [32] and to wavelet-based denoising in [31]. The Bregman iteration was also used in TV-based blind deconvolution in [16, 20]. To further improve the performance of the Bregman iteration, a linearized Bregman iteration was invented in [11]; see also [32]. More details, and an improvement called kicking, of the linearized Bregman iteration were described in [25], and a rigorous theory was given in [4, 6]. The linearized Bregman iteration for frame-based image deblurring was proposed in [5]. Recently, a new type of iteration based on the Bregman distance, called the split Bregman iteration, was introduced in [15], which extended the utility of the Bregman and linearized Bregman iterations to minimizations of more general ℓ1-based regularizations, including total variation, Besov norms, and sums of such terms.
Consider M blurred images f_i ∈ R^N, i = 1, …, M. We assume that the size of each blur kernel is no larger than n×n. Let p_i ∈ R^N denote the blur kernel p_i after column concatenation. Let [·]_* denote the convolution matrix of p or f after the concatenation, so that

    p * f = [p]_* f = [f]_* p.

Let u = Ag denote the framelet coefficient vector of the clear image g.
4.1. Method for Step 2 in Algorithm 1

In Step 2 of Algorithm 1, at the k-th iteration, we need to find a clear image g^(k+1) given the blur kernels p_i^(k+1), i = 1, 2, …, M.

In the initial stage, since the kernels are not yet in good shape, it is not necessary to find an accurate g^(k+1). We simply use a least squares deblurring algorithm, i.e., we solve

    min_g (1/2) Σ_{i=1}^{M} ‖ [p_i^(k+1)]_* g − f_i ‖₂² + λ ‖g‖₂².        (12)

This can be done efficiently by FFTs.
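Under periodic boundary conditions the least squares problem (12) is diagonalized by the 2D FFT, so its minimizer has a closed form. A sketch (the regularization weight is folded in as lam/2, and all sizes and values are illustrative):

```python
import numpy as np

def least_squares_deblur(blurred, kernels, lam=1e-3):
    """Minimize (1/2) sum_i ||p_i * g - f_i||^2 + (lam/2) ||g||^2.

    blurred, kernels : lists of 2D arrays of equal shape (kernels padded).
    With periodic boundaries every operator is diagonal in Fourier space,
    so the normal equations are solved entrywise.
    """
    num = 0.0
    den = lam
    for f, p in zip(blurred, kernels):
        P = np.fft.fft2(p)
        num = num + np.conj(P) * np.fft.fft2(f)
        den = den + np.abs(P) ** 2
    return np.real(np.fft.ifft2(num / den))

# Sanity check: with two noise-free blurred copies (horizontal plus
# vertical blur, complementary spectra), a small lam recovers g well.
n = 32
g = np.zeros((n, n)); g[8:24, 8:24] = 1.0
p1 = np.zeros((n, n)); p1[0, :5] = 0.2      # horizontal box blur
p2 = np.zeros((n, n)); p2[:5, 0] = 0.2      # vertical box blur
fs = [np.real(np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(p)))
      for p in (p1, p2)]
g_rec = least_squares_deblur(fs, [p1, p2], lam=1e-6)
```

The denominator Σ_i |p̂_i|² + λ is exactly the combined spectrum discussed in Section 2.1, which is why the two complementary kernels together give a well-conditioned solve.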
In the final stage, the kernels are in good shape, so we need an accurate solution for the clear image. For this, we solve the image deblurring problem in the tight framelet domain. Let u be the tight framelet coefficients of the clear image g^(k+1), i.e., g^(k+1) = Aᵀu. Our strategy for recovering the clear image g^(k+1) is to find a sparse solution u in the tight framelet domain among all solutions satisfying reasonable constraints.

Temporarily, we ignore the mis-alignment error and assume that the blur kernels are accurate enough that there exist solutions to the equations

    [p_i^(k+1)]_* Aᵀu = f_i,    i = 1, 2, …, M.

To seek a sparse set of coefficients u, we minimize its ℓ1-norm ‖u‖₁. Thus, we solve

    min_u ‖u‖₁  subject to  [p_i^(k+1)]_* (Aᵀu) = f_i,    i = 1, 2, …, M.        (13)
The linearized Bregman iteration is a very efficient tool for solving the above minimization problem. Given x^(0) = v^(0) = 0, the linearized Bregman iteration generates sequences x^(l) and v^(l) as follows:

    v^(l+1) = v^(l) − Σ_{i=1}^{M} A P_i^(k+1) [p_i^(k+1)]ᵀ_* ( [p_i^(k+1)]_* (Aᵀ x^(l)) − f_i ),
    x^(l+1) = δ₂ T_{μ₂}(v^(l+1)).        (14)

Here T_{μ₂} is the soft-thresholding operator defined by

    T_{μ₂}(v) = [t_{μ₂}(v₁), t_{μ₂}(v₂), …],  with  t_{μ₂}(v_i) = sign(v_i) max(|v_i| − μ₂, 0),

and P_i^(k+1) is a preconditioning matrix used to accelerate the convergence of the iteration. Usually, we choose

    P_i^(k+1) = ( [p_i^(k+1)]ᵀ_* [p_i^(k+1)]_* + κ₂ Δ )^{−1},        (15)

where Δ is the discrete Laplacian and κ₂ is a positive parameter. Notice that the operators P_i^(k+1) and [p_i^(k+1)]ᵀ_* commute, since they are all entrywise multiplications in the Fourier domain.
The basic idea of the linearized Bregman iteration (14) for finding a sparse solution is as follows. Two steps are involved. The first step finds an approximate solution (a least squares solution in our case) to the residual equation of the constraint in (13), to update the data. However, the updated data may not be sparse. Therefore, in the second step a soft-thresholding operator is applied to obtain a sparse set of framelet coefficients. The procedure is repeated, and it converges to a sparse solution in the framelet domain. The algorithm is efficient and robust to noise, as analyzed in [5], and we also have the following convergence result from [5]. See [4, 5, 6, 32] for a more detailed analysis.
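The two-step structure of (14) is easiest to see on a small generic problem min ‖u‖₁ subject to Bu = f. The sketch below drops the framelet transform and the preconditioner and keeps only the two steps named above, a least squares (gradient) update and a soft-thresholding; all sizes and parameters are illustrative:

```python
import numpy as np

def soft_threshold(v, mu):
    """Entrywise t_mu(v_i) = sign(v_i) * max(|v_i| - mu, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(B, f, mu, delta, n_iter=20000):
    """Minimal linearized Bregman iteration for min ||u||_1 s.t. B u = f."""
    v = np.zeros(B.shape[1])
    x = np.zeros(B.shape[1])
    for _ in range(n_iter):
        v = v - B.T @ (B @ x - f)          # least squares (gradient) step
        x = delta * soft_threshold(v, mu)  # shrinkage keeps x sparse
    return x

# Recover a 2-sparse vector from 15 random measurements in dimension 30.
rng = np.random.default_rng(1)
B = rng.standard_normal((15, 30)) / np.sqrt(15)
u_true = np.zeros(30); u_true[3] = 1.5; u_true[17] = -2.0
f = B @ u_true
u_rec = linearized_bregman(B, f, mu=5.0, delta=0.1)
```

Only the residual enters the v-update while the thresholding produces the sparse iterate, mirroring the roles the two lines of (14) play in the framelet domain.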
Proposition 1. Assume that there exists at least one solution of [p_i^(k+1)]_* (Aᵀu) = f_i, i = 1, 2, …, M. Then there exists a constant c such that, for all δ₂ ∈ (0, c), the sequence x^(l) generated by (14) converges to the unique solution of

    min_u ‖u‖₁ + (1 / (2 μ₂ δ₂)) ‖u‖₂²  subject to  [p_i^(k+1)]_* (Aᵀu) = f_i,    i = 1, 2, …, M.        (16)

Therefore, if we choose a sufficiently large thresholding parameter μ₂, the iteration (14) converges to a solution of (13).
During the iterations of Algorithm 1, the intermediate results of the blur kernels are not accurate until the last few iterations. More importantly, there are alignment errors among the observed images. Thus, to obtain a good deblurred image, one can still use (13), but needs to stop early, once the error of the constraint in (13) satisfies
$$\big\|[p_i^{(k+1)}]_\circledast(A^T u) - f_i\big\|_2^2 \le \epsilon_2^2, \quad i = 1, 2, \ldots, M, \qquad (17)$$
where $\epsilon_2^2$ is an estimate of the variance of the image noise, the inaccuracy of the blur kernels, and the image alignment errors. The main reason is that the Bregman iteration has a useful property: the Bregman distance decreases monotonically, so $x^{(l)}$ approaches the tight frame coefficients of the true image until the residual in the iteration drops below the variance of the errors, as shown theoretically in [24, 32]. Furthermore, (14) is very robust to image noise and alignment errors in $f_i$ ([5]).
In summary, in Step 2 of Algorithm 1, we use the linearized Bregman iteration (14) with the stopping criterion (17) to get a clear image. Usually, it takes only tens of iterations for (14) to produce an image of satisfactory visual quality.
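As an illustration only, here is a toy version of a linearized Bregman loop with residual-based early stopping in the spirit of (17). It omits the framelet transform and the preconditioner, and uses a plain matrix $A$ in place of the paper's convolution operators; the function name and parameters are our own assumptions.

```python
import numpy as np

def linearized_bregman(A, f, mu, delta, eps, max_iter=500):
    """Toy linearized Bregman iteration for min ||x||_1 s.t. Ax = f,
    stopped early once ||Ax - f||_2^2 <= eps^2 (discrepancy principle)."""
    r = np.zeros(A.shape[1])           # accumulated (dual) variable
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = r - A.T @ (A @ x - f)                                  # data step
        x = delta * np.sign(r) * np.maximum(np.abs(r) - mu, 0.0)   # shrinkage
        if np.sum((A @ x - f) ** 2) <= eps ** 2:                   # early stop
            break
    return x

# With A = I the constrained solution is x = f; the loop reaches it quickly.
f = np.array([3.0, -0.5, 0.0, 2.0, 0.0])
x = linearized_bregman(np.eye(5), f, mu=1.0, delta=0.5, eps=1e-3)
```

The two-step structure (data update, then shrinkage) mirrors the description above; the early stop plays the role of (17) when the data are noisy.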
4.2. Method for Step 1 in Algorithm 1
In Step 1 of Algorithm 1, given the intermediate clear image $g^{(k)}$, we want to compute the blur kernels $p_i^{(k+1)}$, $i = 1, 2, \ldots, M$. As shown in (2), a true motion blur kernel can be approximated well by a smooth function supported on a continuous curve. Two essential properties characterize a sound motion blur kernel: one is the overall smoothness of the kernel; the other is its curvy support, which implies sparsity in the spatial domain. Inspired by this observation, we model the motion blur kernel as a sparse solution in the spatial domain with strong smoothness on its support. Thus, the proposed algorithm finds the sparse blur kernels $p_i^{(k+1)}$, $i = 1, 2, \ldots, M$ in the spatial domain subject to certain smoothness constraints, which results in an $\ell_1$ norm minimization problem. To further improve the accuracy and the efficiency of estimating the blur kernels, we use a weighted $\ell_1$ norm instead of the ordinary one.
As in our previous discussion, we temporarily ignore the image alignment errors and assume that the clear image is accurate enough that there exist solutions to the equations
$$[g^{(k)}]_\circledast\, p_i = f_i, \quad i = 1, 2, \ldots, M.$$
Since a weighted $\ell_1$-norm is minimized, we solve
$$\operatorname*{argmin}_{p_i} \|W_i p_i\|_1, \quad \text{subject to } [g^{(k)}]_\circledast\, p_i = f_i, \quad i = 1, 2, \ldots, M, \qquad (18)$$
where $W_i$ is a diagonal weighting matrix. Again, the linearized Bregman iteration can be applied to solve (18). The iteration is as follows, starting from $r_i^{(0)} = q_i^{(0)} = 0$:
$$\begin{cases} r_i^{(l+1)} = r_i^{(l)} - Q^{(k)}[g^{(k)}]_\circledast^T\big([g^{(k)}]_\circledast\, q_i^{(l)} - f_i\big),\\[4pt] q_i^{(l+1)} = \delta_1\, W_i^{-1}\, T_{\mu_1}\big(W_i^{-1} r_i^{(l+1)}\big). \end{cases} \qquad (19)$$
Here $T_{\mu_1}$ is the soft-thresholding operator and $Q^{(k)}$ is a preconditioning matrix:
$$Q^{(k)} = \Big([g^{(k)}]_\circledast^T\,[g^{(k)}]_\circledast + \beta_1 \Delta\Big)^{-1},$$
where $\Delta$ is the discrete Laplacian. Again, $Q^{(k)}$ and $[g^{(k)}]_\circledast^T$ commute. Similar to Proposition 1, (19) gives a sparse solution, and we have the following convergence result for (19).
Proposition 2. Assume that there exists at least one solution of $[g^{(k)}]_\circledast\, p_i = f_i$, $i = 1, 2, \ldots, M$. Then there exists a constant $c$ such that, for all $\delta_1 \in (0, c)$, the sequence $q_i^{(l)}$ generated by (19) converges to the unique solution of
$$\min_{p_i} \|W_i p_i\|_1 + \frac{1}{2\mu_1\delta_1}\|p_i\|_2^2 \quad \text{subject to } [g^{(k)}]_\circledast\, p_i = f_i, \quad i = 1, 2, \ldots, M. \qquad (20)$$
Therefore, if we choose a sufficiently large thresholding parameter $\mu_1$, then the iteration (19) converges to a solution of (18).
During the iterations of Algorithm 1, the intermediate result of the clear image is not accurate, and there are alignment errors among the observed images. Similar to the method for Step 2, we still use (19), but stop it early when the error of the constraint satisfies
$$\big\|[g^{(k)}]_\circledast\, q_i^{(l)} - f_i\big\|_2^2 \le \epsilon_1^2, \qquad (21)$$
where $\epsilon_1^2$ is an estimate of the variance of the errors caused by the noise, the inaccuracy of the clear image, and the alignment errors. Therefore, $q_i^{(l)}$ generated by (19), with a large $\mu_1$ and the stopping criterion (21), gives a good estimate of the blur kernel. From our empirical observations, only a couple of iterations already yield a satisfactory estimate of the blur kernel.
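The effect of weighted thresholding in the second half of (19) can be seen in isolation with a small sketch (our own, with diagonal weights stored as a vector): a large weight on an entry pushes its rescaled value below the threshold, so that entry is zeroed out.

```python
import numpy as np

def weighted_shrink(r, w, mu, delta):
    # q = delta * W^{-1} T_mu(W^{-1} r) with W = diag(w): heavily weighted
    # entries fall below the threshold mu and are set to zero.
    z = r / w                                              # W^{-1} r
    shrunk = np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)  # T_mu
    return delta * shrunk / w                              # delta * W^{-1}(...)

r = np.array([4.0, 4.0])
w = np.array([1.0, 10.0])   # second entry lies off the estimated support
q = weighted_shrink(r, w, mu=1.0, delta=1.0)   # -> [3.0, 0.0]
```

Identical raw values thus survive or vanish depending only on their weights, which is how the weights steer the kernel estimate toward the expected support.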
From Proposition 2, (19) minimizes the $\ell_2$ norm of the blur kernel among all minimal weighted $\ell_1$ norm solutions (see [4, 5, 6] for details). We would like to explain further why (19) is likely to yield a blur kernel that is a smooth function with a curvy support. In the first step of (19), the operation $Q^{(k)}[g^{(k)}]_\circledast^T\big([g^{(k)}]_\circledast\, q_i^{(l)} - f_i\big)$ essentially yields the solution of the following minimization:
$$\min_{p_i} \frac{1}{2}\big\|[g^{(k)}]_\circledast\, p_i - t_i^{(k)}\big\|^2 + \beta_1\|\nabla p_i\|^2,$$
where $t_i^{(k)}$ is the residual of the $i$-th observed image at the $k$-th step, satisfying
$$t_i^{(k)} = [g^{(k)}]_\circledast\, q_i^{(l)} - f_i.$$
In other words, a Tikhonov regularization with the penalty $\|\nabla p_i\|^2$ is applied in the preconditioning step. Thus, smoothness is imposed on the estimated blur kernel to better constrain the smoothness of $p_i$. The side effect is that the resulting blur kernel tends to spread out of the true support due to the smoothing. The second step of (19) then removes this side effect by finding a sparse solution with minimal weighted $\ell_1$ norm from the previous one, which is done by applying a soft-thresholding operator, as shown in (19). The support of the resulting sparse solution is thus shrunk and likely to approximate the true support better than the previous one.
If only the Tikhonov regularization is used, the resulting blur kernel will be a smooth function supported on a large region; if only the sparsity constraint is used, the resulting blur kernel will very likely converge to the degenerate case of only a few isolated points. By balancing the smoothness of the kernel via Tikhonov regularization against the sparsity of the kernel in the spatial domain via a weighted $\ell_1$ norm, the sequence generated by (19) converges, as Proposition 2 proves, and the resulting solution is close to the ideal motion blur kernel model, that is, a smooth function supported on a continuous curve.
How to choose an appropriate $W_i$ depends on the support of the true blur kernel $p_i^{\ast}$. A good example of $W_i$ is
$$W_i(m, m) = \begin{cases} \dfrac{1}{|p_i^{\ast}(m)|}, & \text{if } p_i^{\ast}(m) \neq 0,\\ \infty, & \text{otherwise;} \end{cases} \qquad \text{and } W_i(m, n) = 0 \text{ if } m \neq n,$$
for $m, n = 1, \ldots, N$. Unfortunately, it is impossible to construct such a $W_i$ without knowing the blur kernel $p_i$. In our approach, we take a simple algorithm which updates the weighting matrices $W_i$ iteratively. That is, during the $k$-th iteration of the blur kernel estimation, we define the weighting matrix $W_i^{(k)}$ as
$$W_i^{(k)}(m, n) = \begin{cases} \big(|([a]_\circledast\, p_i^{(k)})(m)| + \epsilon\big)^{-1}, & \text{if } m = n,\\ 0, & \text{otherwise,} \end{cases}$$
based on the value $p_i^{(k)}$ obtained in the previous iteration. Here $[a]_\circledast$ denotes convolution with a local averaging kernel. The weighting matrix $W_i^{(k)}$ will be used in the next iteration. The parameter $\epsilon$ avoids numerical instability. It is observed empirically that such a weighted $\ell_1$ norm can greatly speed up the algorithm.
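The reweighting rule above can be sketched in one dimension as follows (an illustration under our own naming; the local averaging kernel $[a]$ is taken as a short box filter):

```python
import numpy as np

def update_weights(p, eps=1e-3, avg_width=3):
    # Diagonal of W^{(k)}: reciprocal of the locally averaged kernel
    # estimate, so entries on or near the current support get small
    # weights (weak penalty) while far-off entries get weights ~ 1/eps.
    a = np.ones(avg_width) / avg_width           # local averaging kernel [a]
    smoothed = np.convolve(p, a, mode="same")    # [a] convolved with p^{(k)}
    return 1.0 / (np.abs(smoothed) + eps)

p_current = np.array([0.0, 0.5, 0.5, 0.0, 0.0])  # toy kernel estimate
w = update_weights(p_current)
# Entries on/near the support receive much smaller weights than far-off ones.
```

The local averaging slightly dilates the support before inverting, so entries adjacent to the current support are not penalized as harshly as truly distant ones.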
4.3. The Complete Algorithm
Due to various types of noise and errors, the numerical algorithms do not always yield a solution that satisfies the physical rules of imaging in the digital world. In order to obtain a physical solution, we also impose the following physical conditions:
$$\begin{cases} p_i \ge 0, \quad \sum_j p_i(j) = 1, \quad i = 1, 2, \ldots, M,\\[2pt] g = A^T u \ge 0. \end{cases} \qquad (22)$$
The constraints (22) say that all pixel values of the recovered image have to be non-negative, and that each estimated kernel should also be non-negative and sum to 1. Note that the physical constraint on the kernel $p_i$ partially explains why the regular $\ell_1$ norm is not a good sparsity measure on $p_i$: the $\ell_1$ norm of $p_i$ is always 1 whenever $p_i$ satisfies the constraints (22). Combining everything, the complete algorithm for our blind deconvolution is described in Algorithm 2.
Algorithm 2 Complete algorithm for blind motion deblurring
1. Let $W_i^{(0)}$ and $g^{(0)}$ be initial guesses.
2. Iterate on $k$ until convergence.
   (a) Fixing the clear image $g^{(k)}$, iterate as in (19) until (21) is satisfied. Then impose $p_i^{(k+1/2)} = q_i^{(\bar{l})}$, where $\bar{l}$ is the minimal $l$ such that (21) is satisfied.
   (b) Get the blur kernels by
       $$p_i^{(k+1)} = \mathcal{T}\big(p_i^{(k+1/2)}\big),$$
       where $\mathcal{T}$ is the projection operator, with an explicit expression, onto the set $\{p : p(j) \ge 0, \sum_j p(j) = 1\}$.
   (c) Adjust the weightings $W_i$ by
       $$W_i^{(k+1)}(j, j') = \begin{cases} \dfrac{1}{|([a]_\circledast\, p_i^{(k+1)})(j)| + \epsilon}, & \text{if } j = j',\\ 0, & \text{otherwise.} \end{cases}$$
   (d) Fixing the blur kernels $p_i^{(k+1)}$, $i = 1, 2, \ldots, M$: if $k \le K$, get $g^{(k+1/2)}$ by solving (12); otherwise, iterate as in (14) until (17) is satisfied, then impose $u^{(k+1)} = x^{(\bar{l})}$, where $\bar{l}$ is the minimal $l$ such that (17) is satisfied, and set $g^{(k+1/2)} = A^T u^{(k+1)}$.
   (e) Get the clear image $g^{(k+1)}$ by
       $$g^{(k+1)}(j) = \begin{cases} g^{(k+1/2)}(j), & \text{if } g^{(k+1/2)}(j) \ge 0;\\ 0, & \text{otherwise.} \end{cases}$$
   (f) $k = k + 1$.
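Step 2(b) projects each intermediate kernel onto the simplex $\{p : p(j) \ge 0, \sum_j p(j) = 1\}$. The paper only notes that this projection has an explicit expression; one standard sort-and-threshold form (our choice, not necessarily the authors') is:

```python
import numpy as np

def project_simplex(p):
    """Euclidean projection onto {p : p_j >= 0, sum_j p_j = 1},
    via the classical sort-and-threshold formula."""
    u = np.sort(p)[::-1]                       # sort entries descending
    css = np.cumsum(u)
    j = np.arange(1, len(p) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]   # last valid index
    theta = (1.0 - css[rho]) / (rho + 1)       # common shift
    return np.maximum(p + theta, 0.0)

q = project_simplex(np.array([0.5, 1.2, -0.3]))   # approximately [0.15, 0.85, 0.0]
```

After each projection the kernel is a valid discrete probability distribution, matching the physical constraints in (22).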
(a) (b) (c)
Figure 3: (a) The original clear image. (b) One blurred image. (c) Another blurred image.
The corresponding blur kernels are shown on the top left of the images respectively.
5. Experimental Evaluation and discussion
In our implementation, the initial diagonal elements of $W_i^{(0)}$ are set to 1 at points falling in the center square of the image of size $\frac{n}{2} \times \frac{n}{2}$ and to $1/\epsilon$ otherwise, and the initial image $g^{(0)}$ is one of the input images $f_i$ for some $i$. Also, the maximum iteration number of Algorithm 2 is set to 100. The number $K$ in Step 2(d) of Algorithm 2 is set to about 2/3 of the maximum iteration number. The other parameters in Algorithm 2 are chosen empirically. All our experiments are done by running Matlab code of our algorithm on a Windows PC with an Intel 2 GHz CPU. Each iteration in Algorithm 1 takes roughly 18 seconds for all channels of two input blurred color images at a resolution of $1280 \times 1024$ pixels.
5.1. Simulated images
In the first experiment, we examine how robust the estimation of motion blur kernels by our proposed method is to alignment errors. The images used in this experiment are synthesized as follows. First, two blurred images (Fig. 3 (b) and (c)) are generated by applying two different blur kernels to the clear image (Fig. 3 (a)). Then an alignment error is added to the second blurred image (Fig. 3 (c)) by applying a spatial transform of the form (7) with different rotations and scales $(\theta, s)$. The translation $(t_x, t_y)$ is always kept the same (a 5-pixel shift along the x-axis and a 5-pixel shift along the y-axis). Our proposed method is then applied to each pair of blurred images (Fig. 3 (b) and Fig. 3 (c)) with spatial distortions to recover the clear image and the blur kernels. Fig. 4 shows the intermediate results during the iterations of Algorithm 2 to illustrate the convergence behavior of our algorithm. Fig. 5 (b) shows the motion blur kernels estimated by our method for different alignment errors.

It is clear that our proposed method can perfectly estimate the complicated blur kernels when the alignment is perfect. When there are modest mis-alignments between the two images, our method can still accurately estimate the two blur kernels. The results shown in Fig. 5 (b) demonstrate that our method
Iter. 10 Iter. 30 Iter. 50 Iter. 70 Iter. 90
(a)
(b)
Figure 4: The five images in (a) are the recovered images at the $k$-th iteration when applying our proposed algorithm to the two blurred images in Fig. 3 (b)-(c). The corresponding estimated blur kernels are shown in (b).
is capable of finding complicated blur kernels and is robust to modest alignment errors. For comparison, we also estimate the blur kernels by least squares minimization with Tikhonov regularization. Fig. 5 (a) shows that this standard approach cannot identify the motion blur kernel even when there is no alignment error. Fig. 6 (a)-(c) show several deblurred images obtained by our proposed method under different alignment errors. For comparison, Fig. 6 (d) shows the image deblurred by the least squares minimization method with Tikhonov regularization when there is no alignment error.
In the second experiment, we evaluate the robustness of our method to image noise. All blurred images in this experiment are generated by applying two blur kernels to the original image and subsequently contaminating the results with zero-mean white noise at different noise levels. Thirty-two random samples are generated for each noise level. The noise level is measured by the SNR (signal-to-noise ratio) of the noisy image $\tilde{I}$ with respect to the clean image $I$, defined as
$$\mathrm{SNR}(\tilde{I}) = 20 \log_{10} \big(\|I\|_2 / \|I - \tilde{I}\|_2\big).$$
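For concreteness, the SNR above can be computed as follows (a direct transcription of the formula; the array names are ours):

```python
import numpy as np

def snr_db(clean, noisy):
    # SNR(I~) = 20 * log10( ||I||_2 / ||I - I~||_2 ), in decibels
    return 20.0 * np.log10(np.linalg.norm(clean) / np.linalg.norm(clean - noisy))

clean = np.ones(100)
noisy = clean + 0.1          # uniform perturbation of 0.1 per pixel
print(snr_db(clean, noisy))  # approximately 20 dB
```

Higher values mean less noise; the experiments below range down to 26 dB.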
Fig. 7 shows that the estimation of the blur kernels by our method is also very robust to image noise.
5.2. Real images
We also tested our method on real images taken by a handheld commodity digital camera with small hand trembles. All images are first automatically aligned using the conventional image alignment method from [2] before applying our method.
Fig. 8 (a)-(b) and Fig. 9 (a)-(b) show two blurred tele-photos of two indoor objects, caused by hand trembles. For comparison, the recovered images from Algorithm 2 are compared against the results from the state-of-the-art blind motion deblurring technique ([14]), which utilizes a statistical prior on the
(0°, 1.000) (1°, 0.990) (2°, 0.983) (3°, 0.965) (4°, 0.948)
(a)
(0°, 1.000) (1°, 0.990) (2°, 0.983) (3°, 0.965) (4°, 0.948)
(b)
Figure 5: (a) The estimated kernels of the images in Fig. 3 (b)-(c) using least squares minimization with Tikhonov regularization. (b) The estimated kernels of the images in Fig. 3 (b)-(c) using our method. The two numbers in brackets under each pair of estimated kernels are the rotation angle parameter $\theta$ and the scale parameter $s$ of the alignment error of the form (7).
(0°, 1.000) (1°, 0.990) (2°, 0.983) (0°, 1.000)
(a) (b) (c) (d)
Figure 6: (a)-(c) are the deblurred images using the kernels from Algorithm 1. (d) is the deblurred image using the estimated kernel from the least squares method with Tikhonov regularization. The two numbers in brackets under each deblurred image are the rotation angle parameter and the scale parameter of (7).
38 34 30 26
(a) (b)
Figure 7: (a) The estimated kernels of Fig. 3 (b) and (c) under various noise settings. The horizontal vector on the top lists the SNRs (in dB) of the noisy images. (b) The left image is the noisy blurred image with SNR = 26 dB and the right image is the deblurred image.
image gradients to derive the motion blur kernel. The restored images from Algorithm 2, shown in Fig. 8 (d) and Fig. 9 (d), are very clear with few artifacts. They clearly have much better visual quality than the images restored by the method from [14], which are shown in Fig. 8 (c) and Fig. 9 (c).

We also tested our method on outdoor scenes. Blurred images of outdoor scenes usually tend to be more difficult to deblur, as there are multiple layers of blurring due to more complicated 3D structures, e.g., out-of-focus blurring and moving objects. Also, the complex image structure of typical outdoor scenes makes the deblurring process more challenging. Fig. 10 (a)-(b) and Fig. 11 (a)-(b) show two blurred image pairs of two outdoor scenes. We compared the results from Algorithm 2 to those from a more traditional cepstrum-based approach ([17]). The results from Algorithm 2 are clearly much better than those obtained using the method from [17]. However, the restored images shown in Fig. 10 (d) and Fig. 11 (d) are less impressive than the previous results on indoor images. One reason is that the framelet coefficients of images with rich textures are not as sparse as those of images with less texture, which makes our deblurring algorithm less robust to image noise. Also, there are more noticeable artifacts in Fig. 10 (d) than in Fig. 11 (d). The reason could be that the actual blurring in the case of Fig. 10 is a mixture of multiple blurring processes, while our model only addresses motion blurring. One piece of evidence is that the estimated blur kernels shown in Fig. 10 (e) are not in the form of typical motion-blur kernels. Another possible reason could be that the blur kernel of Fig. 10 is not spatially invariant, due to the wind blowing the leaves during the camera exposure. This can be seen from the fact that the artifacts in Fig. 10 (e) have different directions for different leaves.
5.3. Conclusion and future work
Using multiple images not only improves the conditioning of the deconvolution process, but also provides more information to help identify complicated motion blurring. However, the benefits of using multiple images cannot easily be realized by standard approaches, since the unavoidable image alignment errors can eliminate all the advantages of using multiple images. In this paper, we proposed an approach that recovers a high-quality clear image by using multiple images to accurately identify motion blur kernels. By using sparsity constraints on the images and on the blur kernels in suitable domains, the proposed approach is robust to image formation noise and, more importantly, robust to image alignment errors. Furthermore, based on the linearized Bregman iteration technique, we developed a fast algorithm that finds a good approximate solution to the resulting large-scale minimization problem very efficiently.

Our proposed method does not require a prior parametric model of the motion blur kernel, and it does not require accurate image alignment among frames. These two properties greatly extend the applicability of motion deblurring to general video sequences in practice. In the future, we would like to investigate the localization of our algorithm for spatially variant motion blurring, such as deblurring fast-moving objects in the image. Also, we are interested in investigating
(a) (b)
(c) (d) (e)
Figure 8: (a)-(b): two blurred images; (c): the recovered image using the method in [14]; (d): the deblurred image using Algorithm 2; (e): the two blur kernels estimated by Algorithm 2 w.r.t. (a) and (b).
(a) (b)
(c) (d) (e)
Figure 9: (a)-(b): two blurred images; (c): the recovered image using the method in [14]; (d): the deblurred image using Algorithm 2; (e): the two blur kernels estimated by Algorithm 2 w.r.t. (a) and (b).
(a) (b)
(c) (d) (e)
Figure 10: (a)-(b): two blurred images; (c): the recovered image using the newest cepstral method ([17]); (d): the deblurred image using Algorithm 2; (e): the two blur kernels estimated by Algorithm 2 w.r.t. (a) and (b).
(a) (b)
(c) (d) (e)
Figure 11: (a)-(b): two blurred images; (c): the recovered image using the newest cepstral method ([17]); (d): the deblurred image using Algorithm 2; (e): the two blur kernels estimated by Algorithm 2 w.r.t. (a) and (b).
how to incorporate the image alignment of blurred images into the proposed minimization to achieve even better performance.
References
[1] H. C. Andrews, B. R. Hunt, Digital Image Restoration, Prentice-Hall, Englewood Cliffs, NJ, 1977.
[2] J. Bergen, P. Anandan, K. Hanna, R. Hingorani, Hierarchical model-based
motion estimation, in: ECCV, 1992.
[3] M. Ben-Ezra, S. K. Nayar, Motion-based motion deblurring, IEEE Trans. Pattern Analysis and Machine Intelligence 26 (6) (2004) 689-698.
[4] J.-F. Cai, S. Osher, Z. Shen, Linearized Bregman iterations for compressed sensing, Mathematics of Computation 78 (2009) 1515-1536.
[5] J.-F. Cai, S. Osher, Z. Shen, Linearized Bregman iterations for frame-based image deblurring, SIAM J. Imaging Sci. 2 (1) (2009) 226-252.
[6] J.-F. Cai, S. Osher, Z. Shen, Convergence of the linearized Bregman iteration for $\ell_1$-norm minimization, Mathematics of Computation 78 (2009) 2127-2136.
[7] A. Chai, Z. Shen, Deconvolution: A wavelet frame approach, Numer. Math. 106 (2007) 529-587.
[8] T. F. Chan, J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2005.
[9] T. F. Chan, C. K. Wong, Total variation blind deconvolution, IEEE Trans. Image Processing 7 (3) (1998) 370-375.
[10] M. M. Chang, A. M. Tekalp, A. T. Erdem, Blur identification using the bispectrum, IEEE Transactions on Signal Processing 39 (1991) 2323-2325.
[11] J. Darbon, S. Osher, Fast discrete optimization for sparse approximations
and deconvolutions, in: preprint, 2007.
[12] I. Daubechies, B. Han, A. Ron, Z. Shen, Framelets: MRA-based constructions of wavelet frames, Appl. Comput. Harmon. Anal. 14 (2003) 1-46.
[13] D. C. Dobson, F. Santosa, Recovery of blocky images from noisy and blurred data, SIAM J. Appl. Math. 56 (1996) 1181-1198.
[14] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, W. T. Freeman, Removing camera shake from a single photograph, SIGGRAPH 25 (2006) 783-794.
23
[15] T. Goldstein, S. Osher, The split Bregman algorithm for l1 regularized
problems, UCLA CAM Reports (08-29).
[16] L. He, A. Marquina, S. J. Osher, Blind deconvolution using TV regularization and Bregman iteration, Int. J. Imaging Syst. Technol. 15 (2005) 74-83.
[17] H. Ji, C. Q. Liu, Motion blur identification from image gradients, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008, pp. 1-8.
[18] Y. Lou, X. Zhang, S. Osher, A. Bertozzi, Image recovery via nonlocal
operators, UCLA CAM Reports (08-35).
[19] Y. Lu, J. Sun, Q. L., H. Shum, Blurred/non-blurred image alignment using
kernel sparseness prior, IEEE Int. Conf. on Computer Vision (2007).
[20] A. Marquina, Inverse scale space methods for blind deconvolution, UCLA
CAM Reports (06-36).
[21] C. Mayntz, T. Aach, Blur identification using a spectral inertia tensor and spectral zeros, IEEE Transactions on Image Processing 2 (1999) 885559.
[22] M. K. Ng, R. H. Chan, W. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput. 21 (3) (2000) 851-866.
[23] M. Nikolova, Local strong homogeneity of a regularized estimator, SIAM J. Appl. Math. 61 (2000) 633-658.
[24] S. Osher, M. Burger, D. Goldfarb, J. Xu, W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul. 4 (2005) 460-489.
[25] S. Osher, Y. Mao, B. Dong, W. Yin, Fast linearized Bregman iteration for compressed sensing and sparse denoising, UCLA CAM Reports (08-37).
[26] A. Rav-Acha, S. Peleg, Two motion blurred images are better than one, Pattern Recognition Letters 26 (2005) 311-317.
[27] R. Raskar, J. Tumblin, A. Mohan, A. Agrawal, Y. Li, Computational photography, EUROGRAPHICS (2006).
[28] R. Raskar, A. Agrawal, J. Tumblin, Coded exposure photography: Motion deblurring via fluttered shutter, SIGGRAPH 25 (2006) 795-804.
[29] I. M. Rekleitis, Steerable filters and cepstral analysis for optical flow calculation from a single blurred image, Vision Interface 1 (1996) 159-166.
[30] A. Ron, Z. Shen, Affine systems in $L_2(\mathbb{R}^d)$: the analysis of the analysis operator, Journal of Functional Analysis 148 (1997) 408-447.
24
[31] J. Xu, S. J. Osher, Iterative regularization and nonlinear inverse scale space applied to wavelet-based denoising, IEEE Trans. Image Process. 16 (2007) 534-544.
[32] W. Yin, S. Osher, D. Goldfarb, J. Darbon, Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing, SIAM J. Imaging Sci. 1 (2008) 143-168.
[33] Y. Yitzhaky, I. Mor, A. Lantzman, N. S. Kopeika, Direct method for restoration of motion-blurred images, J. Opt. Soc. Am. 15 (6) (1998) 1512-1519.
[34] B. Zitova, J. Flusser, Image registration methods: a survey, Image and Vision Computing 21 (11) (2003) 977-1000.