Chapter 2
A Comprehensive Survey on Image
Binarization Techniques
Abstract A detailed, comprehensive survey of the principles of image binarization techniques is presented in this chapter. A number of classical methodologies together with recent works are considered, for comparison and for a study of the concept of binarization for both document and graphic images.
Keywords Review of binarization methods • Global binarization • Image
thresholding • Adaptive local binarization
2.1 Foundations of Image Binarization Techniques
A number of methodologies have been proposed by several researchers on image
segmentation using binarization and its applications toward moving object detec-
tion and human gait recognition. This section presents a review of the classical
methodologies found in the literature. Over the past four decades, several researchers
have proposed a variety of thresholding techniques for the binarization of document
images [1–7] as well as graphic images [8, 9]. The processing of documents of very
poor quality, where ink seeps through from the other side of the page, the paper and
ink are generally degraded, or background noise and variations in contrast and
illumination are present, is also addressed in the literature [10]. All the reported thresholding
methods have been demonstrated to be effective in constrained processing envi-
ronments with predictable images. However, pictures taken in real-life situations
may contain different artifacts such as shadows and non-uniform illumination. Proper
binarization of these images is very important for separating the foreground object
from the background. A good binarization will result in better recognition accu-
racy for any pattern recognition application.
Binarization can become a challenging job [10] under varying illumination
and noise. A number of factors contribute to complicate the thresholding scheme
including ambient illumination, variance of gray levels within the object and the
background, and inadequate contrast. A wrong selection of the threshold value may
cause background pixels to be misclassified as object, and vice versa, resulting
in overall degradation of system performance. In document analysis, binarization
is sensitive to noise, surrounding illumination, gray-level distribution, local shading
effects, inadequate contrast, and the presence of dense non-text components such
as photographs. At the same time, merges, fractures, and other deformations of the
character shapes affect the threshold value in OCR systems.
Binarization methods can be categorized into different groups depending on
the principal criterion they consider in calculating the threshold. Otsu [8] proposed
a cluster-analysis-based method built on image variance. The methods proposed by
Johannsen et al. [11] and Kapur et al. [12] are entropy based. Binarization methods
based on local image variance are proposed by Sauvola et al. [13] and Niblack [14].
Bernsen [9] proposed a thresholding approach based on image contrast. Kittler
et al. [15] consider an error measure in calculating the optimal threshold. Some of
these methods are discussed below in brief.
Otsu’s method [8] is the most successful global thresholding method. It auto-
matically performs histogram shape-based image thresholding for the reduction
of a gray-level image to a binary image. The algorithm assumes that the image
for thresholding contains two classes of pixels (e.g., foreground and back-
ground) and then calculates the optimum threshold separating those two classes
so that their combined spread (intra-class variance) is minimal. It exhaustively
searches for the threshold that minimizes the intra-class variance, defined as the
weighted sum of variances of the two classes. The weighted within-class variance
is $\sigma_w^2(t) = q_1(t)\,\sigma_1^2(t) + q_2(t)\,\sigma_2^2(t)$, where the class probabilities of the two groups of gray-level pixels are estimated as:

$$q_1(t) = \sum_{i=0}^{t} P(i) \quad \text{and} \quad q_2(t) = \sum_{i=t+1}^{255} P(i)$$
And the class means are given by:

$$\mu_1(t) = \frac{\sum_{i=0}^{t} i\,P(i)}{q_1(t)} \quad \text{and} \quad \mu_2(t) = \frac{\sum_{i=t+1}^{255} i\,P(i)}{q_2(t)}$$
Total variance ($\sigma^2$) = within-class variance ($\sigma_w^2(t)$) + between-class variance ($\sigma_b^2(t)$), where

$$\sigma_b^2(t) = q_1(t)\,[1 - q_1(t)]\,[\mu_1(t) - \mu_2(t)]^2$$

Since the total variance is constant and independent of $t$, the effect of changing
the threshold is merely to move the contributions between the two terms. Thus,
minimizing the within-class variance is the same as maximizing the between-class
variance. This method gives satisfactory results when the numbers of pixels in the two
classes are close to each other.
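As a concrete illustration, the following is a minimal NumPy sketch of this exhaustive search. It scores each candidate threshold by the between-class variance $\sigma_b^2(t) = q_1(t)\,q_2(t)\,[\mu_1(t)-\mu_2(t)]^2$, which equals the expression above since $q_2 = 1 - q_1$; the function and variable names are illustrative, not from [8].

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                      # gray-level probabilities P(i)
    best_t, best_sigma_b = 0, -1.0
    for t in range(256):
        q1, q2 = p[:t + 1].sum(), p[t + 1:].sum()
        if q1 == 0.0 or q2 == 0.0:
            continue
        mu1 = (np.arange(t + 1) * p[:t + 1]).sum() / q1
        mu2 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / q2
        sigma_b = q1 * q2 * (mu1 - mu2) ** 2   # between-class variance
        if sigma_b > best_sigma_b:
            best_sigma_b, best_t = sigma_b, t
    return best_t
```

For a typical 8-bit document image, `otsu_threshold(img)` returns the gray level that best separates the two histogram modes.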
In locally adaptive thresholding algorithms, a threshold is calculated at
each pixel, which depends on some local statistics such as range, variance, or
surface-fitting parameters of the pixel neighborhood. In what follows, the
threshold T(i, j) is indicated as a function of the coordinates (i, j) at each pixel,
or if this is not possible, the object/background decisions are indicated by the
logical variable B(i, j). Niblack’s method [14] calculates a pixel-wise threshold
by sliding a rectangular window over the gray-level image. This method
adapts the threshold according to the local mean $m(i,j)$ and standard deviation
$\sigma(i,j)$ calculated over a window of size $b \times b$. The threshold is given by:

$$T(i,j) = m(i,j) + k \cdot \sigma(i,j)$$
Here, k is a constant, which determines how much of the total print object edge
is retained, and has a value between 0 and 1. The value of k and the size of the
sliding window define the quality of binarization. Binarization gives thick and
unclear strokes with a small k value, and slim and broken strokes with a large k
value. For many applications, a sliding window of size 25 × 25 and a k value of
0.6 have been found to be heuristically optimal. The size of the neigh-
borhood should be small enough to reflect the local illumination level and large
enough to include both objects and the background.
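A compact sketch of Niblack's rule follows, computing the local mean and standard deviation with box filters from scipy.ndimage; the default window size, the value of k, and the white-foreground output convention are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray, window=25, k=0.6):
    """Binarize with Niblack's rule T(i, j) = m(i, j) + k * sigma(i, j)."""
    img = gray.astype(np.float64)
    m = uniform_filter(img, size=window)            # local mean m(i, j)
    m2 = uniform_filter(img * img, size=window)     # local mean of squares
    sigma = np.sqrt(np.maximum(m2 - m * m, 0.0))    # local std sigma(i, j)
    return np.where(img > m + k * sigma, 255, 0).astype(np.uint8)
```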
The method proposed by Sauvola et al. [13] is a local-variance-based method. It
is an improvement on the method proposed by Niblack [14], especially for
documents whose background contains light texture, large variations, stains, or bad
and uneven illumination. It adapts the contribution of the standard deviation. For
example, in the case of text on a dirty or stained paper, the threshold is lowered.
The threshold is calculated as follows:

$$T(i,j) = m(i,j)\left[1 + k\left(\frac{\sigma(i,j)}{R} - 1\right)\right]$$
The typical values of k = 0.5 and R = 128 are suggested. Here, m and σ are again
the mean and standard deviation of the whole window, and k is a fixed value. It
was found that the value of R has a very small effect on the quality while the val-
ues of k and window size affect it significantly. The smaller the value of k, the
thicker is the binarized stroke, and the more overlap exists between characters. A
smaller window size will produce thinner strokes. An optimal combination of k
and the sliding window will produce a good binary image.
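Relative to the Niblack sketch above, only the threshold formula changes; a hedged sketch with the suggested k = 0.5 and R = 128 might look like this (names again illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, window=25, k=0.5, R=128.0):
    """Binarize with Sauvola's rule T = m * (1 + k * (sigma / R - 1))."""
    img = gray.astype(np.float64)
    m = uniform_filter(img, size=window)            # local mean m(i, j)
    m2 = uniform_filter(img * img, size=window)     # local mean of squares
    sigma = np.sqrt(np.maximum(m2 - m * m, 0.0))    # local std sigma(i, j)
    T = m * (1.0 + k * (sigma / R - 1.0))
    return np.where(img > T, 255, 0).astype(np.uint8)
```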
The local adaptive method proposed by Bernsen [9] is based on the contrast of an
image. The threshold is set at the midrange value, which is the mean of the minimum
$I_{low}(i,j)$ and maximum $I_{high}(i,j)$ gray values in a local window of suggested
size $w = 31$. However, if the contrast $C(i,j) = I_{high}(i,j) - I_{low}(i,j)$ is below a certain
contrast threshold $k$, the pixels within the window may be set to background
or to foreground according to the class that most suitably describes the window.
This algorithm is dependent on the value of $k$ and also on the size of the window:

$$T(i,j) = \frac{1}{2}\left(\max_{(m,n)\in w} I(i+m,\, j+n) + \min_{(m,n)\in w} I(i+m,\, j+n)\right), \quad w = 31,$$

provided the contrast $C(i,j) = I_{high}(i,j) - I_{low}(i,j) \geq 15$.
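The midrange rule translates directly into min/max filters; the sketch below assigns low-contrast windows wholly to background, one of the two choices the method allows (an assumption, as is the per-pixel rather than per-window decision):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_binarize(gray, window=31, contrast_k=15):
    """Bernsen: midrange threshold where local contrast exceeds contrast_k."""
    img = gray.astype(np.float64)
    i_high = maximum_filter(img, size=window)       # local maximum I_high
    i_low = minimum_filter(img, size=window)        # local minimum I_low
    T = 0.5 * (i_high + i_low)                      # midrange threshold
    out = np.where(img > T, 255, 0).astype(np.uint8)
    # Low-contrast neighborhoods: assign to background (assumed convention).
    out[(i_high - i_low) < contrast_k] = 255
    return out
```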
The method proposed by Kapur et al. [12] is an entropy-based method which
considers the image foreground and background as two different signal sources,
so that when the sum of the two class entropies reaches its maximum, the image is
said to be under optimal thresholding.
In this method, two probability distributions (e.g., object distribution and back-
ground distribution) are derived from the original gray-level distribution of the
image as follows:
$$\frac{p_0}{P_t},\, \frac{p_1}{P_t},\, \ldots,\, \frac{p_t}{P_t} \quad \text{and} \quad \frac{p_{t+1}}{1-P_t},\, \frac{p_{t+2}}{1-P_t},\, \ldots,\, \frac{p_l}{1-P_t}$$

where $t$ is the threshold value and $P_t = \sum_{i=0}^{t} p_i$. The class entropies are

$$H_b(t) = -\sum_{i=0}^{t} \frac{p_i}{P_t}\,\log_e\frac{p_i}{P_t} \quad \text{and} \quad H_w(t) = -\sum_{i=t+1}^{l} \frac{p_i}{1-P_t}\,\log_e\frac{p_i}{1-P_t}$$

The optimal threshold $t^*$ is defined as the gray level that maximizes
$H_b(t) + H_w(t)$, i.e., $t^* = \arg\max_t\{H_b(t) + H_w(t)\}$ for all $t$ belonging to the set of
all gray values in the image.
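The maximization can be carried out by a direct sweep over the histogram; a sketch with illustrative names follows:

```python
import numpy as np

def kapur_threshold(gray):
    """Kapur's entropy criterion: maximize H_b(t) + H_w(t)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(255):
        P_t = p[:t + 1].sum()
        if P_t <= 0.0 or P_t >= 1.0:
            continue
        pb = p[:t + 1] / P_t            # object (below-threshold) distribution
        pw = p[t + 1:] / (1.0 - P_t)    # background distribution
        hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
        hw = -np.sum(pw[pw > 0] * np.log(pw[pw > 0]))
        if hb + hw > best_h:
            best_h, best_t = hb + hw, t
    return best_t
```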
Thresholding can be considered as a classification problem. If the gray-level
distributions of the foreground object and background pixels are known or can be
estimated, then the optimal, minimum-error threshold can be obtained using statistical
decision theory. This, however, involves a large amount of computation. Therefore, it is realistic to
assume that the respective populations are distributed normally with distinct means
and standard deviations. Under this assumption, the parameters of the population
can be inferred from the gray-level histogram by fitting. Afterward, the correspond-
ing optimal threshold can be determined. A computationally efficient solution to
the problem of minimum error thresholding has been derived by Kittler et al. [15]
under the assumption of foreground object and background pixel gray-level values
being normally distributed. The principal idea behind the method is to optimize the
average pixel classification error rate directly, using either an exhaustive search or
an iterative algorithm. The method is applicable in multi-threshold selection.
$$p(g) = \sum_{i=1}^{2} P_i\, p(g|i), \quad \text{where} \quad p(g|i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\,\exp\left(-\frac{(g-\mu_i)^2}{2\sigma_i^2}\right)$$

The threshold value can be selected by solving the quadratic equation

$$\frac{(g-\mu_1)^2}{\sigma_1^2} + \log_e\sigma_1^2 - 2\log_e P_1 = \frac{(g-\mu_2)^2}{\sigma_2^2} + \log_e\sigma_2^2 - 2\log_e P_2$$
However, the parameters $\mu_i$, $\sigma_i^2$, and $P_i$ ($i = 1, 2$) of the mixture density $p(g)$
associated with an image for thresholding are not usually known. In order to over-
come the difficulty of estimating these unknown parameters, Kittler et al. intro-
duced a criterion function J(t) given by
$$J(t) = 1 + 2\left[P_1(t)\log_e\sigma_1(t) + P_2(t)\log_e\sigma_2(t)\right] - 2\left[P_1(t)\log_e P_1(t) + P_2(t)\log_e P_2(t)\right]$$
where

$$P_1(t) = \sum_{g=0}^{t} h(g), \qquad P_2(t) = \sum_{g=t+1}^{l} h(g),$$

$$\mu_1(t) = \frac{\sum_{g=0}^{t} h(g)\,g}{P_1(t)}, \qquad \mu_2(t) = \frac{\sum_{g=t+1}^{l} h(g)\,g}{P_2(t)},$$

$$\sigma_1^2(t) = \frac{\sum_{g=0}^{t} \left(g-\mu_1(t)\right)^2 h(g)}{P_1(t)}, \qquad \sigma_2^2(t) = \frac{\sum_{g=t+1}^{l} \left(g-\mu_2(t)\right)^2 h(g)}{P_2(t)}.$$
The optimal threshold is obtained by minimizing $J(t)$, i.e., by finding
$t^* = \arg\min_t\{J(t)\}$ for all gray levels $t$ belonging to the image.
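Evaluating J(t) for every candidate t gives a simple exhaustive version of the method; degenerate classes (empty or zero-variance) are skipped in this illustrative sketch:

```python
import numpy as np

def kittler_threshold(gray):
    """Minimum error thresholding: minimize the criterion J(t)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    h = hist / hist.sum()                       # normalized histogram h(g)
    g = np.arange(256, dtype=np.float64)
    best_t, best_j = 0, np.inf
    for t in range(255):
        P1, P2 = h[:t + 1].sum(), h[t + 1:].sum()
        if P1 <= 0.0 or P2 <= 0.0:
            continue
        mu1 = (h[:t + 1] * g[:t + 1]).sum() / P1
        mu2 = (h[t + 1:] * g[t + 1:]).sum() / P2
        s1 = np.sqrt((h[:t + 1] * (g[:t + 1] - mu1) ** 2).sum() / P1)
        s2 = np.sqrt((h[t + 1:] * (g[t + 1:] - mu2) ** 2).sum() / P2)
        if s1 <= 0.0 or s2 <= 0.0:              # skip degenerate classes
            continue
        J = 1 + 2 * (P1 * np.log(s1) + P2 * np.log(s2)) \
              - 2 * (P1 * np.log(P1) + P2 * np.log(P2))
        if J < best_j:
            best_j, best_t = J, t
    return best_t
```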
The method proposed by Johannsen et al. [11] uses the entropy of the gray-
level histogram of the digital image as a measure of information. Essentially, it
divides the set of gray levels into two parts so as to minimize the interdepend-
ence between them. This method chooses the threshold value t* from the relation,
$t^* = \arg\min_t\{S(t) + S'(t)\}$ for all possible gray levels $t$ in the image. Here,
$$S(t) = \log_e\left(\sum_{i=0}^{t} p_i\right) - \frac{1}{\sum_{i=0}^{t} p_i}\left[p_t\log_e(p_t) + \left(\sum_{i=0}^{t-1} p_i\right)\log_e\left(\sum_{i=0}^{t-1} p_i\right)\right]$$

$$S'(t) = \log_e\left(\sum_{i=t}^{l-1} p_i\right) - \frac{1}{\sum_{i=t}^{l-1} p_i}\left[p_t\log_e(p_t) + \left(\sum_{i=t+1}^{l-1} p_i\right)\log_e\left(\sum_{i=t+1}^{l-1} p_i\right)\right]$$
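The minimization again reduces to a sweep over interior gray levels; a minimal sketch under the same illustrative conventions (and following the reconstruction of S and S' above) is:

```python
import numpy as np

def johannsen_threshold(gray):
    """Johannsen-Bille criterion: minimize S(t) + S'(t)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()

    def xlogx(x):                           # x * log(x), with 0 log 0 = 0
        return x * np.log(x) if x > 0 else 0.0

    best_t, best_s = 0, np.inf
    for t in range(1, 255):
        if p[t] <= 0:
            continue
        P_low, P_high = p[:t + 1].sum(), p[t:].sum()
        if P_low <= 0 or P_high <= 0:
            continue
        S = np.log(P_low) - (xlogx(p[t]) + xlogx(p[:t].sum())) / P_low
        Sp = np.log(P_high) - (xlogx(p[t]) + xlogx(p[t + 1:].sum())) / P_high
        if S + Sp < best_s:
            best_s, best_t = S + Sp, t
    return best_t
```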
A technique for determining a threshold for the binarization of an image is presented
in [16]. The method follows an iterative process and assumes that the image contains
an object and background occupying different average gray levels. The iterative
method provides a simple automatic selection of the optimum threshold. Assuming that
an object is located within a square region of the image, and without any prior knowledge
of the exact location of the object, it is taken as a first approximation that the four
corners of the scene contain only background pixels and that the remainder contains the
object. Thresholding is performed to produce a patch image. This patch may then be
used as a switching function f(s) to route a digitized image into one of two integrators.
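A minimal sketch of the iterative selection loop follows; for simplicity it initializes from the global mean rather than the four-corner background estimate of [16] (an assumption), and re-estimates the threshold as the midpoint of the two class means until convergence:

```python
import numpy as np

def iterative_threshold(gray, eps=0.5):
    """Iterative selection: T converges to the midpoint of the class means."""
    img = gray.astype(np.float64)
    T = img.mean()                      # simplified initialization (assumed)
    while True:
        fg, bg = img[img > T], img[img <= T]
        if fg.size == 0 or bg.size == 0:
            return T                    # degenerate split; stop
        T_new = 0.5 * (fg.mean() + bg.mean())
        if abs(T_new - T) < eps:
            return T_new
        T = T_new
```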
In [17], a multi-scale binarization framework is introduced, which can be used
along with any adaptive threshold-based binarization method. This framework
is able to improve the binarization results and to restore weak connections and
strokes, especially in the case of degraded historical documents. The framework
requires running several binarization methods on different scales, which is addressed
by the introduction of fast grid-based models. This makes it possible to explore high
scales that are usually unreachable to traditional approaches. In order to expand the set of
adaptive methods, an adaptive modification of Otsu’s method, called AdOtsu, is
introduced. In addition, in order to restore document images suffering from bleed-
through degradation, the authors combine the framework with recursive adaptive
methods. The framework shows promising performance in subjective and objec-
tive evaluations performed on available datasets.
An automatic histogram threshold approach based on a fuzziness measure is
presented in [18]. Using the concepts of fuzzy logic, the problems involved in
finding the minimum of a criterion function are avoided. Similarity between gray
levels is the key to find an optimal threshold. Two initial regions of gray levels
are defined at the boundaries of the histogram. Then, using an index of fuzziness,
a similarity process is started to find the threshold point. A significant contrast
between objects and background is assumed, and histogram equalization is used
for images having a small contrast difference.
Paper [19] presents an adaptive algorithm for efficient document image binarization
with low computational complexity and high performance. This is particularly
suitable for use in portable devices such as PDAs and mobile phones, which
are marked by their limited memory space and low computational capability.
The method divides the document image into several blocks by integrating the
concepts of global and local methods. A threshold surface is then constructed
based on the diversity and the intensity of each region to derive the binary image.
Experimental results show the effectiveness of the proposed method.
A binarization method based on edge information for video text images is
presented in [20]. It attempts to handle images with a complex background and low
contrast. The contour of the text is detected first; a local thresholding method is
then used to look for the inner side of the contour; subsequently, the contours of the
characters are filled up to form characters that are recognizable to OCR software.
A new document image binarization technique is presented in [21], as an
improved version of the adaptive logical-level technique (ALLT). The original
ALLT makes use of fixed windows for extracting essential features (e.g., the char-
acter stroke width). However, there are possibilities of characters with several dif-
ferent stroke widths within a region. This may lead to erroneous results. In [21],
local adaptive binarization is used as a guide to adaptive stroke width detection.
The skeleton and the contour points of the binarization output are combined to
identify the stroke width locally. In addition, an adaptive local parameter is defined
that enhances the characters and improves the overall performance achieving more
accurate binarization results for both handwritten and printed documents with a
particular focus on degraded historical documents.
In [22], the authors propose a new technique for the validation of document
binarization algorithms. The authors claim that the proposed method is simple in its
implementation and can be applied to any binarization algorithm, since it does
not require anything more than the binarization stage. As a demonstration, the
authors use the case of degraded historical documents. The proposed
technique is evaluated with 30 binarization algorithms for performance comparison.
Images with two dominant intensity levels can be thresholded manually with
ease. For automatic image thresholding, however, most of the effective techniques are
either too complex or too demanding of computer resources.
The balanced histogram thresholding method [23] is a very simple method used
for automatic image thresholding. Like Otsu’s method [8], this is a histogram-based
thresholding method. Assuming that the image is divided into two main classes: the
background and the foreground, this method tries to find the optimum threshold
level that divides the histogram into two classes. The method weighs the histogram,
checks which of the two sides is heavier, and removes weight from the heavier
side until it becomes the lighter one, repeating the same operation until the two
edges of the weighing scale meet. The method may have problems when dealing with very
noisy images, because the weighing scale may be misplaced. The problem can be
minimized by ignoring the extremities of the histogram.
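A sketch of this weighing procedure follows; the `clip` parameter trims the histogram extremities, following the noise remark above (the names and the exact trimming rule are illustrative, not from [23]):

```python
import numpy as np

def balanced_histogram_threshold(gray, clip=5):
    """Balanced histogram thresholding: shrink the heavier side until
    the two edges of the 'weighing scale' meet at the threshold."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    nz = np.flatnonzero(hist)
    lo, hi = max(nz[0], clip), min(nz[-1], 255 - clip)  # trimmed extremities
    mid = (lo + hi) // 2
    w_left = hist[lo:mid + 1].sum()                     # weight of left pan
    w_right = hist[mid + 1:hi + 1].sum()                # weight of right pan
    while lo < hi:
        if w_left > w_right:                            # left pan is heavier
            w_left -= hist[lo]
            lo += 1
        else:                                           # right pan is heavier
            w_right -= hist[hi]
            hi -= 1
        new_mid = (lo + hi) // 2
        if new_mid > mid:                               # fulcrum moved right
            w_left += hist[mid + 1]
            w_right -= hist[mid + 1]
        elif new_mid < mid:                             # fulcrum moved left
            w_left -= hist[mid]
            w_right += hist[mid]
        mid = new_mid
    return mid
```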
Evaluation of document image binarization techniques is a tedious task that is
mainly performed by human experts or by involving an OCR engine. Paper [24]
presents a methodology for objective evaluation of document image binarization
algorithms. The methodology aims at reducing the human interference in the con-
struction of the ground truth and testing. A skeletonized ground truth image is cre-
ated by the user following a semiautomatic procedure. The estimated ground truth
image can aid in evaluating the binarization result in terms of recall and precision,
as well as in further analyzing the result by calculating broken and missing text,
deformations, and false alarms.
Paper [25] presents a real-time adaptive thresholding technique using the integral
image of the input. The proposed technique is robust to illumination changes in the
image and can process live video streams at a real-time frame rate, which makes it
suitable for interactive applications.
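A straightforward (unoptimized) sketch of the idea follows: an integral image makes every window sum an O(1) lookup, and a pixel is set to black when it is a fixed percentage darker than its local mean. The window size and percentage are illustrative defaults, not values from [25]:

```python
import numpy as np

def bradley_binarize(gray, window=15, t_pct=15):
    """Adaptive thresholding via an integral image: a pixel becomes black
    when it is t_pct percent darker than the mean of its local window."""
    img = gray.astype(np.float64)
    h, w = img.shape
    # Integral image padded with a leading zero row/column so that any
    # rectangular sum is four lookups.
    integ = np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1), ((1, 0), (1, 0)))
    half = window // 2
    out = np.full((h, w), 255, dtype=np.uint8)
    for i in range(h):
        y0, y1 = max(i - half, 0), min(i + half + 1, h)
        for j in range(w):
            x0, x1 = max(j - half, 0), min(j + half + 1, w)
            count = (y1 - y0) * (x1 - x0)
            s = integ[y1, x1] - integ[y0, x1] - integ[y1, x0] + integ[y0, x0]
            if img[i, j] * count <= s * (100 - t_pct) / 100.0:
                out[i, j] = 0                   # darker than local mean
    return out
```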
2.2 Recent Works
In Sect. 2.1, we have discussed the broad area of our research by citing some of
the most significant works that have shaped the evolution in the relevant areas. In
this section, the state-of-the-art for image binarization methods is discussed for all
the areas considered in this work.
Binarization is an essential step for document image analysis. In general, differ-
ent available binarization techniques are implemented for different types of binari-
zation problems.
In [26], a learning framework for the optimization of binarization methods
is introduced, which is designed to determine the optimal parameter values for a
document image. The framework works with any binarization method and performs
three main steps: it extracts features, estimates optimal parameters, and learns the
relationship between features and optimal parameters. An approach is proposed to
generate numerical feature vectors from 2D data. The statistics of various maps
are extracted and then combined into a final feature vector, in a nonlinear way.
The optimal behavior is learned using support vector regression (SVR). The
experiments are done using grid-based Sauvola’s method and Lu’s method on the
DIBCO2009 and DIBCO2010 datasets.
A pixel-based binarization evaluation methodology for historical handwritten/
machine-printed document images is presented in [3]. In the evaluation scheme
in [3], the recall and precision evaluation measures are properly modified using a
weighting scheme that diminishes any potential evaluation bias. Additional perfor-
mance metrics of the proposed evaluation scheme consist of the percentage rates
of broken and missed text, false alarms, background noise, character enlargement,
and merging. The validity of the method is justified by several experiments con-
ducted in comparison with other pixel-based evaluation measures.
An image binarization technique is proposed in [27] for degraded document
images that takes into consideration the adaptive image contrast. The adaptive
image contrast is a combination of the local image contrast and the local image
gradient that is tolerant to text and background variation caused by different types
of document degradations. An adaptive contrast map is first constructed for an
input-degraded document image. The contrast map is then binarized and combined
with Canny’s edge map to identify the text stroke edge pixels. The document text
is further segmented by a local threshold that is estimated based on the intensities
of detected text stroke edge pixels within a local window. It has been tested on
three public datasets achieving accuracies of around 90 %.
There are many challenges addressed in handwritten document image binari-
zation, such as faint characters, bleed-through, and large background ink stains.
Usually, binarization methods cannot deal with all the degradation types effec-
tively. Motivated by the low detection rate of faint characters in binarization of
handwritten document images, a combination of a global and a local adaptive
binarization method at connected component level is proposed in [4], aiming at
an improved overall performance. Initially, background estimation is applied along
with image normalization based on background compensation. Afterward, global
binarization is performed on the normalized image. In the binarized image, very
small components are discarded and representative characteristics of a document
image such as the stroke width and the contrast are computed. Furthermore, local
adaptive binarization is performed on the normalized image taking into account
the aforementioned characteristics. Finally, the two binarization outputs are com-
bined at connected component level. Authors report good performance after exten-
sive testing on the DIBCO series datasets which include a variety of degraded
handwritten document images.
An adaptive binarization method inspired by Otsu’s method is introduced in
[1]. The method, called AdOtsu, uses the estimated background (EB) as a priori
information to differentiate between text and non-text regions. The estimated
background values are calculated in a boot-strap process implicitly incorporating
the proposed binarization method. Also, a priori structural information, including
the average stroke width and the average text height, is used to adapt the method
on the input document image and to make it parameter-less. The method is gen-
eralized to a multi-scale binarization, which enables it to separate interfering pat-
terns from the true text using higher scales. Postprocessing corrections, both
topological and clustering, are considered to improve the final output.
Paper [5] proposes another algorithm for the binarization of degraded document
images. The image is mapped into a 2D feature space in which the text and back-
ground pixels are separable, and then this feature space is partitioned into small
regions. These regions are labeled as text or background using the result of a basic
binarization algorithm applied on the original image. Finally, each pixel of the
image is classified as either text or background based on the label of its corre-
sponding region in the feature space.
An adaptive binarization method for historical manuscripts and degraded docu-
ment images is reported in [6]. The method is based on maximum likelihood (ML)
classification using a priori information and the spatial relationship on the image
domain. The method makes the thresholding decision based on a probabilistic
model. It recovers the main text in the document image, including low-intensity
and weak strokes from an initialization map (under-binarization) containing only
the darkest part of the text. Fast and robust local estimation of text and background
features is obtained using grid-based modeling and in-painting techniques; after-
ward, the ML classification is performed to classify pixels into two classes (black
and white). This method preserves weak connections and provides smooth and
continuous strokes due to its correlation-based nature. Performance is evaluated
both subjectively and objectively against standard databases. The method produces
competitive results with state-of-the-art methods presented in the DIBCO2009
binarization contest.
The majority of binarization techniques are complex and are composed of
filters and existing operations. However, the few simple thresholding methods
available cannot be applied to many binarization problems. In [7], a local
binarization method is presented based on a simple, novel thresholding method
with dynamic and flexible windows. The method is tested on selected samples of
DIBCO 2009 benchmark dataset.
An adaptive water flow model for the binarization of degraded document
images is presented in [28]. In this approach, the image surface is regarded as a
three-dimensional terrain and water is poured on it. The water finds the valleys
and fills them. The algorithm controls the rainfall process, pouring the water, in
such a way that the water fills up to half of the valley depth. After stopping the
rainfall, each wet region represents one character or a noisy component. To seg-
ment each character, the wet regions are labeled and regarded as blobs. Some of
the blobs represent noisy components. A multilayer perceptron is trained to label
each blob as either text or non-text. The algorithm is shown to preserve stroke
connectivity. Experimental verification shows superior performance against six
well-known algorithms on three sets of degraded document images with uneven
illumination.
It is evident from the discussion in this chapter that there is a need for a binarization
algorithm that works well for both document and graphic images.
In addition to this, a methodology is needed for generating the reference image for
quantitative evaluation among different image binarization methods. In Chap. 3 of
this text, we have documented works that address these issues.
References
1. Moghaddam, R.F., Cheriet, M.: AdOtsu: an adaptive and parameter less generalization of
Otsu’s method for document image binarization. Pattern Recogn. 45(6), 2419–2431 (2012)
2. Gatos, B., Pratikakis, I., Perantonis, S.J.: Adaptive degraded document image binarization.
Pattern Recogn. 39(3), 317–327 (2006)
3. Ntirogiannis, K., Gatos, B., Pratikakis, I.: Performance evaluation methodology for historical
document image binarization. IEEE Trans. Image Process. 22(2), 595–609 (2013)
4. Ntirogiannis, K., Gatos, B., Pratikakis, I.: A combined approach for the binarization of
handwritten document images. Pattern Recogn. Lett. 35, 3–15 (2014). (ISSN 0167-8655,
http://dx.doi.org/10.1016/j.patrec.2012.09.026)
5. Valizadeh, M., Kabir, E.: Binarization of degraded document image based on feature space
partitioning and classification. Int. J. Doc. Anal. Recogn. (IJDAR) 15(1), 57–69 (2012)
6. Hedjam, R., Moghaddam, R.F., Cheriet, M.: A spatially adaptive statistical method for the
binarization of historical manuscripts and degraded document images. Pattern Recogn. 44(9),
2184–2196 (2011)
7. Bataineh, B., Abdullah, S.N.H.S., Omar, K.: An adaptive local binarization method for docu-
ment images based on a novel thresholding method and dynamic windows. Pattern Recogn.
Lett. 32(14), 1805–1813 (2011)
8. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man
Cybern. 9(1), 62–66 (1979)
9. Bernsen, J.: Dynamic thresholding of gray level images. In: Proceedings of International
Conference on Pattern Recognition (ICPR), pp. 1251–1255 (1986)
10. Gatos, B., Ntirogiannis, K., Perantonis S.J.: Improved document image binarization by
using a combination of multiple binarization techniques and adapted edge information. In:
Proceedings of 19th International Conference on Pattern Recognition (ICPR), pp. 1–4 (2008)
11. Johannsen, G., Bille, J.: A threshold selection method using information measures. In: 6th
International Conference on Pattern Recognition, pp. 140–143 (1982)
12. Kapur, J.N., Sahoo, P.K., Wong, A.K.C.: A new method for gray-level picture thresholding using
the entropy of the histogram. J. Comput. Vis. Graph. Image Process. 29(3), 273–285 (1985)
13. Sauvola, J., Pietikainen, M.: Adaptive document image binarization. Pattern Recogn. 33(2),
225–236 (2000)
14. Niblack, W.: An introduction to digital image processing, pp. 115–116. Prentice Hall,
Englewood Cliffs (1986)
15. Kittler, J., Illingworth, J.: Minimum error thresholding. Pattern Recogn. 19(1), 41–47 (1986)
16. Ridler, T., Calvard, S.: Picture thresholding using an iterative selection method. IEEE Trans.
Syst. Man Cyber. 8(8), 630–632 (1978)
17. Moghaddam, R.F., Cheriet, M.: A multi-scale framework for adaptive binarization of
degraded document images. Pattern Recogn. 43(6), 2186–2198 (2010)
18. Lopes, N.V., Mogadouro do Couto, P.A., Bustince, H., Melo-Pinto, P.: Automatic histogram
threshold using fuzzy measures. IEEE Trans. Image Process. 19(1), 199–204 (2010)
19. Pai, Y.T., Chang, Y.F., Ruan, S.J.: Adaptive thresholding algorithm: efficient computa-
tion technique based on intelligent block detection for degraded document images. Pattern
Recogn. 43(9), 3177–3187 (2010)
20. Zhou, Z., Li, L., Tan, C.L.: Edge based binarization for video text images. In: Proceedings of
20th International Conference on Pattern Recognition (ICPR), pp. 133–136 (2010)
21. Ntirogiannis, K., Gatos, B., Pratikakis, I.: A modified adaptive logical level binarization tech-
nique for historical document images. In: Proceedings of 10th International Conference on
Document Analysis and Recognition, pp. 1171–1175 (2009)
22. Stathis, P., Kavallieratou, E., Papamarkos, N.: An evaluation technique for binarization algo-
rithms. J. Univers. Comput. Sci. 14(18), 3011–3030 (2008)
23. Anjos, A., Shahbazkia, H.: Bi-level image thresholding—a fast method. Biosignals 2, 70–76
(2008)
24. Ntirogiannis, K., Gatos, B., Pratikakis, I.: An objective evaluation methodology for docu-
ment image binarization techniques. In: 8th IAPR Workshop on Document Analysis Systems
(2008)
25. Bradley, D., Roth, G.: Adaptive thresholding using the integral image. J. Graph. Tools 12(2),
13–21 (2007)
26. Cheriet, M., Moghaddam, R.F., Hedjam, R.: A learning framework for the optimization and
automation of document binarization methods. Comput. Vis. Image Underst. (CVIU) 117(3),
269–280 (2013)
27. Su, B., Lu, S., Tan, C.L.: Robust document image binarization technique for degraded docu-
ment images. IEEE Trans. Image Process. 22(4), 1408–1417 (2013)
28. Valizadeh, M., Kabir, E.: An adaptive water flow model for binarization of degraded document
images. Int. J. Doc. Anal. Recogn. (IJDAR) 16(2), 165–176 (2013)