
Abnormality Detection in Automated Mass Screening System of

Diabetic Retinopathy
Gang Luo, Opas Chutatape, Huiqi Li, Shankar M. Krishnan
School of EEE, Nanyang Technological University, Singapore 639798

Abstract

An approach for abnormality detection in color fundus images for an automated mass
screening system is proposed in this paper, based on the object-based color difference
image. Four color models, i.e. RGB, Luv, Lab and HVC, are evaluated based on hand-labeled
feature maps, and Luv and Lab are selected for computing color difference because
of their good performance in object classification. The object-based color difference images of
bright objects, e.g. exudates and drusen, and dark objects, e.g. hemorrhages and blood vessels,
are obtained respectively according to the 2-D histogram distribution on the L-u plane, and then
the watershed transform is performed on the color difference image to extract object candidates.
A pre-thresholding and a post-verification procedure are performed to deal with the over-
segmentation problem of the watershed transform.
Keywords- fundus image, diabetic retinopathy, abnormality detection, color difference,
watershed.

1. Introduction
Diabetic retinopathy is a common cause of visual loss in the world. According to the report
of the National Institute of Diabetes & Digestive & Kidney Diseases, there are 15.7 million
people with diabetes in the USA, 5.9 percent of the population, and diabetic retinopathy
causes from 12,000 to 24,000 new cases of blindness each year. To prevent the progression of
the retinopathy or blindness, some grading systems and classifications of diabetic retinopathy
have been proposed to assess the severity [1][2].
Digital image processing techniques can help to extract the location and size/level of
abnormalities, give an objective grade and compare the changes in objects in sequential
images. Although it is far from the real capability of an ophthalmologist, it could still be
possible to develop a system to deal with the expensive and time-consuming manual process.
This system can make an initial diagnosis based on the retinopathy grading criteria by
comparing images, measuring key features, annotating image contents, and then selecting the
undiagnosed people at high risk [3].
Abnormality detection is the first step in an automated screening system before making a
diagnosis. Based on the grading criteria proposed by the ETDRS (Early Treatment Diabetic
Retinopathy Study) group of the Fundus Photograph Reading Center, University of Wisconsin,
the abnormalities can be divided into three classes as follows from the perspective of image
processing.
Abnormal spot class: microaneurysms, hemorrhages, drusen, hard exudates, soft
exudates, vitreous hemorrhage, scars of prior photocoagulation, etc.
Abnormal blood vessel class: intraretinal microvascular abnormalities, venous
abnormalities, arteriolar abnormalities, arteriovenous nicking, new vessels, dilated tips of
new vessels elsewhere, papillary swelling, etc.
Abnormal stereo measurement class: plane of proliferation elsewhere, retinal elevation,
retinal thickening, etc.

The abnormalities of the first class can be detected based on their color and shape. The second
class can be extracted by blood vessel detection techniques, which have been studied by
many researchers for years in cardiology as well as ophthalmology, and some successful
techniques of vessel detection have been developed [4][5][6]. For the third class, stereo
reconstruction techniques are needed to do the measurement, which can give an accurate and
objective evaluation. This paper focuses on the detection of the first abnormality class.

2. Color model selection


In fundus images, the obvious normal physiological structures include the optic disc, blood
vessels and the macula. Along with the abnormal objects of the first class, the visible objects can be
divided into white or yellowish objects and dark or reddish objects. The image background,
whose color is intermediate, can be considered as a third object. In this paper, the detection of
these objects is based on their color rather than just intensity or any single color component.
The exact color of an object can be represented with several color models, but their
suitability for image processing must be evaluated in order to find an optimal one for
recognizing the objects in fundus images. In this paper, the RGB, Lab, Luv and HVC color
models are assessed as described below.
Firstly, the white objects (called F1 here) and dark objects (F2) of the sample
fundus images are hand-labeled respectively. Secondly, each pixel of the images is tagged
as one of the three objects, white, dark or background (F3), according to the
hand-labeled map. Finally, the maximum sensitivity for every color component is obtained
by scanning along the color coordinate and finding the optimal threshold.
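As a rough illustration of this evaluation step, the threshold scan for one color component could look like the following minimal numpy sketch. The function name and the balanced-sensitivity criterion are assumptions, since the paper does not give the exact formula.

```python
import numpy as np

def max_sensitivity(component, labels, target, n_steps=256):
    """Scan thresholds along one color coordinate and return the best
    balanced sensitivity for separating `target` pixels from the rest.
    (Illustrative sketch; the exact criterion is not given in the paper.)"""
    pos = component[labels == target].astype(float)
    neg = component[labels != target].astype(float)
    best = 0.0
    for t in np.linspace(component.min(), component.max(), n_steps):
        # try both polarities: object above or object below the threshold
        hi = 0.5 * ((pos >= t).mean() + (neg < t).mean())
        lo = 0.5 * ((pos < t).mean() + (neg >= t).mean())
        best = max(best, hi, lo)
    return best
```

A component that perfectly separates a labeled class from all other pixels would score 1.0 under this criterion.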
Because the fundus is not a plane but a spherical surface, the illumination is uneven. The
brightness of objects and background in the central region is usually higher than that in the
surrounding region, and the color of the background in the central region may be brighter than the
color of exudates in the surrounding region. In order to deal with the uneven illumination, the
fundus images are divided into small blocks and analyzed blockwise in both the color
model assessment and the abnormality detection. The block size is 64 by 64 in this paper.
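The blockwise traversal can be sketched as follows (a minimal helper, assuming the image dimensions are multiples of the block size; the function name is hypothetical):

```python
import numpy as np

def iter_blocks(img, size=64):
    """Yield (row, col, block) views over non-overlapping size-by-size
    blocks, so uneven illumination is handled block by block."""
    H, W = img.shape[:2]
    for r in range(0, H - size + 1, size):
        for c in range(0, W - size + 1, size):
            yield r, c, img[r:r + size, c:c + size]
```

Each detection step below (histogram analysis, reference-color estimation, watershed) would then be applied per block.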

Fig. 1. Sensitivity of different color components for object classification.


In total 180 blocks were analyzed. The assessment results are shown in Fig. 1. It can be
seen that Lab-a and Luv-u have high sensitivity and equal performance for F1, F2 and F3.
RGB-g also has good performance, but the distribution of object colors in RGB space does not
follow a clear and fixed pattern, unlike in Luv or Lab space as described in the next section.
Therefore, Luv and Lab are selected as the color spaces for image processing. They are
perceptually uniform color models recommended by the CIE (Commission Internationale de
l'Eclairage), in which the measurement of color difference is close to that of human
perception. This helps enable the diagnosis software to have a color perception similar
to that of ophthalmologists.
3. Abnormality detection
3.1. Object based color difference image
Fig. 2 shows the two contour maps of the 3-D histogram of one block of a fundus image in
Luv color space, plotted on the L-u and L-v planes. After a number of careful observations, it
is found that the histogram distributions of all blocks follow three common patterns as follows.

Fig. 2. Contours of the 3-D histogram distribution of one block of a color fundus image in Luv space.

The high region corresponds to the background, except in the special case mentioned in
the next paragraph.
In the L-u figure, the northwest quadrant relative to the high region corresponds to the bright
objects such as the optic disc, exudates and drusen.
In the L-u figure, the southeast quadrant relative to the high region corresponds to the dark
objects such as hemorrhages and blood vessels.
It is difficult to discriminate objects along the v color coordinate. Therefore, the v color
component is not used in the object detection.
The color coordinate of the background differs from block to block because of the uneven
illumination. For most image blocks the background can be located correctly by finding the
high region in the L-u histogram, but for some blocks in which bright objects (usually the
optic disc) occupy the major area, the criterion of finding the high region will fail. Therefore, the
lightness of the background obtained by this method should be checked by comparing it with the
surrounding background. Fig. 3 shows the blockwise lightness array of the high region of a fundus
image. Obviously, the value of one block on the left side is quite different from the others,
which is due to the optic disc. In this case, the lightness of the high region is not that of the
background, so the background of this block is replaced with that of the block beside it.
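One way to realize this check is an outlier test on the blockwise lightness array. The sketch below uses a simple z-score criterion, which is an assumption: the paper does not specify how an anomalous block is detected, only that its background value is replaced with a neighbor's.

```python
import numpy as np

def fix_background_lightness(bg_L, z=2.5):
    """Replace blockwise background-lightness outliers (e.g. blocks
    dominated by the optic disc) with a neighboring block's value.
    The z-score test is an illustrative assumption."""
    vals = np.asarray(bg_L, dtype=float)
    out = vals.copy()
    mu, sd = vals.mean(), vals.std()
    for (i, j), v in np.ndenumerate(vals):
        if sd > 0 and abs(v - mu) > z * sd:
            jj = j - 1 if j > 0 else j + 1  # take the block beside it
            out[i, j] = vals[i, jj]
    return out
```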
After the background is identified, a round region centered at the histogram peak is
delimited as the background candidate. The northwest quadrant outside this region is treated
as the bright object region and the southeast quadrant as the dark object region. For each of
the bright object region and the dark object region, the location of the gravity center is
calculated as the reference color of the object with the following equations:

L_r = average( Σ L_i × h(L_i, u_i) ),  u_r = average( Σ u_i × h(L_i, u_i) ),  (L_i, u_i) ∈ object region

Fig. 3. Blockwise lightness array of the high region of the histogram of a fundus image.

The star and circle symbols in Fig. 2 indicate the reference colors of the bright object region
and the dark object region respectively.
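A numpy sketch of the reference-color computation follows; the helper names and the quadrant-selection callback are hypothetical. It builds the 2-D histogram on the L-u plane and takes the histogram-weighted centroid of the bins inside the chosen object region.

```python
import numpy as np

def reference_color(L, u, region_mask_fn, bins=64):
    """Histogram-weighted gravity center (L_r, u_r) of an object region
    on the L-u plane. `region_mask_fn(Lg, ug)` selects the quadrant
    (e.g. northwest of the background peak for bright objects)."""
    h, L_edges, u_edges = np.histogram2d(L.ravel(), u.ravel(), bins=bins)
    Lc = 0.5 * (L_edges[:-1] + L_edges[1:])  # bin centers
    uc = 0.5 * (u_edges[:-1] + u_edges[1:])
    Lg, ug = np.meshgrid(Lc, uc, indexing="ij")
    m = region_mask_fn(Lg, ug) & (h > 0)
    w = h[m]
    return (Lg[m] * w).sum() / w.sum(), (ug[m] * w).sum() / w.sum()
```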
The object-based color difference image for an object is generated by computing the color
difference between the object reference color and the color of every image pixel with the
following equation:

D_i = √( (L_i − L_r)² + (u_i − u_r)² )

Fig. 4 shows the color difference images of bright objects and dark objects, in which objects
appear as dark pixels. A blank block means that no object candidate was found in that
block.
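The per-pixel computation is a plain Euclidean distance on the L-u plane, vectorized here as a short sketch (function name hypothetical):

```python
import numpy as np

def color_difference(L, u, L_r, u_r):
    """Per-pixel distance on the L-u plane to the object reference
    color (L_r, u_r); object pixels yield small (dark) values."""
    return np.sqrt((L - L_r) ** 2 + (u - u_r) ** 2)
```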

Fig. 4. Object-based color difference images: bright objects (a) and dark objects (b). Dark
spots correspond to objects.

3.2. Object detection


The object detection is also performed blockwise. The basic approach is to detect spot
objects with the watershed transform on the modified gradient [7][8]. The watershed transform is a
very sensitive detection method whose common problem is over-segmentation, which is
partly due to noise but in many cases is caused by irrelevant objects or minor
patterns. In this paper, a pre-thresholding and a post-verification procedure are performed to
deal with the over-segmentation problem. The detection procedure is as follows.
A. De-noising by filtering the image with close and open alternating sequential filters. A
structuring element of a 3 by 3 cross is used.
B. Obtaining inner markers by detecting the regional minima of the image, and then
obtaining the watersheds of the filtered image using the inner markers, to serve as the outer
markers. A thresholding is performed before marker detection in order to avoid
extracting the minima in non-feature regions, e.g. the background. All pixels whose
value is larger than the threshold are set to the maximum pixel value. The threshold is
obtained from the histogram of the color difference image (Fig. 5). As the
peak usually corresponds to the background, the threshold is set at the maximum
differential point on the left side of the peak. For the blocks in which a feature object occupies
the major area, as mentioned above, the peak corresponds to the object; in such a case the
threshold is set at the peak point. The threshold does not need to be accurate because
it is not for segmentation but for obtaining markers (Fig. 6b).
C. Performing the watershed transform on the morphological gradient image superimposed with the
inner markers and outer markers (Fig. 6c).
D. Verifying the watershed results by dilating the watershed contour and then checking the
difference between the mean of the pixel values along the inner contour and that
along the outer contour. If the difference is smaller than a threshold, the watershed is
erased (Fig. 6d).
E. For the dark object detection, e.g. hemorrhages, the blood vessels will be extracted in the
form of many spots distributed along the vessels (Fig. 6f). Therefore, the result of
another successful blood vessel detection method [6] is used to mask these spots, leaving the
non-blood-vessel dark objects, e.g. microaneurysms and hemorrhages (Fig. 6g).
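Steps A through C can be sketched with scipy's morphology and `watershed_ift` routines. This is a simplification, not the paper's exact method: the pre-threshold is passed in rather than derived from the histogram, the image border stands in for the outer markers (the paper derives them from the watersheds of the inner markers), and the verification and vessel-masking steps are omitted.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_spots(diff_img, thresh):
    """Sketch of steps A-C on one block of a color difference image
    (uint8, objects dark). Returns watershed labels and the number of
    inner markers found."""
    cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], bool)  # 3x3 cross SE
    # A. alternating sequential filter: close, then open
    f = ndi.grey_opening(ndi.grey_closing(diff_img, footprint=cross),
                         footprint=cross)
    # B. pre-threshold: suppress minima in non-feature (background) regions
    f = np.where(f > thresh, f.max(), f).astype(np.uint8)
    # inner markers: regional minima of the pre-thresholded image
    minima = (f == ndi.minimum_filter(f, footprint=cross)) & (f < thresh)
    inner, n = ndi.label(minima)
    # simplification: use the block border as the outer (background) marker
    markers = inner.astype(np.int32)
    markers[0, :] = markers[-1, :] = markers[:, 0] = markers[:, -1] = n + 1
    # C. watershed on the morphological gradient (dilation minus erosion)
    grad = (ndi.grey_dilation(f, footprint=cross).astype(int)
            - ndi.grey_erosion(f, footprint=cross).astype(int))
    labels = ndi.watershed_ift(grad.astype(np.uint8), markers)
    return labels, n
```

On a synthetic block with two dark spots on a bright background, the function yields two inner markers and one labeled region per spot.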

Fig. 5. Histogram of the color difference image: thresholding to suppress irrelevant objects
(relevant objects correspond to dark pixels).

Fig. 6. Watershed transform of one image block. (a) Color difference image of bright objects; (b)
Inner markers and outer markers for watershed transform; (c) Watershed results of (b); (d)
Verification of watershed results by checking the pixel value difference between inside and outside
the watershed; (e) Color difference image of dark objects; (f) Watershed results of (e); (g)
Verification of watershed results by masking blood vessels and pixel value checking.
Although the optic disc can be annotated by the above watershed method together with
post-understanding, it is not discussed in this paper. We simply use available techniques to
locate the two normal objects [9][10]. Because there is no preset abnormality on the optic disc,
the obtained bright objects in its region are erased; finally, the abnormalities remain. The
results of abnormality detection of a fundus image are shown in Fig. 7. One can see that most
of the abnormalities belonging to the first class are extracted.

4. Conclusion
An approach using the watershed transform for abnormality detection in color fundus images
is proposed in this paper, which is performed on the object-based color difference image. In
the color difference image obtained from the Luv or Lab color space, the bright objects and dark
objects are highlighted respectively. With a pre-thresholding to suppress the undesired
background and minor objects, and a post-verification to erase obscure candidates, blood
vessels and the optic disc, the watershed transform is successfully utilized to extract the
abnormalities without the over-segmentation problem.

Fig. 7. Results of bright (a) and dark (b) abnormality extraction.

References:
[1] Early Treatment Diabetic Retinopathy Study Research Group, "Grading diabetic retinopathy from stereoscopic
color fundus photographs - an extension of the modified Airlie House classification," Ophthalmology, vol. 98,
pp. 786-806, May 1991.
[2] Diabetic Retinopathy Study Group, "A modification of the Airlie House classification of diabetic retinopathy,"
Invest. Ophthalmol. Vis. Sci., vol. 21, pp. 210-226, 1981.
[3] M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter and R. Jain, "Automated diagnosis and
image understanding with object extraction, object classification, and inferencing in retinal images," in Proc.
Intl. Conf. on Image Processing, vol. 3, pp. 695-698, 1996.
[4] A. Hoover, V. Kouznetsova and M. Goldbaum, "Locating blood vessels in retinal images by
piecewise threshold probing of a matched filter response," IEEE Trans. Med. Imag., vol. 19, no. 3, pp. 203-
210, 2000.
[5] A. Can, H. Shen, J. Turner, H. Tanenbaum, and B. Roysam, "Rapid automated tracing and feature extraction
from retinal fundus images using direct exploratory algorithms," IEEE Trans. Information Tech. in
Biomedicine, vol. 3, pp. 125-138, 1999.
[6] G. Luo, O. Chutatape and S. M. Krishnan, "Performance of amplitude modified second-order Gaussian
filter for the detection of retinal blood vessel," in SPIE BiOS 2001, Proceedings of Ophthalmic Technologies,
vol. 4245, San Jose, Jan. 2001.
[7] L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion
simulations," IEEE Trans. PAMI, vol. 13, pp. 583-598, June 1991.
[8] S. Beucher and F. Meyer, "The morphological approach to segmentation: the watershed transformation," in
Mathematical Morphology in Image Processing, New York: Marcel Dekker, 1993, chapter 12, pp. 433-481.
[9] H. Li, O. Chutatape, S. M. Krishnan and D. Wong, "Automatic detection of exudates in the fundus
image," Proceedings of Image and Vision Computing '00 New Zealand, Hamilton, pp. 322-326, Nov. 2000.
[10] C. Sinthanayothin, J. Boyce, H. Cook and T. Williamson, "Automated localization of the optic disc, fovea
and retinal blood vessels from digital colour fundus images," British Journal of Ophthalmology, vol. 83, pp.
902-910, 1999.
