
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/304292507

Automatic hair color de-identification

Conference Paper · October 2015
DOI: 10.1109/ICGCIoT.2015.7380559

5 authors, including: Jiri Prinosil and Kamil Riha (Brno University of Technology), Malay Kishore Dutta and Anushikha Singh (Amity University).

All content following this page was uploaded by Jiri Prinosil on 30 January 2017.


Automatic Hair Color De-identification
Jiri Prinosil, Ales Krupka, Kamil Riha
Brno University of Technology, Brno, Czech Republic
prinosil@feec.vutbr.cz, akrupa@phd.feec.vutbr.cz, rihak@feec.vutbr.cz

Malay Kishore Dutta, Anushikha Singh
Department of Electronics and Communication Engineering, Amity University, Noida, India
malaykishoredutta@gmail.com, anushikha4june@gmail.com

Abstract—A process of de-identification used for privacy protection in multimedia content should be applied not only to primary biometric traits (face, voice) but to soft biometric traits as well. This paper proposes an automatic hair color de-identification method working with video records. The method involves segmentation of the hair area in the image, basic hair color recognition, and modification of the hair color to produce real-looking de-identified images.

Index Terms—Privacy, Hair Color, Segmentation, De-identification

I. INTRODUCTION

In the last decade, a great amount of research in the field of digital image processing has been focused on methods for detection, tracking, and analysis of particular objects in unknown scenery. Humans and their biometric features (body shape, face, eyes, etc.) are among the most frequent objects of interest in this area. It is therefore not surprising that many of these methods find their place in practical applications of everyday life (such as face/smile detection in cameras, pedestrian counting, gesture recognition in human-machine interfaces, and others). Many of these algorithms are also used in CCTV (Closed Circuit Television) surveillance systems. Modern CCTV surveillance systems are used not only for monitoring and recording the scene video signal, but also for detection of various events (such as access to unauthorized areas), traffic analysis (e.g. car detection and license plate recognition), and person identification on the basis of biometric image data.

In the field of biometry we can distinguish two types of biometric traits. The primary (classical) biometric traits include physiological and behavioral characteristics which are unique to each individual (such as fingerprint, iris, face, gait, and others). The soft biometric traits are defined as follows: "Soft biometric traits are physical, behavioral or adhered human characteristics, classifiable in pre-defined human compliant categories. These categories are, unlike in the classical biometric case, established and time-proven by humans with the aim of differentiating individuals." [1]. This means that soft biometric traits provide some information about the individual but lack the distinctiveness and permanence to sufficiently differentiate any two individuals. Typical soft biometric traits are gender, age, body proportions, hair color, etc. Soft biometric traits can be utilized in combination with primary biometric traits to enhance biometric identification systems, as described in [2], or they can be used directly for rough detection of individuals in a video signal when no primary biometric traits of the individual are known a priori.

The improvements in the area of biometric-based identification raise the question of privacy protection. Thus, a process of concealing the identities of individuals captured in multimedia records is required. Such a process is called de-identification and is usually applied to primary biometric traits (face, voice), but it can also be applied to soft biometric traits, for example hair color.

This paper proposes an automatic method for human hair color de-identification in video records. Section II describes the state of the art of hair segmentation techniques, Section III describes the proposed method for hair region segmentation from video, Section IV describes the hair color de-identification, and Section V concludes the results and, based on them, proposes directions for future work.

II. STATE OF THE ART

Although the de-identification of hair color has not yet been the aim of scientific research, there are several publications dealing with the topic of hair segmentation, which is an important part of automatic hair color de-identification.

In [3], the hair area is detected based on a sliding window which evaluates the color of hair. In [4], color and frequency information is used for creating seeds; hair is then extracted by a matting process using the seeds. In [5], hair is segmented using Graph-Cut and Loopy Belief Propagation. In [6], hair seeds are detected and a growing of the hair region is applied based on color and texture features. In [7], the approach of seed identification and consecutive propagation is used; this procedure is done in two stages, where the second stage uses a specific hair model based on the results of the first stage. In [8], hair seed patches are obtained via active shapes and active contours; these areas are then used to train a model of hair color and texture, according to which the final hair area is determined. In [9], selected hair and background seed regions are used for online support vector machine (SVM) model training; this model is then used to differentiate between the remaining hair/background pixels. In [10], the coarse hair probability map

978-1-4673-7910-6/15/$31.00 © 2015 IEEE
is estimated and this map is consequently refined using the Isomorphic Manifold Inference method to get the optimal hair region. In [11], a part-based model is proposed together with a way of modeling relations between the parts of the head and hair, which helps to achieve better hair identification.

The previous works are designed to work with static images, and therefore the need to distinguish between a head and a background exists. The motivation of this work is to be able to estimate the hair color of people in a video sequence, so that this soft biometric trait can be extracted in real time. The use of a video sequence significantly simplifies the solution of head/background separation, and therefore the hair segmentation procedure can be simplified in favor of shortening the processing time.

III. HAIR SEGMENTATION

The proposed hair-segmentation method presumes the usage of video sequences and thus cannot be utilized for static images. The scheme of the method can be seen in Fig. 1. The hair is determined as the difference between the head and the skin area of the head. Every frame of the video sequence is examined by a face detector for a face occurrence, and it is also supplied to a background subtractor. If a face is detected in the frame, the position of the face is used for the segmentation of the head. A silhouette of the person is obtained from the background subtractor, and the head is given as the part of the silhouette specified by the position returned by the face detector. Once the head mask is given, the skin area in the head needs to be found. This is performed utilizing information about the eye and nose positions. This information is obtained during the face detection stage, and it is used for the selection of proper points, which serve as seeds for a flooding procedure. Using the flooding procedure, the skin area is defined. Finally, the hair mask is given as the difference between the head segment and the skin segment.

Fig. 1. The block scheme of the hair-segmentation method.

A. Background subtraction

The background subtraction technique tries to estimate a model of the image background and to apply this model to moving object detection. The detection relies on a periodical comparison of the actual pixel value I(x, y) with the background model represented by B(x, y) using the following:

|I(x, y) − B(x, y)| > Th    (1)

where Th is a suitable threshold which decides whether the given pixel belongs to the background or to the foreground. The background model is estimated by the mixture-of-Gaussians method [12]; the implementation available in the OpenCV library is used. This implementation also addresses shadow detection, so shadow appearances can be eliminated during the silhouette extraction. An example of motion detection is shown in Fig. 2.

Fig. 2. Examples of motion estimation: (a) input image, (b) background model, (c) motion mask.

B. Face and facial features detection

For the face detection, the fast and robust method described in [13] is utilized. This method deals with color images to model skin-color-like similarity. The estimated similarity map is then used as input to the widely used cascade Viola-Jones detector [14]. Using the detector, a face occurrence in a frame can be located and a sub-window containing the face is specified. The detected face is further tracked using a rectangular grid, which is updated by an optical flow technique [15].

Upon the detected face, the positions of 9 facial features are estimated (see Fig. 3). An approach based on the pictorial structures described in [16] is applied for this purpose. The authors utilized a generative model of the facial feature positions combined with a discriminative model of the facial feature appearance. The probability distribution over the joint position of the features is modelled using a mixture of Gaussian trees, that is, a Gaussian mixture model where the covariance of each component is restricted to form a tree structure with each variable dependent on a single "parent" variable. Using a tree-structured covariance enables an efficient search for the feature positions using distance transform methods [17]. The appearance of each facial feature is independent of the other facial features and is modeled discriminatively by a feature/non-feature classifier trained using a variation of the AdaBoost algorithm with Haar-like wavelets.

C. Head segmentation

A head in the frame is obtained using the silhouette from the background subtractor and the face position from the face detector. The face position is represented by a rectangle containing the face. This rectangle is enlarged by a factor of 1.5 in order to cover the whole head area. This rectangle thus contains the part of the silhouette corresponding to the head. Usually, the silhouette obtained from the background subtractor is not ideal.

2015 International Conference on Green Computing and Internet of Things (ICGCIoT)
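Going back to the background-subtraction step, the per-pixel foreground test of rule (1) can be sketched in a few lines. This is a simplified sketch assuming a single static grayscale background frame; the paper's actual implementation uses the OpenCV mixture-of-Gaussians background model [12] instead of a fixed frame.

```python
def foreground_mask(frame, background, th):
    """Per-pixel foreground test |I(x,y) - B(x,y)| > Th from rule (1).

    `frame` and `background` are equal-sized 2-D lists of grayscale
    intensities; returns a binary mask (1 = foreground, 0 = background).
    """
    return [
        [1 if abs(i - b) > th else 0 for i, b in zip(f_row, b_row)]
        for f_row, b_row in zip(frame, background)
    ]

# A static background with one bright moving blob in the current frame:
background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],
              [10, 210, 12]]

mask = foreground_mask(frame, background, th=30)
# Only the two blob pixels differ from the background by more than Th = 30.
print(mask)  # [[0, 1, 0], [0, 1, 0]]
```

In the real pipeline, B(x, y) is updated adaptively by the mixture-of-Gaussians model rather than kept static, which is what makes the method robust to gradual illumination changes.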
Concretely, it can consist of unconnected regions, and thus the part corresponding to the head cannot be used as a head mask directly.

Fig. 3. Example of facial feature detection.

Thus, a convex hull is constructed from the point set which is given as the union of the regions in the head area. This convex hull then represents the head mask. The convex hull is constructed only from the pixels of the upper part of the rectangle to avoid including pixels of the arms. The resulting head mask is shown in Fig. 4(a). Although the head mask does not cover all of the head area, for the intended purpose it is sufficient to have the hair from the top of the head.

D. Skin segmentation

The skin segmentation is the most crucial part of the processing. When the skin of a head is segmented correctly, the difference between the head mask and the skin mask defines the hair mask. The floodfilling approach is used for the skin segmentation. For the floodfilling, an optimal chrominance component range was selected (empirically determined from the YCbCr color model). The flooding starts repeatedly from different seed-points, whose positions correspond to the detected facial feature positions. The reason for the usage of multiple seed-points is that the skin color varies in different places of a face, and thus floodfilling from a single seed-point would not work satisfactorily. When using multiple seed-points, the area with a similar color around a particular seed-point is flooded. Then, the skin area is obtained as the union of the areas flooded from the different seed-points.

Similarly to the case of head segmentation, such a union of flooded areas does not give an ideal skin mask. For example, the eyes' area is not flooded because the pixels' values are too different from the seed-point values. Thus, the same approach as during head segmentation, i.e., a convex hull of the union of flooded areas, is constructed. The final hair area is then given as the difference between the head area and the skin area. An example of hair segmentation is shown in Fig. 4.

Fig. 4. (a) Segmented head, (b) skin segmentation, (c) hair.

When the color of the hair is similar to the skin color, the flood can also propagate into the hair area. Therefore, to obtain the correct skin segment, only a part of the flooded area around the current seed-point is selected. The pixels around the seed-point are considered to be skin as long as the shape of the flood shrinks when going away from the seed-point. This way, only a relatively compact part of the flooded area is selected. This is based on the assumption that the texture of skin is more homogeneous than the texture of hair, so the flooding procedure forms more compact shapes on skin parts.

Fig. 5. (a) Binary shape, (b) distance image, (c) resulting selection.

Here follows the selection principle of a compact part. The flooded area is represented by the binary shape shown in Fig. 5(a). Further, the distance transform of morphological type [18] is applied on this shape to get the distance image illustrated in Fig. 5(b). In this image, the appropriate regional maximum is found according to the position of the current seed-point (the black pixel in Fig. 5(b)). From this regional maximum, the process of descending to lower levels of the distance image is performed in individual steps. At the beginning, the appropriate regional maximum is marked as positive and the other regional maxima are marked as negative. Then the descending starts. In every step, the pixels of the current level are marked as positive or negative: pixels neighboring negative pixels are marked as negative, and the other pixels of the current level connected with positive pixels are marked as positive. These steps are repeated until the level of one is reached. The compact part is then composed of the positive pixels, as can be seen in Fig. 5(c).

IV. HAIR COLOR DE-IDENTIFICATION

A contour describing the hair region is the output of the hair segmentation process. The aim of hair color de-identification is to modify the values of all pixels within the contour. The HSV color space is more suitable than the original RGB color space for the purpose of handling the pixels' color values. So the hair region is converted into the HSV color space, where each pixel's value is defined by three components: the hue of the color, the color saturation, and the illumination intensity value; these components can be modified separately. After the modifications are done, the image area is converted back into the RGB color space.

The initial idea was to add different random values, constant across all pixels, to the three color components. A particular component's values are in the range 0-255, so if a new value exceeds this range, the integer remainder after dividing by 255 is used instead. Although this approach provides a high degree of de-identification, the resulting image looks artificial (see Fig. 6).

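The naive constant-shift idea can be sketched as follows. This is an illustrative sketch only: pixels are represented as plain (h, s, v) tuples, the RGB↔HSV conversions are omitted, and the wrap-around implements the remainder-after-dividing-by-255 rule described above.

```python
import random

def shift_hsv_constant(hsv_pixels, shifts=None):
    """Add one random constant per HSV channel to every hair pixel.

    `hsv_pixels` is a list of (h, s, v) tuples with components in 0-255.
    Values leaving the 0-255 range wrap around via the integer remainder
    after dividing by 255, as described in the text above.
    """
    if shifts is None:
        # One random constant per channel, shared by all pixels.
        shifts = tuple(random.randint(0, 255) for _ in range(3))
    out = []
    for h, s, v in hsv_pixels:
        out.append(tuple(
            c + d if c + d <= 255 else (c + d) % 255
            for c, d in zip((h, s, v), shifts)
        ))
    return out

# With a fixed shift the wrap-around is easy to follow:
pixels = [(20, 100, 250), (30, 200, 40)]
print(shift_hsv_constant(pixels, shifts=(10, 60, 10)))
# [(30, 160, 5), (40, 5, 50)]
```

Because the shift is identical for every pixel, relative differences within the hair region are preserved, yet the wrap-around in the value channel is exactly what produces the artificial look reported in Fig. 6.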
Fig. 6. (a) Original image, (b) artificial hair color de-identification.

As our experiments proved, a real-looking hair color modification cannot be achieved by applying the same approach to all hair color types, so prior hair color recognition is required. For this purpose, the algorithm described in [19] was utilized. The algorithm recognizes 5 basic hair colors (black, brown, blond, red and white). It was evaluated using a video-sequence database created in cooperation with JIMI CZ a.s. for the development and evaluation of biometric identification algorithms. The database consists of video records with an image resolution of 1080×1920 pixels at 25 frames per second, containing 40 individuals captured during 1 to 4 sessions organized within one year. For the purpose of evaluating the hair color recognition algorithm, the individuals' hair colors in each session were manually labeled. The number of individuals included in the database for each basic hair color is: black - 11, brown - 13, red - 1, blond - 8, white/gray - 5. The result of the evaluation is shown in Table I in the form of a confusion matrix.

TABLE I. CONFUSION MATRIX OF HAIR COLOR RECOGNITION.

Hair color    black  brown  red  blond  white/gray
black           26     4     0     0       0
brown            3    27     1     2       0
red              0     1     2     0       0
blond            0     0     0    20       0
white/gray       0     0     0     0      11

The further hair color modification step depends on the estimated hair color. In this paper, brown is chosen as the reference color, and the way to convert it into the other basic colors is described:

• Brown → red: a constant value is subtracted from the hue component to shift it into the red color domain.

• Brown → black: the saturation and value components are non-linearly compressed (high values are suppressed).

• Brown → blond: the contrast enhancement algorithm retinex [20] is applied on the whole hair area, and a constant value is added to the hue and value components.

• Brown → white: the contrast enhancement algorithm is applied, the saturation component is non-linearly compressed, and the value component is non-linearly expanded.

All the conversions can be performed reversibly, i.e., it is possible to get the brown hair color back from all the remaining hair colors. The only exception is when the original hair color is black or white, because in this case, due to the low color saturation, the hue component is not defined (it can be random). An example of applying the proposed de-identification algorithm is shown in Fig. 7.

Fig. 7. (a) Original image (brown hair color), (b-e) natural hair color de-identification (black, red, blonde and gray/white hair color).

V. CONCLUSION AND FUTURE WORK

The main objective of this paper was the proposal of an automatic hair de-identification method. This method analyses and modifies the hair color of a human from the near-to-frontal view, and it is applicable to video sequences only. De-identified real-looking images with a chosen basic hair color are the result of this method. The proposed method also has several limitations, which can be formulated as suggestions for future research work:

• In the case of pure white or black hair color, the hue component is not defined, so a de-identified image with a color other than white or black looks artificial.

• Since the hair area segmentation algorithm is not pixel-precise, several pixels near the segmented area contain the original hair color information and thus can be used for original hair color identification. This could be solved by blurring the image at the borders of the hair area. However, this decreases the image quality a bit.

• In this work, only pure basic hair colors were considered, but in real-world videos more complex hair colors exist. Recognizing and modifying such complex hair colors to achieve reliable and real-looking de-identified records is a great challenge for future research.

ACKNOWLEDGMENT

Research described in this paper was financed by the National Sustainability Program under grant LO1401. International cooperation in the frame of COST IC1206 was supported by the Czech Ministry of Education under grant no. LD14091. For the research, the infrastructure of the SIX Center was used.

REFERENCES

[1] A. Dantcheva, C. Velardo, A. D'Angelo, and J. Dugelay, "Bag of Soft Biometrics for Person Identification: new trends and challenges," Multimedia Tools and Applications, 2010, pp. 739-777.
[2] A. K. Jain, S. C. Dass, and K. Nandakumar, "Soft Biometric Traits for Personal Recognition Systems," Proceedings of International Conference on Biometric Authentication, 2004, pp. 731-738.
[3] Y. Yacoob and L. S. Davis, "Detection and Analysis of Hair," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, July 2006, pp. 1164-1169.
[4] C. Rousset, "Frequential and color analysis for hair mask segmentation," 15th IEEE International Conference on Image Processing, Oct. 2008, pp. 2276-2279.
[5] K.-C. Lee, D. Anguelov, B. Sumengen, and S. B. Gokturk, "Markov random field models for hair and face segmentation," 8th IEEE International Conference on Automatic Face & Gesture Recognition, Sept. 2008, pp. 1-6.
[6] U. Lipowezky, O. Mamo, and A. Cohen, "Using integrated color and texture features for automatic hair detection," IEEE 25th Convention of Electrical and Electronics Engineers in Israel, Dec. 2008, pp. 51-55.
[7] D. Wang, S. Shan, W. Zeng, H. Zhang, and X. Chen, "A novel two-tier Bayesian based method for hair segmentation," 16th IEEE International Conference on Image Processing, Nov. 2009, pp. 2401-2404.
[8] P. Julian, C. Dehais, F. Lauze, V. Charvillat, A. Bartoli, and A. Choukroun, "Automatic Hair Detection in the Wild," 20th International Conference on Pattern Recognition, Aug. 2010, pp. 4617-4620.
[9] D. Wang, X. Chai, H. Zhang, H. Chang, W. Zeng, and S. Shan, "A novel coarse-to-fine hair segmentation method," IEEE International Conference on Automatic Face & Gesture Recognition and Workshops, March 2011, pp. 233-238.
[10] D. Wang, S. Shan, H. Zhang, W. Zeng, and X. Chen, "Isomorphic Manifold Inference for hair segmentation," 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, April 2013, pp. 1-6.
[11] N. Wang, H. Ai, and F. Tang, "What are good parts for hair shape modeling?," IEEE Conference on Computer Vision and Pattern Recognition, June 2012, pp. 662-669.
[12] Z. Zivkovic, "Improved adaptive Gaussian mixture model for background subtraction," Proceedings of the 17th International Conference on Pattern Recognition, Aug. 2004, pp. 28-31.
[13] J. Prinosil and Z. Smekal, "Robust Real Time Face Tracking System," 32nd International Conference on Telecommunications and Signal Processing TSP 2009, 2009, pp. 101-104.
[14] P. Viola and M. Jones, "Robust Real-time Object Detection," Vancouver, Canada, 2001.
[15] B. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), 1981, pp. 674-679.
[16] P. Felzenszwalb and D. Huttenlocher, "Pictorial structures for object recognition," International Journal of Computer Vision, 61(1), 2005.
[17] M. Everingham, J. Sivic, and A. Zisserman, "Hello! My name is... Buffy - Automatic naming of characters in TV video," Proceedings of the 17th British Machine Vision Conference (BMVC2006), 2006, pp. 889-908.
[18] L. Vincent, "Granulometries, Segmentation, and Morphological Algorithms," Lecture Notes for Morphological Image and Signal Processing Workshop, September 1995, pp. 37-41.
[19] A. Krupka, J. Prinosil, K. Riha, J. Minar, and M. Dutta, "Hair Segmentation for Color Estimation in Surveillance Systems," MMEDIA 2014, The Sixth International Conference on Advances in Multimedia, 2014, pp. 102-107.
[20] D. J. Jobson, Z. Rahman, and G. A. Woodell, "A Multiscale Retinex for Bridging the Gap Between Color Images and the Human Observation of Scenes," IEEE Transactions on Image Processing, vol. 6, no. 7, 1997, pp. 965-976.


