CNN Face Recognition

The document outlines a Python program for face recognition using a Convolutional Neural Network (CNN). It details the steps for preparing image data, creating a CNN model, and testing the model on unseen images, including the structure of the CNN and the training process. The program also includes code snippets for image preprocessing, model training, and making predictions on test images.


Ex. No 3: FACE RECOGNITION USING CNN

AIM:
To write a Python program for face recognition using a CNN.

ALGORITHM:

1. Getting images for the case study

 Download the required data.

 The data contains cropped face images of people, divided into training and testing folders.

 Train the CNN model using the images in the training folder, then test it on unseen images from the testing folder to check whether the model can recognize the face label of each unseen image.

2. Creating a mapping between indices and face names

 The class_indices dictionary has face names as keys and their numeric labels as values.

 We need to swap it, because the classifier returns its answer as a numeric label and we need to recover the face name from it (see the short sketch after this list).

3. Creating the CNN face recognition model

Create a CNN model with

 2 convolution layers
 2 max-pooling layers
 1 flattening layer
 1 hidden fully connected (ANN) layer
 1 output layer with one neuron per face (30 neurons for this dataset)

4. Testing the CNN classifier on unseen images

 Using any one of the images from the testing data folder, check whether the model is able to recognize the face.

 Compare the predicted label with the true face name to see whether the face is recognized correctly. You can try other faces, and you can also add your own pictures and train the model again.
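A minimal sketch of the index-to-name swap described in step 2, assuming class_indices has already been obtained from a Keras data generator (the sample values below are only illustrative):

# Hypothetical example: invert {face name: index} into {index: face name}
class_indices = {'face1': 0, 'face2': 1, 'face3': 2}   # illustrative values
index_to_name = {index: name for name, index in class_indices.items()}
print(index_to_name)   # {0: 'face1', 1: 'face2', 2: 'face3'}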
PROGRAM

from keras.preprocessing.image import ImageDataGenerator

# Deep Learning CNN model to recognize faces

'''This script uses a database of face images and creates a CNN model on top
of it to test if a given image is recognized correctly or not'''

'''####### IMAGE PRE-PROCESSING for TRAINING and TESTING data #######'''

# Specifying the folder where the training images are present
TrainingImagePath='C:\\Users\\Face-Images\\Face Images\\Final Training Images'

# Understand more about ImageDataGenerator at the link below
# https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

# Defining pre-processing transformations on raw images of training data
# These hyperparameters generate slightly twisted versions
# of the original images, which leads to a better model, since it learns
# on a good and bad mix of images
train_datagen = ImageDataGenerator(
        shear_range=0.1,
        zoom_range=0.1,
        horizontal_flip=True)

# Defining pre-processing transformations on raw images of testing data
# No transformations are done on the testing images
test_datagen = ImageDataGenerator()

# Generating the Training Data
training_set = train_datagen.flow_from_directory(
        TrainingImagePath,
        target_size=(64, 64),
        batch_size=32,
        class_mode='categorical')

# Generating the Testing Data
test_set = test_datagen.flow_from_directory(
        TrainingImagePath,
        target_size=(64, 64),
        batch_size=32,
        class_mode='categorical')

# Printing class labels for each face
test_set.class_indices

Found 1426 images belonging to 30 classes.
Found 1426 images belonging to 30 classes.

{'face1': 0,
'face10': 1,
'face11': 2,
'face12': 3,
'face13': 4,
'face14': 5,
'face15': 6,
'face16': 7,
'face17': 8,
'face18': 9,
'face19': 10,
'face2': 11,
'face20': 12,
'face21': 13,
'face22': 14,
'face23': 15,
'face24': 16,
'face25': 17,
'face26': 18,
'face27': 19,
'face28': 20,
'face29': 21,
'face3': 22,
'face30': 23,
'face4': 24,
'face5': 25,
'face6': 26,
'face7': 27,
'face8': 28,
'face9': 29}
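Note that the code above reuses TrainingImagePath for the test generator, which is why both calls report the same 1426 images. If a separate testing folder should be used for validation instead, a minimal sketch is shown below (the path is an assumption based on the folder layout used later for single predictions):

# Assumed path; adjust to the actual location of the testing folder
TestingImagePath='C:\\Users\\Face-Images\\Face Images\\Final Testing Images'

test_set = test_datagen.flow_from_directory(
        TestingImagePath,
        target_size=(64, 64),
        batch_size=32,
        class_mode='categorical')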

'''############ Creating lookup table for all faces ############'''

# class_indices has the numeric tag for each face
TrainClasses=training_set.class_indices

# Storing the face and the numeric tag for future reference
ResultMap={}
for faceValue,faceName in zip(TrainClasses.values(),TrainClasses.keys()):
    ResultMap[faceValue]=faceName

# Saving the face map for future reference
import pickle
with open("ResultsMap.pkl", 'wb') as fileWriteStream:
    pickle.dump(ResultMap, fileWriteStream)

# The model will give the answer as a numeric tag
# This mapping will help to get the corresponding face name for it
print("Mapping of Face and its ID",ResultMap)

# The number of neurons for the output layer is equal to the number of faces
OutputNeurons=len(ResultMap)
print('\n The Number of output neurons: ', OutputNeurons)

Mapping of Face and its ID {0: 'face1', 1: 'face10', 2: 'face11', 3:
'face12', 4: 'face13', 5: 'face14', 6: 'face15', 7: 'face16', 8:
'face17', 9: 'face18', 10: 'face19', 11: 'face2', 12: 'face20', 13:
'face21', 14: 'face22', 15: 'face23', 16: 'face24', 17: 'face25', 18:
'face26', 19: 'face27', 20: 'face28', 21: 'face29', 22: 'face3', 23:
'face30', 24: 'face4', 25: 'face5', 26: 'face6', 27: 'face7', 28:
'face8', 29: 'face9'}

The Number of output neurons: 30
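Because the face map is pickled above, it can be reloaded in a later session without rebuilding the data generators; a minimal sketch:

import pickle

# Reload the saved face map (ResultsMap.pkl written earlier)
with open("ResultsMap.pkl", 'rb') as fileReadStream:
    ResultMap = pickle.load(fileReadStream)
print("Reloaded mapping:", ResultMap)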

'''######################## Create CNN deep learning model ########################'''
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPool2D
from keras.layers import Flatten
from keras.layers import Dense

'''Initializing the Convolutional Neural Network'''


classifier= Sequential()

''' STEP--1 Convolution
# Adding the first layer of the CNN
# We use the input shape (64,64,3) because we are using the TensorFlow backend
# It means 3 matrices of size (64x64) pixels representing the Red, Green and Blue components of the pixels
'''
classifier.add(Convolution2D(32, kernel_size=(5, 5), strides=(1, 1),
                             input_shape=(64,64,3), activation='relu'))

'''# STEP--2 MAX Pooling'''
classifier.add(MaxPool2D(pool_size=(2,2)))

'''############## ADDITIONAL LAYER of CONVOLUTION for better accuracy #################'''
classifier.add(Convolution2D(64, kernel_size=(5, 5), strides=(1, 1),
                             activation='relu'))

classifier.add(MaxPool2D(pool_size=(2,2)))

'''# STEP--3 Flattening'''
classifier.add(Flatten())

'''# STEP--4 Fully Connected Neural Network'''
classifier.add(Dense(64, activation='relu'))

classifier.add(Dense(OutputNeurons, activation='softmax'))

'''# Compiling the CNN'''
#classifier.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
classifier.compile(loss='categorical_crossentropy', optimizer='adam', metrics=["accuracy"])
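Before training, the layer structure can be inspected with Keras' built-in summary; a short sketch (with the layers above and default 'valid' padding, the output shapes should run roughly (60,60,32) -> (30,30,32) -> (26,26,64) -> (13,13,64) -> flattened 10816 -> 64 -> 30):

# Print layer-by-layer output shapes and parameter counts
classifier.summary()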

###########################################################
import time
# Measuring the time taken by the model to train
StartTime=time.time()

# Starting the model training
classifier.fit_generator(
        training_set,
        steps_per_epoch=30,
        epochs=10,
        validation_data=test_set,
        validation_steps=10)

EndTime=time.time()
print("###### Total Time Taken: ", round((EndTime-StartTime)/60), 'Minutes ######')

C:\Users\SHRAVAN SHAAM\AppData\Local\Temp\ipykernel_14320\1702966363.py:44: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
  classifier.fit_generator(

Epoch 1/10
30/30 [==============================] - 17s 523ms/step - loss: 14.6886 - accuracy: 0.0550 - val_loss: 3.1126 - val_accuracy: 0.1156
Epoch 2/10
30/30 [==============================] - 16s 522ms/step - loss: 3.0547 - accuracy: 0.1025 - val_loss: 2.7244 - val_accuracy: 0.2000
Epoch 3/10
30/30 [==============================] - 16s 529ms/step - loss: 2.7650 - accuracy: 0.1649 - val_loss: 2.6866 - val_accuracy: 0.1844
Epoch 4/10
30/30 [==============================] - 17s 563ms/step - loss: 2.5384 - accuracy: 0.2302 - val_loss: 2.1881 - val_accuracy: 0.3250
Epoch 5/10
30/30 [==============================] - 18s 588ms/step - loss: 2.2510 - accuracy: 0.3076 - val_loss: 2.1322 - val_accuracy: 0.3594
Epoch 6/10
30/30 [==============================] - 16s 531ms/step - loss: 2.1064 - accuracy: 0.3344 - val_loss: 2.0524 - val_accuracy: 0.3688
Epoch 7/10
25/30 [========================>.....] - ETA: 2s - loss: 1.8626 - accuracy: 0.4262
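Once training finishes, the fitted network can be saved so that it does not have to be retrained before every prediction; a minimal sketch using the standard Keras save/load calls (the file name is only an example):

from keras.models import load_model

# Save the trained classifier to disk and reload it when needed
classifier.save('face_cnn_model.h5')
classifier = load_model('face_cnn_model.h5')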

'''########### Making single predictions ###########'''

import numpy as np
from keras.utils import load_img,img_to_array

ImagePath='C:\\Users\\Face-Images\\Face Images\\Final Testing Images\\face4\\3face4.jpg'
test_image=load_img(ImagePath,target_size=(64, 64))
test_image=img_to_array(test_image)
test_image=np.expand_dims(test_image,axis=0)

result=classifier.predict(test_image,verbose=0)
#print(training_set.class_indices)

print('####'*10)
print('Prediction is: ',ResultMap[np.argmax(result)])

########################################
Prediction is: face8
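To go beyond a single image, the same generator can be used to score the model over many batches; a short sketch, assuming the test_set generator built earlier:

# Average loss and accuracy over 10 batches drawn from test_set
loss, accuracy = classifier.evaluate(test_set, steps=10, verbose=0)
print('Test loss: ', round(loss, 4))
print('Test accuracy: ', round(accuracy, 4))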
