Expt 1: Demonstrate the process of creating a simple feed-forward neural network for the Abalone
dataset using the TensorFlow and Keras libraries.
Aim: To demonstrate the process of creating a simple feed-forward neural network for the Abalone
dataset using the TensorFlow and Keras libraries.
The Abalone dataset contains physical measurements of abalones, a type of sea snail, including
attributes like length, diameter, height, and weight, along with their age determined by counting shell
rings. Key steps in analysis typically include data integrity verification, exploratory data analysis, and
applying machine learning models for age prediction.
Abalone Dataset Overview
The Abalone dataset consists of 4,177 entries with 9 attributes, including:
Sex: Categorical variable indicating male (M), female (F), or infant (I).
Physical Measurements: Length, diameter, height, and various
weight measurements (whole, shucked, viscera, shell).
Rings: Integer target variable representing the age of the abalone, determined
by counting the rings on its shell.
Steps for Analysis
1. Data Loading and Cleaning:
Load the dataset using appropriate libraries (e.g., pandas in Python).
Check for missing values and correct any data entry errors, such as unrealistic
height values.
2. Exploratory Data Analysis (EDA):
Visualize distributions of physical measurements using histograms and box plots.
Analyze correlations between different attributes to identify relationships.
3. Feature Engineering:
Create new features, such as volume (Length × Diameter × Height) and total
weight metrics (see the sketch after this list).
Consider transformations (e.g., logarithmic) to normalize skewed distributions.
4. Modeling:
Split the dataset into training and testing sets.
Apply regression models (e.g., linear regression, XGBoost) to predict the number
of rings based on physical measurements.
Evaluate model performance using metrics like R-squared and Root Mean
Squared Error (RMSE).
5. Outlier Detection:
Identify and remove outliers that may skew results, using methods like
residual analysis from regression models.
6. Final Evaluation:
Assess the final model's performance on the test set and refine as necessary.
Consider cross-validation techniques to ensure robustness of the model.
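The feature-engineering and evaluation steps above can be sketched in a few lines before any network is built. This is a minimal illustration, assuming the DataFrame df described earlier with the nine named columns; the Volume and Log Whole weight column names and both helper functions are introduced here purely for illustration, and the RMSE/R-squared calls use scikit-learn.

import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error, r2_score

def add_engineered_features(df):
    out = df.copy()
    # Step 3: a volume proxy (Length x Diameter x Height) and a log-transformed
    # weight column; log1p keeps zero values finite
    out['Volume'] = out['Length'] * out['Diameter'] * out['Height']
    out['Log Whole weight'] = np.log1p(out['Whole weight'])
    return out

def report_regression_metrics(y_true, y_pred):
    # Steps 4 and 6: RMSE and R-squared for any regression model's predictions
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    print(f"RMSE: {rmse:.3f}, R^2: {r2_score(y_true, y_pred):.3f}")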
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print ("The first task done by P.Madhurima 22A81A61A8\n\n")
Output:
Snippet:
url = "https://archive.ics.uci.edu/ml/machine-learning-
databases/abalone/abalone.data"
df = pd.read_csv(url, header=None)
print("df = pd.read_csv(url, header=None)")
print("Dataset Preview (df.head()):")
print(df.head())
Output:
Snippet:
df1 = pd.read_csv(url)
print("df1 = pd.read_csv(url)")
print("Dataset Preview (df1.head):")
print(df1.head())
Output:
Snippet:
columns=["Sex","Length","Diameter","Height","Whole
weight","Shucked weight","Viscera weight","Shell weight","Rings"]
df.columns=columns
print("Dataset Preview:")
df.head()
Output:
Snippet:
print("\nDataset Shape (Rows, Columns):")
print(df.shape)
Output:
Snippet:
print("\nDataset Info:")
df.info()
Output:
Snippet:
print("\nStatistical Summary of Dataset:")
print(df.describe().T)
Output:
Snippet:
print("\n Number of Missing Values in Each Column :")
print(df.isnull().sum())
Output:
Snippet:
X=df['Sex'].value_counts()
labels=X.index #Unique categories in 'Sex'
values=X.values #Counts of each category
plt.pie(values,labels=labels,autopct='%1.1f%%',startangle=90)
plt.title('Distribution of Sex done by P.Madhurima')
plt.show()
Output:
Snippet:
print("\nMean Values of Features Grouped by 'Sex':")
print(df.groupby('Sex').mean())
Output:
Snippet:
df=pd.get_dummies(df,columns=['Sex'],drop_first=True)
X=df.drop('Rings',axis=1)
y=df['Rings']
scaler=StandardScaler()
X_scaled=scaler.fit_transform(X)
y=y/y.max()
X_train,X_test,y_train,y_test=train_test_split(X_scaled,y,test_size=0.2,random_state=42)
model=Sequential([Dense(64,input_dim=X_train.shape[1],activation='relu'),
Dense(32,activation='linear'),
Dense(1,activation='linear')])
model.compile(optimizer='adam',loss='mse',metrics=['mae'])
history=model.fit(X_train,y_train,validation_data=(X_test,y_test),epochs=50,batch_size=32)
Output:
Snippet:
test_loss,test_mae=model.evaluate(X_test,y_test)
print(f'Test Loss: {test_loss}, Test MAE: {test_mae}')
Output:
Snippet:
plt.figure(figsize=(12,6))
plt.plot(history.history['loss'],label='Training Loss')
plt.plot(history.history['val_loss'],label='Validation Loss')
plt.title('Model Loss,Demo by P.Madhurima in Lab 26-12-24')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Output:
Snippet:
plt.figure(figsize=(12,6))
plt.plot(history.history['mae'],label='Training MAE')
plt.plot(history.history['val_mae'],label='Validation MAE')
plt.title('Model MAE by Madhurima')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
Output:
Snippet:
y_pred=model.predict(X_test)
y_pred_original=y_pred.flatten()*df['Rings'].max()
y_test_original=y_test*df['Rings'].max()
for i in range(10):
    print(f'Actual Rings: {y_test_original.iloc[i]:.2f}, Predicted Rings: {y_pred_original[i]:.2f}')
Output:
Snippet:
plt.figure(figsize=(10,6))
plt.scatter(range(len(y_test_original)),y_test_original,label='Actual',alpha=0.7)
plt.scatter(range(len(y_pred_original)),y_pred_original,label='Predicted',alpha=0.7)
plt.legend()
plt.title('Actual vs Predicted values by Madhurima')
plt.xlabel('Sample Index')
plt.ylabel('Rings')
Output:
Expt 2: Demonstrate the process of saving and loading weights of the neural network constructed
in experiment 1 manually and with checkpoints.
What is a Callback?
In the context of machine learning and deep learning, a callback is a function or a set of functions
that are executed at certain stages of the training process. Callbacks allow you to customize the
behavior of your training loop, enabling you to monitor the training process, modify the training
parameters, or save the model at specific intervals.
Common use cases for callbacks include (see the sketch after this list):
Early Stopping: Stop training when a monitored metric has stopped improving.
Learning Rate Scheduling: Adjust the learning rate based on the epoch or
performance metrics.
Model Checkpointing: Save the model at certain intervals or when it achieves a new
best performance.
Logging: Record metrics for visualization or analysis.
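As a minimal sketch of the first and last use cases above, the snippet below wires a built-in EarlyStopping callback and a small custom logging callback into model.fit. It assumes the model and train/test splits from Experiment 1; EpochLogger is a hypothetical name, and only standard tf.keras.callbacks APIs are used.

import tensorflow as tf

# Stop training once val_loss has not improved for `patience` epochs,
# and roll back to the best weights seen so far
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=5, restore_best_weights=True)

# A tiny custom callback that logs metrics at the end of every epoch
class EpochLogger(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch + 1}: loss={logs.get('loss')}, val_loss={logs.get('val_loss')}")

# Usage with the Experiment 1 model (validation data is needed for val_loss):
# model.fit(X_train, y_train, validation_data=(X_test, y_test),
#           epochs=50, batch_size=32, callbacks=[early_stop, EpochLogger()])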
Why Use Checkpoints?
Model checkpoints are a way to save the state of a model during training. They are particularly
useful for several reasons:
1. Preventing Data Loss: If training is interrupted (due to a crash, power failure, etc.), you
can resume from the last saved checkpoint instead of starting over.
2. Best Model Preservation: By saving the model at various points, you can keep the
best-performing version based on validation metrics, ensuring that you do not lose
the best model due to overfitting or other issues.
3. Experimentation: Checkpoints allow you to experiment with different training strategies
without losing progress. You can revert to a previous state if a new approach does not
yield better results.
4. Long Training Times: For models that take a long time to train, checkpoints allow you to
save progress and avoid losing hours of computation.
Model Checkpoints Basics
A model checkpoint typically involves the following (see the sketch after this list):
Saving the Model Weights: The parameters of the model are saved to disk.
Saving the Optimizer State: The state of the optimizer is also saved, allowing you to
resume training with the same learning rate and momentum.
Saving Training Metadata: Information such as the current epoch, loss, and accuracy can
be saved to help resume training effectively.
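The pieces of state listed above can be captured either with a weights-only file (as the snippets below do) or with a full-model save, which also stores the architecture and the optimizer state. The sketch below illustrates both using a tiny stand-in model; the abalone_model.keras file name and the stand-in layer size are assumptions for illustration only.

import tensorflow as tf

# Tiny stand-in for the Experiment 1 model (9 scaled input features assumed)
model = tf.keras.Sequential([tf.keras.Input(shape=(9,)), tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')

# Weights-only save: just the layer parameters (what the snippets below use)
model.save_weights('model_weights_manual.weights.h5')
model.load_weights('model_weights_manual.weights.h5')

# Full-model save: architecture + weights + optimizer state in one file,
# so training can resume with the same optimizer configuration
model.save('abalone_model.keras')
restored = tf.keras.models.load_model('abalone_model.keras')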
Practical Walkthrough
Here’s a simple example of how to implement model checkpoints using TensorFlow/Keras:
Step 1: Import Libraries
Step 2: Define Your Model
Step 3: Set Up Model Checkpointing
Step 4: Train the Model with Checkpoints
Step 5: Load the Best Model
Snippet:
print("Demo by P.Madhurima with saved weights and check points")
import os
from tensorflow.keras.callbacks import ModelCheckpoint
Output:
Demo by P.Madhurima with saved weights and check points
Snippet:
model.save_weights('model_weights_manual.weights.h5')
print("Model weights saved manually to 'model_weights_manual.weights.h5'")
Output:
Snippet:
model = Sequential([
Dense(64, input_dim=X_train.shape[1], activation='relu'),
Dense(32, activation='linear'),
Dense(1, activation='linear')])
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
model.load_weights('model_weights_manual.weights.h5')
print("Model weights loaded successfully.")
test_loss, test_mae = model.evaluate(X_test, y_test)
print(f"Test Loss after loading weights: {test_loss}")
print(f"Test MAE after loading weights: {test_mae}")
Output:
Snippet:
checkpoint_dir = './checkpoints'
os.makedirs(checkpoint_dir, exist_ok=True)
checkpoint_path = os.path.join(checkpoint_dir, 'model_checkpoint.weights.h5')
checkpoint_callback = ModelCheckpoint(
filepath=checkpoint_path,
save_weights_only=True,
save_best_only=True,
monitor='val_loss',
mode='min',
verbose=1 )
print("\nTraining the model with checkpointing")
history_with_checkpoint = model.fit(
X_train, y_train,
validation_data=(X_test, y_test),
epochs=10,
batch_size=32,
callbacks=[checkpoint_callback])
Output:
Snippet:
model.load_weights(checkpoint_path)
print("Model weights loaded successfully from the checkpoint.")
Output:
Snippet:
test_loss_checkpoint, test_mae_checkpoint = model.evaluate(X_test, y_test)
print(f"Test Loss after loading checkpoint weights: {test_loss_checkpoint}")
print(f"Test MAE after loading checkpoint weights: {test_mae_checkpoint}")
Output:
Expt 3: Construct a regression model for predicting the fuel efficiency of cars using the MPG
dataset
Snippet:
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
columns = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
data = pd.read_csv(url, names=columns, na_values='?', comment='\t', sep=' ',
skipinitialspace=True)
print("Demo by P.Madhurima- 22A81A61A8")
Output:
Demo by P.Madhurima- 22A81A61A8
Snippet:
Output:
Snippet:
print("\n Missing values in each column")
print(data.isnull().sum())
Output:
Snippet:
data=data.dropna()
data['Origin']=data['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
data=pd.get_dummies(data,columns=['Origin'],drop_first=True)
X = data.drop('MPG', axis=1)
y = data['MPG']
scaler=StandardScaler()
X_scaled = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y,
test_size=0.2,random_state=42)
print("\nTraining set shape:")
print("X_train.shape", X_train.shape)
print("y_train.shape", y_train.shape)
print("\nTest set shape:")
print("X_test.shape", X_test.shape)
print("y_test.shape", y_test.shape)
Output:
Snippet:
model = Sequential([Dense(64,activation='relu',input_dim=X_train.shape[1]),
Dense(32, activation='relu'),
Dense(1, activation='linear'),
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(X_train, y_train, validation_split=0.2,
epochs=100,batch_size=32, verbose=1)
Output:
Snippet:
test_loss, test_mae = model.evaluate(X_test, y_test)
print(f"\nTest Loss (MSE): {test_loss}")
print(f"Test MAE: {test_mae}")
Output:
Snippet:
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss By P.Madhurima')
plt.xlabel('Epochs')
plt.ylabel('Loss (MSE)')
plt.legend()
plt.show()
Output:
Snippet:
plt.figure(figsize=(10, 5))
plt.plot(history.history['mae'], label='Training MAE')
plt.plot(history.history['val_mae'], label='Validation MAE')
plt.title('Model MAE (By Madhurima)')
plt.xlabel('Epochs')
plt.ylabel('Mean Absolute Error (MPG)')
plt.legend()
plt.show()
Output:
Snippet:
y_pred = model.predict(X_test)
plt.figure(figsize=(10, 5))
plt.scatter(range(len(y_test)), y_test, label='Actual MPG', alpha=0.7)
plt.scatter(range(len(y_pred)), y_pred, label='Predicted MPG',
alpha=0.7)
plt.title('Actual vs Predicted MPG (By P.Madhurima)')
plt.xlabel('Sample Index')
plt.ylabel('MPG')
plt.legend()
plt.show()
Output:
Expt 4: Develop a feed-forward neural network on the MNIST-Handwritten digits dataset.
Snippet:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
print("Expt 4 : done by P.Madhurima")
Output:
Expt 4 : done by P.Madhurima
Snippet:
print("Training data shape:", X_train.shape)
print("Testing data shape:", X_test.shape)
Output:
Snippet:
plt.imshow(X_train[1], cmap='gray')
plt.title(f"Label: {y_train[1]}")
plt.show()
print(X_train[1])
Output:
Snippet:
X_train = X_train / 255.0
X_test = X_test / 255.0
X_train = X_train.reshape(-1, 28*28)
X_test = X_test.reshape(-1, 28*28)
Output:
Snippet:
y_train=tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test=tf.keras.utils.to_categorical(y_test, num_classes=10)
model=Sequential([Dense(128,activation='relu',input_dim=28*28),
Dense( 64,activation='relu'),
Dense(10,activation='softmax')])
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_split=0.2,
epochs=10,batch_size=32, verbose=1)
Output:
Snippet:
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print(f"\nTest Loss: {test_loss}")
print(f"Test Accuracy: {test_accuracy}")
Output:
Snippet:
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy by Madhurima')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Output:
Snippet:
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss by Madhurima')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Output:
Snippet:
predictions = model.predict(X_test[:10])
for i in range(10):
    actual_label = tf.argmax(y_test[i]).numpy()
    predicted_label = tf.argmax(predictions[i]).numpy()
    print(f"Actual Label: {actual_label}, Predicted Label: {predicted_label}")
    plt.imshow(X_test[i].reshape(28, 28), cmap='gray')
    plt.title(f"Actual: {actual_label}, Predicted: {predicted_label} [Demo by P.Madhurima]")
    plt.show()
Output:
Expt 5. Develop a convolutional neural network on the Fashion-MNIST dataset.
Fashion-MNIST is a dataset consisting of 70,000 grayscale images of fashion products, organized into
10 categories, with 7,000 images per category. It includes a training set of 60,000 examples and a test
set of 10,000 examples, making it a popular alternative to the original MNIST dataset for machine
learning tasks.
Key Features of the Fashion-MNIST Dataset:
Image Size: Each image is 28x28 pixels in size and is represented in grayscale.
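The ten integer labels correspond to product categories. The short sketch below loads the dataset and displays one training image with its category name; class_names follows the standard Fashion-MNIST label order.

from tensorflow.keras.datasets import fashion_mnist
import matplotlib.pyplot as plt

# Standard Fashion-MNIST label order: integer label i maps to class_names[i]
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
plt.imshow(X_train[0], cmap='gray')
plt.title(f"Label {y_train[0]}: {class_names[y_train[0]]}")
plt.show()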
Snippet:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.datasets import fashion_mnist
import matplotlib.pyplot as plt
print("Exp-05 [Demo by P.Madhurima(22A81A61A8)]")
Output:
Exp-05 [Demo by P.Madhurima(22A81A61A8)]
Snippet:
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
print("Training data shape:", X_train.shape)
print("Testing data shape:", X_test.shape)
Output:
Snippet:
plt.imshow(X_train[0], cmap='gray')
plt.title(f"Label: {y_train[0]} [Demo by P.Madhurima(22A81A61A8)]")
plt.show()
Output:
Snippet:
X_train = X_train / 255.0
X_test = X_test / 255.0
X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.reshape(-1, 28, 28, 1)
model = Sequential([
Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=(28, 28, 1)),
MaxPooling2D(pool_size=(2,2)),
Conv2D(64, kernel_size=(3,3), activation='relu'),
MaxPooling2D(pool_size=(2,2)),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_split=0.2, epochs=10,
batch_size=32, verbose=1)
Output:
Snippet:
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print(f"\nTest Loss: {test_loss}")
print(f"Test Accuracy: {test_accuracy}")
Output:
Snippet:
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy [Demo by P.Madhurima(22A81A61A8)]')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Output:
Snippet:
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss [Demo by P.Madhurima(22A81A61A8)]')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Output:
Snippet:
y_pred = model.predict(X_test[:10])
for i in range(10):
    actual_label = y_test[i]
    predicted_label = tf.argmax(y_pred[i]).numpy()
    plt.imshow(X_test[i].reshape(28, 28), cmap='gray')
    plt.title(f"Actual: {actual_label}, Predicted: {predicted_label} [Demo by P.Madhurima(22A81A61A8)]")
    plt.show()
Output:
Expt 6. Develop and train the VGG-16 network to classify images of Cats & Dogs.
VGG-16 is a convolutional neural network architecture designed for image classification, consisting of
16 layers with weights. It employs a series of convolutional layers followed by max-pooling layers to
extract features from images, making it effective for tasks like classifying cat and dog images.
Overview of VGG-16
Architecture: VGG-16 is characterized by its deep architecture, which includes 16 layers
with learnable weights. The model primarily uses small convolutional filters (3x3) and a
consistent max-pooling strategy (2x2) to reduce spatial dimensions.
Input Size: The standard input size for VGG-16 is 224x224 pixels, which allows the model
to process images effectively while maintaining a manageable number of parameters.
Key Components
Convolutional Layers: VGG-16 employs multiple convolutional layers to capture spatial
hierarchies in images. Each convolutional layer is followed by a ReLU activation
function, which introduces non-linearity into the model (see the sketch after this list).
Max-Pooling Layers: After a set of convolutional layers, max-pooling layers are used to
down-sample the feature maps, reducing their dimensionality and helping to retain the
most important features.
Fully Connected Layers: At the end of the convolutional and pooling layers, VGG-16
includes fully connected layers that serve to classify the extracted features. The final layer
typically uses a softmax activation function to output probabilities for each class.
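The convolution-then-pooling pattern described above can be expressed as a reusable block. The sketch below builds only the first two VGG-16 stages with the Keras functional API to make the pattern concrete; vgg_block is a hypothetical helper name, and the full pre-trained network is loaded later in this experiment with tf.keras.applications.VGG16.

from tensorflow.keras import layers, models

def vgg_block(x, filters, num_convs):
    # Stack 3x3 convolutions with ReLU, then halve the spatial size with 2x2 pooling
    for _ in range(num_convs):
        x = layers.Conv2D(filters, (3, 3), padding='same', activation='relu')(x)
    return layers.MaxPooling2D((2, 2), strides=(2, 2))(x)

inputs = layers.Input(shape=(224, 224, 3))   # standard VGG-16 input size
x = vgg_block(inputs, 64, 2)    # stage 1: two 3x3 convs, 64 filters -> 112x112 output
x = vgg_block(x, 128, 2)        # stage 2: two 3x3 convs, 128 filters -> 56x56 output
demo = models.Model(inputs, x)
demo.summary()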
Application in Cat and Dog Classification
Dataset Preparation: For cat and dog classification, the dataset is usually organized into
two folders, one for each class. The images are resized to 224x224 pixels to match the
input size of VGG-16.
Transfer Learning: VGG-16 can be used as a pre-trained model, leveraging weights
learned from a large dataset (like ImageNet) to improve performance on the cat and dog
classification task. This approach reduces training time and enhances accuracy, especially
when the dataset is limited.
Training Process: The model is compiled with a loss function suitable for binary classification
(e.g., binary cross-entropy) and an optimizer like Adam. The training process involves
feeding the model batches of images and adjusting the weights based on the loss calculated
from predictions.
VGG-16
The VGG-16 model is a convolutional neural network (CNN) architecture that was
proposed by the Visual Geometry Group (VGG) at the University of Oxford. It is
characterized by its depth, consisting of 16 layers, including 13 convolutional layers and 3
fully connected layers. VGG-16 is renowned for its simplicity and effectiveness, as well as
its ability to achieve strong performance on various computer vision tasks, including image
classification and object recognition. The model’s architecture features a stack of
convolutional layers followed by max-pooling layers, with progressively increasing depth.
This design enables the model to learn intricate hierarchical representations of visual
features, leading to robust and accurate predictions. Despite its simplicity compared to
more recent architectures, VGG-16 remains a popular choice for many deep learning
applications due to its versatility and excellent performance.
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is an annual competition
in computer vision where teams tackle tasks including object localization and image
classification. VGG16, proposed by Karen Simonyan and Andrew Zisserman in 2014,
achieved top ranks in both tasks, detecting objects from 200 classes and classifying images
into 1000 categories.
Snippet:
import tensorflow as tf
import cv2
import os
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.utils import get_file
print("Exp-06->VGG-16 [Demo by P.Madhurima(22A81A61A8)]")
Output:
Exp-06->VGG-16 [Demo by P.Madhurima(22A81A61A8)]
Snippet:
from google.colab import drive
drive.mount('/content/drive')
import os
import zipfile
google_drive_path = "/content/drive/MyDrive/DL Lab/Exp
6/cats_and_dogs_filtered (2).zip"
extract_path = "/content/cats_and_dogs_filtered"
if not os.path.exists(google_drive_path):
    print("Dataset file not found! Check the path in Google Drive.")
else:
    print("Dataset found in Google Drive!")
Output:
Snippet:
if not os.path.exists(extract_path):
    print("Extracting dataset... Please wait.")
    with zipfile.ZipFile(google_drive_path, 'r') as zip_ref:
        zip_ref.extractall("/content")
    print("Dataset extracted successfully!")
else:
    print("Dataset already extracted.")
Output:
Snippet:
train_dir = os.path.join(extract_path, 'train')
validation_dir = os.path.join(extract_path, 'validation')
if not os.path.exists(train_dir) or not os.path.exists(validation_dir):
    print(" Training or validation directories are missing!")
else:
    print(" Training and validation directories exist.")
    print(" Training folder contents:", os.listdir(train_dir))
    print(" Validation folder contents:", os.listdir(validation_dir))
Output:
Snippet:
import cv2
import matplotlib.pyplot as plt
# Define a sample image path (change 'cats' to 'dogs' if needed)
sample_image_path = os.path.join(train_dir, 'cats',
os.listdir(os.path.join(train_dir, 'cats'))[0])
# Load the image using OpenCV
img = cv2.imread(sample_image_path)
if img is None:
    print("\n\nImage not loaded! Check the file path.")
else:
    print("\n\nImage loaded successfully!")
    print("\n\nImage Shape:", img.shape)
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img_resized = cv2.resize(img_rgb, (224, 224))
    plt.imshow(img_resized)
    plt.axis("off")
    plt.title("Sample Cat Image (Resized to 224x224) [Demo by P.Madhurima(22A81A61A8)]")
    plt.show()
Output:
Image loaded successfully!
Image Shape: (374, 500, 3)
Snippet:
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1.0/255.0,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest'
)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(224, 224),
batch_size=32,
class_mode='binary'
)
validation_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
target_size=(224, 224),
batch_size=32,
class_mode='binary'
)
Output:
Snippet:
base_model = tf.keras.applications.VGG16(weights='imagenet',
include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False
base_model.summary()
Output:
Snippet:
model = tf.keras.Sequential([
base_model,
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
Output:
Snippet:
history = model.fit(
train_generator,
validation_data=validation_generator,
epochs=10,
verbose=1
)
Output:
Snippet:
test_loss, test_acc = model.evaluate(validation_generator)
print(f"\nModel Test Accuracy: {test_acc:.2f}")
Output:
Snippet:
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy [Demo by P.Madhurima(22A81A61A8)]')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Output:
Snippet:
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss by Madhurima')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Output: