19CSE456 Neural Network and
Deep Learning Laboratory
List of Experiments
Week # Experiment Title
1 Introduction to the lab and Implementation of a simple Perceptron (Hardcoding)
2 Implementation of Perceptron for Logic Gates (Hardcoding, Sklearn, TF)
3 Implementation of Multilayer Perceptron for XOR Gate and other classification problems with ML toy datasets (Hardcoding & TF)
4 Implementation of MLP for Image Classification with MNIST dataset (Hardcoding & TF)
5 Activation Functions, Loss Functions, Optimizers (Hardcoding & TF)
6 Lab Evaluation 1 (based on topics covered in Weeks 1 to 5)
7 Convolutional Neural Networks for Toy Datasets (MNIST & CIFAR)
8 Convolutional Neural Networks for Image Classification (Oxford Pets, Tiny ImageNet, etc.)
9 Recurrent Neural Networks for Sentiment Analysis with IMDB Movie Reviews
10 Long Short-Term Memory (LSTM) for Stock Prices (Yahoo Finance API)
List of Experiments contd.
Week # Experiment Title
11 Implementation of Autoencoders and Denoising Autoencoders (MNIST/CIFAR)
12 Boltzmann Machines (MNIST/CIFAR)
13 Restricted Boltzmann Machines (MNIST/CIFAR)
14 Hopfield Neural Networks (MNIST/CIFAR)
15 Lab Evaluation 2 (based on CNN, RNN, LSTM, and AEs)
16 Case Study Review (Phase 1)
17 Case Study Review (Phase 2)
Perceptron
• A single-layer perceptron is the basic unit of a neural network
• A perceptron consists of input values, weights, a bias, a weighted sum, and an activation function
[Figure: a perceptron with inputs $x_0, \dots, x_4$ and weights $w_0, \dots, w_4$ (conventionally $x_0 = 1$, so that $w_0$ serves as the bias), computing the weighted sum $z = \sum_{i=0}^{4} w_i x_i$ and the output $y' = \varphi(z)$]
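As a minimal hardcoded sketch of this computation (illustrative only; NumPy and a step activation are assumptions, since the slide does not fix an activation):

import numpy as np

def perceptron(x, w):
    # Weighted sum z = sum_{i=0}^{4} w_i * x_i; x[0] = 1 carries the bias
    z = np.dot(w, x)
    # Step activation phi(z)
    return 1 if z >= 0 else 0

# Example with four inputs plus the bias input x0 = 1
x = np.array([1.0, 0.5, -0.2, 0.8, 0.1])
w = np.array([-0.1, 0.4, 0.3, -0.5, 0.2])
print(perceptron(x, w))  # prints 0 or 1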
MLP
Forward Pass
[Figure: a fully connected MLP with input layer 0 (inputs $x_1$, $x_2$), two hidden layers of units $\theta_j^{(l)}$, and output layer 3 producing $O_1^{(3)}$ and $O_2^{(3)}$; edges carry weights $w_{ij}^{(l)}$, with $w_{0j}^{(l)}$ as the bias weights]

In the forward pass, each unit $j$ in layer $l$ computes a weighted sum of the previous layer's outputs and applies the activation function:

$$z_j^{(l)} = w_{0j}^{(l)} + \sum_i w_{ij}^{(l)} \, O_i^{(l-1)}, \qquad O_j^{(l)} = \varphi\big(z_j^{(l)}\big)$$
Back Propagation
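A compact sketch of one forward and backward pass through a 2-2-1 network (sigmoid activations and squared-error loss are assumptions made for illustration; the figure does not fix either):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # hidden-layer weights; column 0 is the bias weight w_{0j}
W2 = rng.normal(size=(1, 3))  # output-layer weights; column 0 is the bias weight
x = np.array([0.5, -0.3])     # inputs x1, x2
t = np.array([1.0])           # target output
lr = 0.1                      # learning rate

# Forward pass: prepend 1 so the bias weight multiplies a constant input
a0 = np.concatenate(([1.0], x))
o1 = sigmoid(W1 @ a0)
a1 = np.concatenate(([1.0], o1))
o2 = sigmoid(W2 @ a1)

# Backward pass: chain rule through E = 0.5 * (o2 - t)^2
delta2 = (o2 - t) * o2 * (1 - o2)                # output-layer error term
delta1 = (W2[:, 1:].T @ delta2) * o1 * (1 - o1)  # error propagated to the hidden layer

# Gradient-descent updates
W2 -= lr * np.outer(delta2, a1)
W1 -= lr * np.outer(delta1, a0)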
MNIST Dataset
The MNIST dataset (Modified National Institute of Standards and Technology) is
one of the most well-known datasets in the field of machine learning and computer
vision
• The dataset consists of 70,000 grayscale images of handwritten digits from 0 to 9
• Each image is 28x28 pixels, providing a total of 784 features per image
MLP for Image Classification
Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation
import tensorflow as tf

def load_and_preprocess_data():
    # Load MNIST dataset
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    # Normalize pixel values to the range [0, 1] as floating-point numbers
    x_train = x_train.astype('float32') / 255.0
    x_test = x_test.astype('float32') / 255.0

    # Reshape each 28 x 28 image into a 1D array of length 784 (28*28);
    # the -1 allows the number of images to be inferred automatically
    x_train = x_train.reshape(-1, 28*28)
    x_test = x_test.reshape(-1, 28*28)

    # Convert the integer labels (0-9) into one-hot encoded vectors
    y_train = tf.keras.utils.to_categorical(y_train, 10)
    y_test = tf.keras.utils.to_categorical(y_test, 10)

    return (x_train, y_train), (x_test, y_test)
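Calling the function confirms the expected shapes: 60,000 flattened training images and 10,000 test images, each with 784 features and a 10-way one-hot label:

(x_train, y_train), (x_test, y_test) = load_and_preprocess_data()
print(x_train.shape, y_train.shape)  # (60000, 784) (60000, 10)
print(x_test.shape, y_test.shape)    # (10000, 784) (10000, 10)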
MLP for Image Classification
Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation
def create_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dropout(0.2),  # randomly drops 20% of units during training
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')  # one output per digit class
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
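For example, the model can be instantiated and inspected with the standard Keras summary:

model = create_model()
model.summary()  # prints each layer's output shape and parameter count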
MLP Visualization
The plot_model function is used to visualize the architecture of a Keras model and save it as an image file.

from tensorflow.keras.utils import plot_model
plot_model(
    model,
    to_file="model.png",
    show_shapes=True,
    show_layer_names=True,
    rankdir="TB",        # sets the direction of the graph: "TB" (top-to-bottom) or "LR" (left-to-right)
    expand_nested=False,
    dpi=96,
)

# Display "model.png" within the IPython environment
from IPython.display import Image
Image('model.png')
MLP for Image Classification
Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation
During each epoch, the model updates its weights after processing each batch of 128 samples (the batch size). The model trains for 20 epochs, meaning it processes the entire training dataset 20 times.

def train(model, x_train, y_train, x_test, y_test):
    # Train the model; 20% of the training data is held out for validation
    history = model.fit(x_train, y_train,
                        batch_size=128,
                        epochs=20,
                        validation_split=0.2,
                        verbose=1)
    return history

[Figure: the training set divided into mini-batches $B_1, B_2, B_3, B_4, \dots, B_n$ of 128 samples each]
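Putting the pieces together (assuming the data and model created by the earlier functions):

history = train(model, x_train, y_train, x_test, y_test)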
MLP for Image Classification
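A minimal sketch of the evaluation step, assuming the trained model, test data, and History object from the previous steps (matplotlib for plotting is an assumption):

# Evaluate accuracy on the held-out test set
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f"Test accuracy: {test_acc:.4f}")

# Plot training vs. validation accuracy across epochs
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()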
Week 4 Exercises
1. MLP Classifier for MNIST Handwritten Digits
Objective: To build, train, evaluate, and visualize the performance of an MLP image
classifier using the MNIST dataset.
2. MLP Classifier for CIFAR-10 Dataset
Objective: To build, train, evaluate, and visualize the performance of an MLP image
classifier using the CIFAR-10 dataset.
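For Exercise 2, only the data loading changes: CIFAR-10 images are 32 × 32 RGB, so each flattens to 3072 features and the first Dense layer's input_shape becomes (3072,). A hedged starting point:

def load_and_preprocess_cifar10():
    # Load CIFAR-10 (color images, labels 0-9)
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

    # Normalize pixel values to [0, 1]
    x_train = x_train.astype('float32') / 255.0
    x_test = x_test.astype('float32') / 255.0

    # Flatten each 32 x 32 x 3 image to 3072 features
    x_train = x_train.reshape(-1, 32*32*3)
    x_test = x_test.reshape(-1, 32*32*3)

    # One-hot encode the labels
    y_train = tf.keras.utils.to_categorical(y_train, 10)
    y_test = tf.keras.utils.to_categorical(y_test, 10)

    return (x_train, y_train), (x_test, y_test)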
Pushing Your Code to the GitHub Repository
Clone the Repository
git clone https://github.com/YourUsername/MLP_SKL_TF_W4.git
Navigate to the Repository
cd MLP_SKL_TF_W4
Create a New Branch
git checkout -b <<Your_Roll_No>>
Add Your Code Folder
mkdir <<MyCodeFolder>>
cd <<MyCodeFolder>>
(copy your code files here, then return to the repository root with cd ..)
Pushing Your Code to the GitHub Repository contd.
Add and Commit Changes
git add <<MyCodeFolder>>
git commit -m "Add MyCodeFolder"
Push Changes to the Repository
git push origin <<Your_Roll_No>>
Create a Pull Request