
19CSE456 Neural Network and

Deep Learning Laboratory


List of Experiments
Week # Experiment Title

1 Introduction to the lab and Implementation of a simple Perceptron (Hardcoding)

2 Implementation of Perceptron for Logic Gates (Hardcoding, Sklearn, TF)


3 Implementation of Multilayer Perceptron for XOR Gate and other classification problems with ML toy datasets (Hardcoding & TF)

4 Implementation of MLP for Image Classification with MNIST dataset (Hardcoding & TF)
5 Activation Functions, Loss Functions, Optimizers (Hardcoding & TF)

6 Lab Evaluation 1 (based on topics covered from w1 to w5)

7 Convolutional Neural Networks for Toy Datasets (MNIST & CIFAR)

8 Convolutional Neural Networks for Image Classification (Oxford Pets, Tiny ImageNet, etc.)

9 Recurrent Neural Networks for Sentiment Analysis with IMDB Movie Reviews

10 Long Short-Term Memory (LSTM) for Stock Prices (Yahoo Finance API)
List of Experiments contd.
Week # Experiment Title

11 Implementation of Autoencoders and Denoising Autoencoders (MNIST/CIFAR)

12 Boltzmann Machines (MNIST/CIFAR)

13 Restricted Boltzmann Machines (MNIST/CIFAR)

14 Hopfield Neural Networks (MNIST/CIFAR)

15 Lab Evaluation 2 (based on CNN, RNN, LSTM, and AEs)

16 Case Study Review (Phase 1)

17 Case Study Review (Phase 2)


Perceptron
• A single-layer perceptron is the basic unit of a neural network
• A perceptron consists of input values, weights, a bias, a weighted sum, and an activation function

[Figure: a single perceptron. Inputs x₀, x₁, …, x₄ (with x₀ = 1 serving as the bias input) are multiplied by weights w₀, w₁, …, w₄ and summed, z = Σᵢ₌₀⁴ wᵢ xᵢ; the activation function then produces the output y′ = φ(z).]
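A minimal NumPy sketch of this computation (the input values, weights, and the step activation below are made-up examples, not taken from the slide):

import numpy as np

# Inputs x0 ... x4, with x0 = 1 acting as the bias input (assumption from the diagram)
x = np.array([1.0, 0.5, -1.2, 0.3, 0.8])
# Weights w0 ... w4 (w0 plays the role of the bias weight)
w = np.array([0.1, 0.4, -0.2, 0.7, 0.05])

z = np.dot(w, x)              # weighted sum: z = sum_i w_i * x_i
y_out = 1 if z >= 0 else 0    # step activation phi(z)

print(f"z = {z:.3f}, y' = {y_out}")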
Multilayer Perceptron (MLP)
Input Layer, Hidden Layers, Output Layer

[Figure: an MLP with inputs x₁ and x₂, two hidden layers, and an output layer producing O₁³ and O₂³. The input layer and each hidden layer also contain a bias node fixed at 1. wᵢⱼˡ denotes the weight from node i of layer l−1 to neuron j of layer l (i = 0 for the bias node), θⱼˡ the weighted sum at neuron j of layer l, and Oⱼˡ its activation. Layers are indexed 0 (input) through 3 (output).]
MLP’s Feedforward Phase

[Figure: the same network annotated for the feedforward (forward pass) phase: activations are computed layer by layer, from the input layer (0) through the hidden layers (1, 2) to the output layer (3), producing O₁³ and O₂³.]
MLP Feedforward Algorithm
Step 1: Initialize the weights and biases for all layers in the network

Step 2: Assign the input features to the input layer of the network

Step 3: For each hidden layer and the output layer:

Compute the weighted sum of inputs for each neuron in the layer:

θⱼ = Σᵢ wᵢⱼ · xᵢ + bⱼ

Apply the activation function to the weighted sum to get the output (activation) of each neuron:

Oⱼ = φ(θⱼ)

Step 4: Compute the final outputs of the network using the activations of the last hidden layer
and the output layer's weights and biases.
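A short from-scratch sketch of Steps 1 through 4, assuming an illustrative 2-4-2 network with sigmoid activations (the layer sizes, random initialization, and input values are assumptions, not taken from the slides):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)

# Step 1: initialize the weights and biases for all layers (2 -> 4 -> 2)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden layer -> output layer

# Step 2: assign the input features to the input layer
x = np.array([0.5, -1.0])

# Step 3: weighted sum and activation for the hidden layer
theta1 = x @ W1 + b1        # theta_j = sum_i w_ij * x_i + b_j
O1 = sigmoid(theta1)        # O_j = phi(theta_j)

# Step 4: final outputs from the last hidden layer's activations
theta2 = O1 @ W2 + b2
O2 = sigmoid(theta2)

print("Network output:", O2)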
MLP in Popular ML/DL Libraries

Scikit-Learn:
• Beginner-friendly, simple API
• Limited flexibility, predefined models
• Good for small to medium-sized datasets
• Easy to deploy, but less optimized for production

PyTorch:
• Beginner-friendly, dynamic graphs
• High flexibility, dynamic computation
• Good for research and prototyping
• Less optimized for production, but easy to deploy

TensorFlow:
• Moderate learning curve, requires understanding of computational graphs
• High flexibility, custom layers and models
• Optimized for large-scale, production-ready models
• Optimized for production, supports distributed training
MLP in Scikit Learn
Pipeline: Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation

from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(
    hidden_layer_sizes=(32, 16),  # two hidden layers: the first with 32 neurons, the second with 16
    max_iter=1000,
    activation='relu',            # activation function for the hidden layers
    solver='adam',                # optimization algorithm used for weight optimization
    random_state=42,
    learning_rate_init=0.001,     # initial learning rate for weight updates
    batch_size=32,                # number of samples per gradient update
    early_stopping=True,
    validation_fraction=0.2,
    n_iter_no_change=20,
    alpha=0.0001                  # L2 regularization term
)
MLP in Scikit Learn
Pipeline: Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation
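A minimal end-to-end sketch of this pipeline on the Iris dataset (the 80/20 split, feature scaling, and variable names are illustrative assumptions, not taken from the slides):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Load and preprocess the data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Model creation (same hyperparameters as on the previous slide)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    activation='relu', solver='adam', random_state=42,
                    learning_rate_init=0.001, batch_size=32,
                    early_stopping=True, validation_fraction=0.2,
                    n_iter_no_change=20, alpha=0.0001)

# Model training
mlp.fit(X_train, y_train)

# Model evaluation
y_pred = mlp.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))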
MLP in TensorFlow
Pipeline: Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation

import tensorflow as tf
from tensorflow.keras import layers, models

# Create the model
model = models.Sequential([
    layers.Dense(16, activation='relu', input_shape=(4,)),  # dense layer: 16 neurons, 4 input features, ReLU activation
    layers.Dropout(0.3),                                     # randomly drops 30% of the neurons during training
    layers.Dense(8, activation='relu'),                      # dense layer: 8 neurons, ReLU activation
    layers.Dropout(0.2),                                     # randomly drops 20% of the neurons during training
    layers.Dense(3, activation='softmax')                    # output layer: 3 neurons, Softmax activation
])

# Compile the model with the adam optimizer, categorical_crossentropy loss, and accuracy metric
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
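Before model.fit on the next slide can run, the data must be loaded and preprocessed; a plausible sketch for the Iris dataset (the split, scaling, and one-hot encoding of the labels to match categorical_crossentropy are assumptions, not shown in the slides):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.utils import to_categorical

# Load and preprocess the data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# One-hot encode the labels (3 classes) for the categorical_crossentropy loss
y_train = to_categorical(y_train, num_classes=3)
y_test = to_categorical(y_test, num_classes=3)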
MLP in TensorFlow
Pipeline: Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation

# Train the model with early stopping
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',         # watch the validation loss
    patience=20,                # stop if it does not improve for 20 consecutive epochs
    restore_best_weights=True   # restore the weights from the epoch with the best validation loss
)

# Train for up to 200 epochs with batch_size 32, a validation_split of 0.2,
# the early_stopping callback, and verbosity 1
history = model.fit(X_train, y_train,
                    epochs=200,
                    batch_size=32,
                    validation_split=0.2,
                    callbacks=[early_stopping],
                    verbose=1)
MLP in TensorFlow
Pipeline: Load and Preprocess the Data → Model Creation → Model Training → Model Evaluation

# Evaluate the model
test_loss, test_accuracy = model.evaluate(X_test, y_test, verbose=0)

model.evaluate measures the model's performance on the test data and returns the loss value and the metrics specified during model compilation.

verbose
• 0: silent mode; no output is shown
• 1: a progress bar is displayed
• 2: one line per epoch is displayed
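The Week 3 exercises also ask for loss/accuracy curves and a confusion matrix; one way to produce them from the history object and the trained model (a sketch; the plotting details are not from the slides):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Loss and accuracy curves from the history returned by model.fit
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()

# Confusion matrix on the test set (labels are one-hot encoded, so take the argmax)
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)
print(confusion_matrix(y_true, y_pred))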
Week 3 Exercises
1. Hardcoding the MLP Feedforward Algorithm
Objective: Understand and implement a basic Multilayer Perceptron (MLP) feedforward algorithm from scratch in Python, without any machine learning libraries.

2. MLP Classifier Using Scikit-Learn for Iris Classification


Objective: Implement an MLP classifier using Scikit-Learn to classify the Iris dataset.
Perform data preprocessing, model building, training, and evaluation, including loss
and accuracy curves and a confusion matrix.

3. MLP Classifier Using TensorFlow for Iris Classification


Objective: Implement an MLP classifier using TensorFlow to classify the Iris dataset.
Perform data preprocessing, model building, training, and evaluation, including loss
and accuracy curves and a confusion matrix.
