FR. CONCEICAO RODRIGUES COLLEGE OF ENGINEERING
Department of Computer Engineering
Academic Year: 2025 - 2026    Experiment No.: 1    Estimated Time: 02 Hours
Course & Semester: TE (CE), Sem V    Subject Name: Deep Learning & Reinforcement Learning Lab
Module No.: 01    Chapter Title: Neural Networks
Experiment Type: Software Performance    Subject Code: 25PEC13CE12
Name of Student: Manav Shetty    Roll No.: 10279
Date of Performance: 18/07/2025    Date of Submission: 15/08/2025
CO Mapping: CO1 – Comprehend the concepts of Neural Networks.
Marks Distribution:
● Logic & Problem-Solving Approach / Coding Skills (3 Marks)
● Post Lab Questions attempted (3 Marks)
● Active Participation and attentiveness/behavior in Lab (2 Marks)
● Timely submission (2 Marks)
● Total (10 Marks)
Objective of Experiment:
A. Implement a single-layer perceptron to classify linearly separable data.
B. Design and train an MLP for solving XOR classification.
Pre-Requisite:
Tools: Python, Jupyter Notebook, NumPy
Theory:
1) Artificial Neuron: A neuron is a processing element inspired by how the brain works. Like a
biological neuron, each artificial neuron performs some computation, and each neuron is
interconnected with other neurons. As in the brain, the interconnections between neurons store
the knowledge the network learns; this knowledge is stored as parameters.
Figure 1: A biological neuron and an artificial neuron [1]
2) Properties of Artificial Neural Nets (ANNs)
● Many neuron-like threshold switching units.
● Many weighted interconnections among units.
● Highly parallel, distributed process.
● Emphasis on tuning parameters or weights automatically.
3) Perceptron
One type of ANN system is based on a unit called a perceptron. A perceptron takes a vector of
real-valued inputs, calculates a linear combination of these inputs, then outputs 1 if the result is
greater than some threshold and -1 otherwise.
Figure 2: Perceptron [2]
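As a minimal sketch (with illustrative values that are not part of this manual), the perceptron computation is a dot product of weights and inputs followed by a hard threshold:

import numpy as np

# Sketch of a perceptron forward pass: linear combination of inputs, then hard threshold to +1 / -1.
# The weights, inputs, and threshold below are assumed values for illustration only.
def perceptron_output(x, w, threshold=0.0):
    return 1 if np.dot(w, x) > threshold else -1

print(perceptron_output(np.array([0.5, -1.0]), np.array([2.0, 1.0])))  # -1, since 2*0.5 + 1*(-1) = 0 is not > 0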
4) Representing Logic Gates using Perceptron
Any linear decision boundary can be represented either by a linear equation or by a Boolean function. In
this experiment, we represent logic gates using a perceptron.
Example: Perceptron for AND gate
Figure 3: Perceptron for AND gate
The perceptron equation is ŷ = w0·x0 + w1·x1 + w2·x2 with x0 = 1 (the bias input); the output is 1 when the weighted sum h > 0.
Figure 3: Truth table of AND Gate [3]
Table 1: Perceptron for AND
𝑥1 𝑥2 t h
-1 -1 -1 𝑤0 + 𝑤1(− 1) + 𝑤2(− 1) < 0
-1 1 -1 𝑤0 + 𝑤1(− 1) + 𝑤2(1) < 0
1 -1 -1 𝑤0 + 𝑤1(1) + 𝑤2(− 1) < 0
1 1 1 𝑤0 + 𝑤1(1) + 𝑤2(1) > 0
Solving the inequalities for each row of the truth table, one solution is
w1 = w2 = 2, w0 = −1. This gives a linear decision boundary.
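As a quick check (a minimal sketch, using the ±1 coding of inputs and targets above), these weights can be verified in NumPy:

import numpy as np

# Verify the AND solution above: w0 = -1, w1 = w2 = 2, inputs and targets coded as +/-1.
w0, w = -1, np.array([2, 2])
for x in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    h = w0 + np.dot(w, x)
    print(x, h, 1 if h > 0 else -1)   # only (1, 1) gives h > 0, i.e. output +1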
Figure 4: Decision Boundary for AND, OR and XOR Gates [5].
5) Perceptron Learning Algorithm
wᵢ ← wᵢ + Δwᵢ, where Δwᵢ = η (t − o) xᵢ
t = c(x⃗) is the target value,
o is the perceptron output,
η is a small constant (e.g., 0.1) called the learning rate.
Convergence of Perceptron Learning Algorithm
It can be proved that the algorithm will converge if (i) the training data is linearly separable and (ii) the
learning rate is sufficiently small. The role of the learning rate is to moderate the degree to which weights
are changed at each step. It is usually set to some small value (e.g., 0.1) and is sometimes made to decay
as the number of weight-tuning iterations increases.
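For illustration (a minimal sketch with assumed values for the target, output, and input), a single application of the update rule with η = 0.1 looks like this:

import numpy as np

# One perceptron weight update; eta, w, x, t, and o below are assumed illustrative values.
eta = 0.1
w = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])
t, o = 1, -1                  # target was +1 but the perceptron output -1
w = w + eta * (t - o) * x     # delta_w = 0.1 * (1 - (-1)) * x = 0.2 * x
print(w)                      # [0.2 0.2]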
Representational Power of perceptron: A perceptron represents a hyperplane decision surface in the
n-dimensional space of examples. The perceptron outputs a 1 for examples lying on one side of the
hyperplane and outputs a -1 for examples lying on the other side.
6) Non-linearly Separable Data:
Two groups of data points are non-linearly separable in a 2-dimensional space if they cannot
be separated by a straight line.
Perceptron for XOR Gate
The challenge is that the data is non-linearly separable, as seen in the decision boundary for XOR (Fig. 4).
Figure 5: Truth table of XOR Gate [3]
Solution for XOR [6]
I. Use a multi-layer perceptron (MLP).
II. Introduce another layer between the input and the output.
III. This in-between layer is called the hidden layer.
Figure 6: Neural Network Model for learning XOR [8]
W11 = W21 = W13 = W23 = 1, W12 = W22 = −1
The threshold th for hidden node n1 is 1, for hidden node n2 is −1, and for output node n3 is 2.
Table 2: MLP for XOR (a node outputs 1 when its weighted sum ≥ its threshold th, otherwise 0)
x1  x2 | n1 (th = 1)                  | n2 (th = −1)                        | n3 = x1 XOR x2 (th = 2)
0   0  | 0·1 + 0·1 = 0 < 1, n1 = 0    | 0·(−1) + 0·(−1) = 0 ≥ −1, n2 = 1    | 0·1 + 1·1 = 1 < 2, n3 = 0
0   1  | 0·1 + 1·1 = 1 ≥ 1, n1 = 1    | 0·(−1) + 1·(−1) = −1 ≥ −1, n2 = 1   | 1·1 + 1·1 = 2 ≥ 2, n3 = 1
1   0  | 1·1 + 0·1 = 1 ≥ 1, n1 = 1    | 1·(−1) + 0·(−1) = −1 ≥ −1, n2 = 1   | 1·1 + 1·1 = 2 ≥ 2, n3 = 1
1   1  | 1·1 + 1·1 = 2 ≥ 1, n1 = 1    | 1·(−1) + 1·(−1) = −2 < −1, n2 = 0   | 1·1 + 0·1 = 1 < 2, n3 = 0
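The hand-set weights and thresholds in Table 2 can be checked with a short NumPy sketch (assuming a hard-threshold activation: a node outputs 1 when its weighted sum is at least its threshold):

import numpy as np

# Threshold-unit MLP for XOR, using the weights and thresholds of Table 2.
W_hidden = np.array([[1, -1],    # weights from x1 to n1, n2 (W11, W12)
                     [1, -1]])   # weights from x2 to n1, n2 (W21, W22)
th_hidden = np.array([1, -1])    # thresholds of n1, n2
w_out, th_out = np.array([1, 1]), 2   # weights W13, W23 into n3 and its threshold

for x in np.array([[0, 0], [0, 1], [1, 0], [1, 1]]):
    n = (x @ W_hidden >= th_hidden).astype(int)   # hidden activations n1, n2
    y = int(n @ w_out >= th_out)                  # output node n3
    print(x, "=>", y)                             # prints 0, 1, 1, 0 (XOR)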
The challenge is how to learn the parameters (weights) and thresholds automatically. The solution is
to use the gradient descent algorithm, which we will cover in Experiment 3 (Optimization of Deep
Neural Networks).
7) Power of Multilayer Perceptrons (MLP):
i. MLP can be used for both binary and multiclass classification.
ii. Regression, where the outputs are real values.
iii. Representing complex decision boundaries: an MLP with 2 hidden layers can represent complex
decision boundaries, with each perceptron responsible for realizing one straight line (see the sketch
after this list).
iv. Representing Boolean functions.
For the above reasons, the MLP is called a universal approximator.
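As an illustrative sketch of point iii (an assumed example, not from this manual): each hidden unit below realizes one straight line, and an AND-style output unit combines them, so the network classifies a triangular region.

import numpy as np

# Three threshold units, one per half-plane (x >= 0, y >= 0, x + y <= 1);
# the output unit fires only when all three fire, carving out a triangle.
def step(z):
    return (z >= 0).astype(int)

W_hidden = np.array([[1.0, 0.0, -1.0],
                     [0.0, 1.0, -1.0]])   # one column per line / hidden unit
b_hidden = np.array([0.0, 0.0, 1.0])

def classify(p):
    h = step(p @ W_hidden + b_hidden)     # which side of each line the point lies on
    return int(h.sum() >= 3)              # output unit: AND of the three hidden units

print(classify(np.array([0.2, 0.2])))  # 1: inside the triangle
print(classify(np.array([0.8, 0.8])))  # 0: violates x + y <= 1

A second hidden layer would allow combining several such convex regions into still more complex decision boundaries.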
Problem Description:
1. Implement a single-layer perceptron to classify linearly separable data (AND gate
Implementation).
Input
[0, 0]
[0, 1]
[1, 0]
[1, 1]
Methods used in Libraries
1. NumPy Functions
● np.array() – To store training input (x) and output (y) data.
● np.dot(a, b) – To compute the dot product between input vector and weights.
● np.zeros() – To initialize weights with zero values.
2. Custom Functions
● step(x) – Step activation function; returns 1 if x ≥ 0, else 0.
● fit(x, y) – Trains the perceptron using the perceptron learning rule.
● predict(x) – Generates prediction for given input using learned weights and bias.
Algorithm (Problem 1: Single-Layer Perceptron for AND)
Initialize:
● Set all weights to zero.
● Set bias to zero.
● Set learning rate (lr) and number of epochs.
Training (fit method):
● Repeat for a fixed number of epochs:
1. For each training sample (x_i, y_i):
■ Compute the weighted sum: z = (x_i · w) + b
■ Apply the step activation: y_pred = 1 if z ≥ 0, else 0
■ Compute the error: error = y_i − y_pred
■ Update the weights: w = w + η · error · x_i
■ Update the bias: b = b + η · error
(A worked first-epoch trace on the AND data is given after this algorithm.)
Prediction (predict method):
● For a given input:
○ Compute weighted sum.
○ Apply step activation to determine output (0 or 1).
Output:
● Print the predictions for all possible inputs after training.
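For intuition (a worked illustration assuming zero-initialized weights, zero bias, and η = 0.1, as in the program below), the first epoch on the AND data proceeds as follows:
● (0, 0), target 0: z = 0, y_pred = 1 (since z ≥ 0), error = −1 → w stays [0, 0], b = −0.1
● (0, 1), target 0: z = −0.1, y_pred = 0, error = 0 → no change
● (1, 0), target 0: z = −0.1, y_pred = 0, error = 0 → no change
● (1, 1), target 1: z = −0.1, y_pred = 0, error = 1 → w = [0.1, 0.1], b = 0
Repeating such epochs eventually yields weights that classify all four AND inputs correctly.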
Algorithm (Problem 2: MLP for XOR)
Initialize data:
● Inputs (X) are the 4 possible combinations of (0,1) for XOR.
● Pre-defined weights (W1, W2) and biases (b1, b2) are set manually to make the network
solve XOR.
For each input sample:
1. Hidden layer computation:
○ Compute the weighted sum: z1 = x · W1 + b1
○ Apply the sigmoid activation: a1 = σ(z1)
2. Output layer computation:
○ Compute the weighted sum: z2 = a1 · W2 + b2
○ Apply the step activation: output = 1 if z2 ≥ 0.5, else 0
Print the results for each input combination.
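For example, with the weights used in the program below (W1 = [[20, 20], [20, 20]], b1 = [−10, −30], W2 = [[20], [−20]], b2 = −10), the input [0, 1] gives z1 = [10, −10], a1 = σ(z1) ≈ [1, 0], and z2 ≈ 1·20 + 0·(−20) − 10 = 10 ≥ 0.5, so the output is 1, as required for XOR.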
Program
import numpy as np

def step(x):
    # Step activation: 1 if x >= 0, else 0
    return 1 if x >= 0 else 0

class Perceptron:
    def __init__(self, inp_dim, lr_rate=0.1, epochs=10):
        self.weight = np.zeros(inp_dim)   # weights initialized to zero
        self.bias = 0
        self.lr = lr_rate
        self.epochs = epochs

    def fit(self, x, y):
        # Perceptron learning rule: w <- w + lr * (t - o) * x
        for epoch in range(self.epochs):
            for x_i, y_i in zip(x, y):
                z = np.dot(x_i, self.weight) + self.bias
                y_pred = step(z)
                error = y_i - y_pred
                self.weight += self.lr * error * x_i
                self.bias += self.lr * error

    def predict(self, x):
        z = np.dot(x, self.weight) + self.bias
        return step(z)

# AND gate training data
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

model = Perceptron(inp_dim=2)
model.fit(x, y)
for i in x:
    print(i, " => ", model.predict(i))
Output
2. Design and train an MLP for solving XOR Problem.
Input
[0, 0]
[0, 1]
[1, 0]
[1, 1]
Methods used in Libraries
1. NumPy Functions
● np.array() – To create matrices and vectors for inputs, weights, and biases.
● np.dot(a, b) – For matrix multiplication between inputs/activations and weights.
● np.exp() – Used inside the sigmoid() function to calculate exponentials.
2. Custom Functions
● sigmoid(x) – Activation function for the hidden layer, squashes values into (0, 1).
● step(x) – Threshold function for the final output; returns 1 if x ≥ 0.5, else 0.
Program
import numpy as np

def sigmoid(x):
    # Sigmoid activation for the hidden layer
    return 1 / (1 + np.exp(-x))

def step(x):
    # Threshold the output at 0.5
    return 1 if x >= 0.5 else 0

# XOR inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Manually chosen weights and biases that solve XOR
W1 = np.array([[20, 20], [20, 20]])   # input -> hidden
b1 = np.array([-10, -30])
W2 = np.array([[20], [-20]])          # hidden -> output
b2 = -10

print("MLP XOR Gate Output:")
for x in X:
    z1 = np.dot(x, W1) + b1    # hidden layer weighted sum
    a1 = sigmoid(z1)           # hidden layer activation
    z2 = np.dot(a1, W2) + b2   # output layer weighted sum
    output = step(z2)
    print(x, " => ", output)
Output
Post Lab Questions to be attempted:
References:
Study Materials:
1. Dive into Deep Learning by Aston Zhang, Zachary C. Lipton, Mu Li, Alexander J. Smola. https://d2l.ai/chapter_introduction/index.html
2. Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville. https://www.deeplearningbook.org/
3. Deep Learning with Python by Francois Chollet. 1st Edition. Manning Publications. https://livebook.manning.com/book/deep-learning-with-python/part-1/
4. Minsky and Papert, Perceptrons. https://rodsmith.nz/wp-content/uploads/Minsky-and-Papert-Perceptrons.pdf
Online repositories:
1. https://medium.com/@lmpo/the-evolution-of-artificial-neurons-90619f224f63
2. https://www.gabormelli.com/RKB/Perceptron_Function
3. https://medium.com/@stanleydukor/neural-representation-of-and-or-not-xor-and-xnor-logic-gates-perceptron-algorithm-b0275375fea1
4. https://www.geeksforgeeks.org/implementation-of-perceptron-algorithm-for-and-logic-gate-with-2-bit-binary-input/
5. https://towardsdatascience.com/the-definitive-perceptron-guide-fd384eb93382
6. https://priyansh-kedia.medium.com/solving-the-xor-problem-using-mlp-83e35a22c96f
7. https://www.geeksforgeeks.org/implementation-of-perceptron-algorithm-for-xor-logic-gate-with-2-bit-binary-input/
8. https://codingvision.net/c-backpropagation-tutorial-xor
Video Channels:
1. https://www.youtube.com/watch?v=YOpzVjSzi3c 2.2 AND gate using Perceptron OU Education
2. https://www.youtube.com/watch?v=dM_8Y41EgsY Perceptron Rule to design XOR Logic Gate
Solved Example ANN Machine Learning by Mahesh Huddar
3. https://www.youtube.com/watch?v=yk5-Dffz22Y Perceptron Learning Algorithm Artificial
Neural Network ANN Machine Learning by Mahesh Huddar
4. https://www.youtube.com/watch?v=wQ8BIBpya2k Deep Learning with Python, TensorFlow, and
Keras tutorial @sentdex
5. https://www.youtube.com/watch?v=aOEoxyA4uXU How to implement Perceptron from scratch
with Python @AssemblyAI
6. https://www.youtube.com/watch?v=pKW7Z7JZosA The Roots of AI: XOR Problem (1969)
@thetimesofai