12302080503005
Shah Meet
Practical 1
Write code to read a dataset using an appropriate Python library and display it.
Code:
import pandas as pd

# Load the dataset into a DataFrame
df = pd.read_csv("loan_dataset.csv")

print(df.size)   # total number of elements (rows x columns)
print(df.shape)  # (number of rows, number of columns)
print(df.ndim)   # number of axes; 2 for a DataFrame
print(df.describe(include='all'))  # summary statistics for every column
print(df)
Output:
Practical 2
Write code to implement the Perceptron learning algorithm for the following logic gates:
1. AND:
Code:
a = [(0, 0), (0, 1), (1, 0), (1, 1)]
result = (0, 0, 0, 1)
w0 = 0.9
w1 = 0.9
th = 2.0  # a positive threshold is required for AND: with a negative threshold the (0, 0) input always fires, and since both inputs are zero the weight updates can never correct it
Learning_Rate = 0.1
epochs = 200

for j in range(epochs):
    global_delta = 0
    for i in range(len(result)):
        z = w0 * a[i][0] + w1 * a[i][1]
        if z > th:
            output = 1
        else:
            output = 0
        delta = result[i] - output  # perceptron error for this data point
        w0 = w0 + Learning_Rate * delta * a[i][0]
        w1 = w1 + Learning_Rate * delta * a[i][1]
        global_delta = global_delta + abs(delta)
        print(f"Epoch {j+1}, Data Point {i+1}: w0={w0:.4f}, w1={w1:.4f}, delta={delta:.4f}, z={z:.4f}, output={output}")
    print(f"Global Delta after Epoch {j+1}: {global_delta:.4f}")
    if global_delta == 0:  # every point classified correctly
        break

print("\nfinal_weights:")
print(f"w0: {w0}, w1: {w1}")
for i in range(len(a)):
    z = w0 * a[i][0] + w1 * a[i][1]
    print(f"input={a[i]}, output={1 if z > th else 0}")
Output:
2. OR:
Code:
a = [(0, 0), (0, 1), (1, 0), (1, 1)]
result = (0, 1, 1, 1)
w0 = 0.9
w1 = 0.9
th = 2.0
Learning_Rate = 0.1
epochs = 200

for j in range(epochs):
    global_delta = 0
    for i in range(len(result)):
        z = w0 * a[i][0] + w1 * a[i][1]
        if z > th:
            output = 1
        else:
            output = 0
        delta = result[i] - output
        w0 = w0 + Learning_Rate * delta * a[i][0]
        w1 = w1 + Learning_Rate * delta * a[i][1]
        global_delta = global_delta + abs(delta)
        print(f"Epoch {j+1}, Data Point {i+1}: w0={w0:.4f}, w1={w1:.4f}, delta={delta:.4f}, z={z:.4f}, output={output}")
    print(f"Global Delta after Epoch {j+1}: {global_delta:.4f}")
    if global_delta == 0:
        break

print("\nfinal_weights:")
print(f"w0: {w0}, w1: {w1}")
for i in range(len(a)):
    z = w0 * a[i][0] + w1 * a[i][1]
    print(f"input={a[i]}, output={1 if z > th else 0}")
Output:
3. NOT:
Code:
# NOT has a single input, so a single weight suffices
a = [(0,), (1,)]
result = (1, 0)
w0 = 0.9
th = -0.5  # the threshold must be negative so that input 0 (z = 0) produces output 1
Learning_Rate = 0.1
epochs = 200

for j in range(epochs):
    global_delta = 0
    for i in range(len(a)):
        z = w0 * a[i][0]
        if z > th:
            output = 1
        else:
            output = 0
        delta = result[i] - output
        w0 = w0 + Learning_Rate * delta * a[i][0]
        global_delta = global_delta + abs(delta)
        print(f"Epoch {j+1}, Data Point {i+1}: w0={w0:.4f}, delta={delta:.4f}, z={z:.4f}, output={output}")
    print(f"Global Delta after Epoch {j+1}: {global_delta:.4f}")
    if global_delta == 0:
        break

print("\nfinal_weights:")
print(f"w0: {w0}")
for i in range(len(a)):
    z = w0 * a[i][0]
    print(f"input={a[i][0]}, output={1 if z > th else 0}")
Output:
Practical 3
Implementation of a multi-layer network and study of various parameters for an application
Code:
import numpy as np

inputs = np.array([[2, 3]])
expected_output = np.array([[1]])
epochs = 19
lr = 0.1

inputLayerNeurons, hiddenLayerNeurons, outputLayerNeurons = 2, 2, 1

hidden_weights = np.array([[.11, .12], [.21, .08]])
output_weights = np.array([[.14], [.15]])

print("Initial hidden weights: ", end='')
print(*hidden_weights)
print("Initial output weights: ", end='')
print(*output_weights)

# training algorithm
for _ in range(epochs):
    # forward pass (no activation function, so the network is purely linear)
    hidden_layer_output = np.dot(inputs, hidden_weights)
    print(hidden_layer_output)
    predicted_output = np.dot(hidden_layer_output, output_weights)
    print(predicted_output)

    # backpropagation
    delta = predicted_output - expected_output
    print(delta)
    error = 0.5 * (predicted_output - expected_output) ** 2
    error_hidden_layer = np.dot(delta, output_weights.T)
    print(error)
    output_weights -= hidden_layer_output.T.dot(delta) * lr
    print(output_weights)
    hidden_weights -= inputs.T.dot(error_hidden_layer) * lr
    print(hidden_weights)

print("----------------")
print("Final hidden_weights: ", end='')
print(*hidden_weights)
print("Final output_weights: ", end='')
print(*output_weights)
print("\nOutput from the neural network after final epoch: ", end='')
print(*predicted_output)
Output:
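Note that the network above applies no activation function, so it can only represent a linear mapping from inputs to output. One parameter variation worth studying is adding a nonlinearity. The following is a minimal sketch (an illustrative variant, not part of the original practical) of the same two-layer training loop with a sigmoid activation and the matching gradient terms:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[2.0, 3.0]])   # same input as above
t = np.array([[1.0]])        # same target as above
lr = 0.1
W1 = np.array([[0.11, 0.12], [0.21, 0.08]])
W2 = np.array([[0.14], [0.15]])

for _ in range(19):
    h = sigmoid(x @ W1)                      # hidden activations
    y = sigmoid(h @ W2)                      # network output
    delta2 = (y - t) * y * (1 - y)           # output-layer error term
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # hidden-layer error term
    W2 -= lr * h.T @ delta2
    W1 -= lr * x.T @ delta1

print(y)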
Practical 4
Study of TensorFlow, Keras, and PyTorch Frameworks
Deep learning frameworks are libraries that simplify the development and training of complex
neural networks. These frameworks provide pre-built and optimized components such as tensor
manipulation, automatic differentiation, neural network layers, optimizers, and GPU acceleration
support. Among these, TensorFlow, Keras, and PyTorch are the most widely used in both
academic research and industrial applications.
TensorFlow:
Developer: Google Brain
Initial Release: 2015
Language: Python, C++ (with bindings for other languages)
Latest Version: TensorFlow 2.x
Overview:
TensorFlow is an end-to-end open-source platform for machine learning. It offers a
comprehensive ecosystem with tools for model building, training, and deployment at scale.
TensorFlow was initially built around static computational graphs; TensorFlow 2.x adopted a more dynamic, Pythonic approach and integrated Keras as its official high-level API.
Key Features:
• Ecosystem: TensorFlow Lite (mobile), TensorFlow.js (browser), TensorBoard
(visualization), TF-Serving (deployment).
• Performance: Highly optimized for CPU and GPU usage, supports TPU acceleration.
• Scalability: Enables distributed training across GPUs, TPUs, and multiple devices.
• Serialization: Models can be saved and exported for use across platforms using the
SavedModel format.
• Automatic Differentiation: Supports gradient calculation for backpropagation using tf.GradientTape (a minimal sketch follows this list).
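The snippet below is a minimal sketch of tf.GradientTape: it records the forward computation and then differentiates y = x^2 at x = 3, yielding 6.0.

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:  # operations on the variable x are recorded
    y = x * x
dy_dx = tape.gradient(y, x)      # d(x^2)/dx = 2x = 6.0
print(dy_dx)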
Use Cases:
• Industrial deployment (e.g., Google Search, Gmail)
• Research requiring production-level scalability
• Time series prediction, object detection, NLP
Keras:
Developer: François Chollet
Initial Release: 2015
Language: Python
Latest Version: Integrated with TensorFlow 2.x (tf.keras)
Overview:
Keras is a high-level neural networks API that enables fast experimentation. It was originally
designed to be modular and extensible, running on top of multiple backends like Theano, CNTK,
and TensorFlow. With the release of TensorFlow 2.x, Keras became tightly integrated and is now
the official high-level API for TensorFlow.
Key Features:
• User-Friendly: Intuitive API for beginners and researchers alike.
• Modular Design: Each component (layer, loss, optimizer) is a standalone module that can
be reused.
• Pre-trained Models: Supports many state-of-the-art models (e.g., ResNet, MobileNet); a minimal sketch follows this list.
• Rapid Prototyping: Simplifies model development with functions like Sequential and
Model.
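As a minimal sketch of the pre-trained model support, the call below loads MobileNet with ImageNet weights (the weights are downloaded on first use; preprocessing of real input images is omitted here):

from tensorflow import keras

model = keras.applications.MobileNet(weights='imagenet')
print(model.name, len(model.layers))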
Strengths:
• Simplified training using .fit(), .evaluate(), and .predict() (a minimal sketch follows this list).
• Clean syntax and readable code, ideal for education and experimentation.
• Strong integration with TensorFlow tools like TensorBoard and tf.data.
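A minimal sketch of this workflow on synthetic data (the layer sizes, epoch count, and random data are illustrative assumptions):

import numpy as np
from tensorflow import keras

x = np.random.rand(100, 4).astype('float32')
y = np.random.randint(0, 2, size=(100, 1))

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(x, y, epochs=5, verbose=0)         # training
loss, acc = model.evaluate(x, y, verbose=0)  # evaluation
preds = model.predict(x[:3], verbose=0)      # inference
print(loss, acc, preds.shape)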
Use Cases:
• Educational purposes
• Small to medium research experiments
• Quick model prototyping and testing
PyTorch:
Developer: Facebook AI Research (FAIR)
Initial Release: 2016
Language: Python (with C++ backend)
Latest Version: PyTorch 2.x
Overview:
PyTorch is a popular open-source deep learning framework known for its dynamic computation
graph, making it more intuitive and easier to debug compared to static graph frameworks. Its
imperative style is closer to Python’s native programming flow, making it highly suitable for
research.
Key Features:
• Dynamic Computation Graphs: Graphs are built on-the-fly at runtime, providing more flexibility (a minimal sketch follows this list).
• Pythonic: Tightly integrated with Python, enabling seamless debugging and
customization.
• TorchScript: Allows transition from eager mode to static graph for deployment.
• ONNX Support: Enables export of PyTorch models for cross-framework compatibility.
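The fragment below is a minimal sketch of what "dynamic" means in practice: the graph is assembled as the Python code runs, so ordinary control flow can change its shape on every pass.

import torch

x = torch.tensor(3.0, requires_grad=True)
if x > 0:        # plain Python branching decides the graph's shape
    y = x * x
else:
    y = -x
y.backward()     # autograd traverses the recorded graph
print(x.grad)    # tensor(6.) since dy/dx = 2x at x = 3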
Libraries & Tools:
• TorchVision, TorchText, TorchAudio for domain-specific datasets and models
• Lightning / Ignite / HuggingFace support for higher-level abstractions
• Native AMP (Automatic Mixed Precision) for better training performance on GPUs
Use Cases:
• Cutting-edge AI research
• Reinforcement learning
• Custom architectures requiring runtime control
Feature                 TensorFlow                          Keras                     PyTorch
Graph Type              Static (TF1), Dynamic (TF2)         Static                    Dynamic
Ease of Use             Moderate                            Very Easy                 Easy
Model Debugging         Less Intuitive (TF1), Better (TF2)  Very Intuitive            Excellent
Flexibility             High                                Moderate                  Very High
Deployment Tools        TensorFlow Lite, TFX                Yes (via TF)              TorchScript, ONNX
GPU Support             Excellent (CUDA, TPU)               Excellent                 Excellent (CUDA)
Community & Ecosystem   Large, Production Focused           Strong, growing rapidly   Strong, Research Focused
Learning Curve          Steep (TF1), Moderate (TF2)         Low                       Moderate