SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
COLLEGE OF SCIENCE AND HUMANITIES
DEPARTMENT OF COMPUTER APPLICATIONS
PRACTICAL RECORD NOTE
STUDENT NAME : Shobica A
REGISTER NUMBER : RA2132014010057
CLASS : M.Sc. ADS SECTION : B
YEAR & SEMESTER : II Year 3rd Semester
SUBJECT CODE : PAD21301J
SUBJECT TITLE : DEEP LEARNING FOR DATA SCIENCE LABORATORY
NOVEMBER 2022
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
COLLEGE OF SCIENCE AND HUMANITIES
DEPARTMENT OF COMPUTER APPLICATIONS
SRM Nagar, Kattankulathur – 603 203
CERTIFICATE
Certified to be the bonafide record of practical work done by
___Shobica A___ Register No. RA2132014010057 of M.Sc. Applied
Data Science Degree course for PAD21301J – DEEP LEARNING FOR
DATA SCIENCE LABORATORY in the Computer Lab at SRM Institute of Science
and Technology during the academic year 2022-2023.
Staff In-charge                                  Head of the Department
Submitted for Semester Practical Examination held on
__________________.
Internal Examiner
External Examiner
INDEX
S.No  TITLE OF THE EXPERIMENT                                                        Page No.  Staff Sign.
1.  Implement a Perceptron in Python
2.  Implement a Feed Forward Neural Network with Back propagation training algorithm for realizing XOR problem
3.  Build a NN model using PyTorch
4.  Implement ANN Training in Python for MNIST Digit Classification problem
5.  Perform Hyper parameter tuning in an ANN model
6.  Implement LVQ Network for Pattern Classification
7.  Work on a text classification problem with Keras API Dataset for Neural Network
8.  Implement Batch Normalization and gauge its performance
9.  Using Keras, perform rate adaption schedule
10. Build a CNN model for Image Classification
11. Build a DL model for diabetes classification problem
12. Design and build a Game environment
EX.NO: 1
DATE: 13/7/22
Implement a Perceptron in Python
AIM:
To write a Python program to implement the perceptron algorithm.
ALGORITHM (OR) PROCEDURE:
# Importing Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
# AND Data
X = np.array([
[0, 0],
[0, 1],
[1, 0],
[1, 1]
])
y = np.array([0, 0, 0, 1])
# Input shape
input_size = X.shape[1]
# Initializing the weights and bias as one
parameters = np.ones(input_size + 1)
# Activation function
def activation_function(z):
    '''
    Returns 1 if the value is zero or positive,
    otherwise returns 0.
    '''
    if z >= 0:
        return 1
    else:
        return 0
# plotting the decision boundary
def plot(epoch):
    m = -(parameters[1]/parameters[2])
    c = -(parameters[0]/parameters[2])
    x_input = np.linspace(-3,3,100)
    y_input = m*x_input + c
    plt.figure(figsize=(5,3))
    plt.plot(x_input,y_input,color='red',linewidth=3)
    plt.scatter(X[:,0],X[:,1],c=y,cmap='winter',s=200)
    plt.suptitle(f'epoch {epoch}')
    plt.ylim(-1,2)
    plt.xlim(-1,2)
# Training the model
epochs = 7 # no of epochs
learning_rate = 0.2 # controls the pace of the learning (weight)
for epoch in range(epochs):
    for i in range(X.shape[0]):
        x = np.insert(X[i], 0, 1)  # prepend 1 for the bias weight
        print(x)
        z = parameters.dot(x)  # z = b + w1*x1 + w2*x2
        print(z)
        y_hat = activation_function(z)
        print(y_hat)
        e = y[i] - y_hat
        print(e)
        # updating the weights and bias
        parameters = parameters + learning_rate * e * x
    # plotting the decision boundary for better understanding
    plot(epoch)
# weights and bias
parameters
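As a quick check, the learned parameters can be applied to every AND input; this is a minimal verification sketch (not part of the recorded run) using X, y, parameters and activation_function as defined above.
# Hedged verification sketch: predict each AND input with the learned weights
for row, target in zip(X, y):
    z = parameters.dot(np.insert(row, 0, 1))  # prepend 1 for the bias weight
    print(row, '->', activation_function(z), 'expected', target)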
OUTPUT:
[1 0 0]
1.0
1
-1
[1 0 1]
1.8
1
-1
[1 1 0]
1.6
1
-1
0
[1 0 1]
-0.1999999999999999
0
0
[1 1 0]
-0.1999999999999999
0
0
[1 1 1]
0.20000000000000018
1
0
RESULT:
Hence, the perceptron algorithm is implemented successfully.
EX.NO: 2
DATE: 21/7/22
Implement a Feed Forward Neural Network with Back propagation training
algorithm for realizing XOR problem
AIM:
To write a Python program implementing a feed forward neural network with a backpropagation
training algorithm for realizing the XOR problem.
ALGORITHM (OR) PROCEDURE:
import numpy as np
def sigmoid(x):  # Activation function
    return 1/(1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)
#Input datasets
inputs = np.array([[0,0],[0,1],[1,0],[1,1]]) #xor values
expected_output = np.array([[0],[1],[1],[0]])
epochs = 10000
lr = 1
inputLayerNeurons, hiddenLayerNeurons, outputLayerNeurons = 2,3,1  # no. of nodes in each layer
#assigning random weights and bias initialization
hidden_weights = np.random.uniform(size=(inputLayerNeurons,hiddenLayerNeurons))
hidden_bias =np.random.uniform(size=(1,hiddenLayerNeurons))
output_weights = np.random.uniform(size=(hiddenLayerNeurons,outputLayerNeurons))
output_bias = np.random.uniform(size=(1,outputLayerNeurons))
print("Initial hidden weights: ",end='')
print(*hidden_weights)
print("Initial hidden biases: ",end='')
print(*hidden_bias)
print("Initial output weights: ",end='')
print(*output_weights)
print("Initial output biases: ",end='')
print(*output_bias)
#Training algorithm
for _ in range(epochs):
    # Forward Propagation
    hidden_layer_activation = np.dot(inputs,hidden_weights)
    hidden_layer_activation += hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_activation)
    output_layer_activation = np.dot(hidden_layer_output,output_weights)
    output_layer_activation += output_bias
    predicted_output = sigmoid(output_layer_activation)
    # Backpropagation
    error = expected_output - predicted_output
    d_predicted_output = error * sigmoid_derivative(predicted_output)
    error_hidden_layer = d_predicted_output.dot(output_weights.T)
    d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)
    # Updating Weights and Biases
    output_weights += hidden_layer_output.T.dot(d_predicted_output) * lr
    output_bias += np.sum(d_predicted_output,axis=0,keepdims=True) * lr
    hidden_weights += inputs.T.dot(d_hidden_layer) * lr
    hidden_bias += np.sum(d_hidden_layer,axis=0,keepdims=True) * lr
print("Final hidden weights: ",end='')
print(*hidden_weights)
print("Final hidden bias: ",end='')
print(*hidden_bias)
print("Final output weights: ",end='')
print(*output_weights)
print("Final output bias: ",end='')
print(*output_bias)
print("\nOutput from neural network after 10,000 epochs: ",end='')
print(*predicted_output)
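As a minimal follow-up sketch (not part of the recorded run), the sigmoid outputs can be thresholded at 0.5 to recover the XOR truth table; inputs and predicted_output are as defined above.
# Hedged sketch: threshold the network outputs to get hard XOR predictions
for sample, prob in zip(inputs, predicted_output):
    print(sample, '->', int(prob[0] > 0.5))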
OUTPUT:
Initial hidden weights: [0.13613836 0.00754638 0.94823406] [0.40611622 0.0992042 0.67573325]
Initial hidden biases: [0.51832308 0.36281206 0.18131413]
Initial output weights: [0.28009879] [0.98647565] [0.53423953]
Initial output biases: [0.63896191]
Final hidden weights: [ 3.35342538 -1.58272688 6.1179609 ] [ 3.62838323 -0.4687834 6.0793803 ]
Final hidden bias: [-5.32783835 0.90616134 -2.55454993]
Final output weights: [-7.95103106] [1.32264816] [7.88399793]
Final output bias: [-4.30744287]
Output from neural network after 10,000 epochs: [0.05550374] [0.94906113] [0.94451577] [0.05852947]
RESULT:
Therefore, the feed forward neural network with backpropagation training for the XOR problem
is implemented successfully.
EX.NO: 3
DATE: 27/7/22
Build a Neural Network model using PyTorch
AIM:
To write a Python program to build a neural network model using PyTorch.
ALGORITHM (OR) PROCEDURE:
# importing libraries
import torch
import numpy as np
import pandas as pd
import torch.nn as nn
from sklearn import datasets
from sklearn.model_selection import train_test_split
# iris dataset
x, y = datasets.load_iris(return_X_y=True)
y = np.array(pd.get_dummies(y))
x = torch.Tensor(x)
y = torch.Tensor(y)
# splitting the data into training and testing
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=32)
# no of input layers
n_input_layer = 4
# no of hidden layers
n_hidden_layer = 30
# no of output layers
n_output_layer = 3
# learning rate
learning_rate = 0.01
# model architecture
model = nn.Sequential(nn.Linear(n_input_layer, n_hidden_layer),
nn.ReLU(),
nn.Linear(n_hidden_layer, n_output_layer),
nn.Softmax())
print(model)
# loss function
loss_function = nn.CrossEntropyLoss()
# optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
train_loss = []
test_loss = []
for epoch in range(5000):
    # training
    train_pred = model(x_train)
    loss = loss_function(train_pred, y_train)
    train_loss.append(loss.item())
    model.zero_grad()
    loss.backward()
    optimizer.step()
    # testing
    test_pred = model(x_test)
    loss = loss_function(test_pred, y_test)
    test_loss.append(loss.item())
import matplotlib.pyplot as plt
plt.plot(train_loss, c='r')
plt.plot(test_loss, c='b')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.title("Learning rate %f" % (learning_rate))
plt.legend()
plt.show()
# accuracy
def accuracy(prediction, y):
    prediction = prediction.detach().numpy()
    y = y.detach().numpy()
    count = 0
    for i in range(len(y)):
        if np.argmax(prediction[i]) == np.argmax(y[i]):
            count += 1
    return (count/len(y))*100
accuracy(test_pred, y_test)
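The loop-based accuracy above can also be computed in a vectorised way; a minimal sketch assuming the model and tensors defined above.
# Hedged sketch: vectorised accuracy without an explicit Python loop
with torch.no_grad():  # gradients are not needed for evaluation
    pred_labels = model(x_test).argmax(dim=1)
    true_labels = y_test.argmax(dim=1)
print((pred_labels == true_labels).float().mean().item() * 100)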
OUTPUT:
Sequential(
(0): Linear(in_features=4, out_features=30, bias=True)
(1): ReLU()
(2): Linear(in_features=30, out_features=3, bias=True)
(3): Softmax(dim=None)
)
WARNING:matplotlib.legend:No handles with labels found to put in legend.
100.0
RESULT:
Hence, a neural network model using PyTorch is created successfully.
EX.NO: 4
DATE: 3/8/22
Implement ANN Training in Python for MNIST Digit Classification problem
AIM:
To write a Python program to implement ANN training for the MNIST digit
classification problem.
ALGORITHM (OR) PROCEDURE:
#IMPORT LIBRARIES
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from tensorflow import keras
from sklearn.metrics import accuracy_score, confusion_matrix
#LOADING THE MNIST HANDWRITING DATA
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train.shape
x_train[1]
x_train[1].shape
#NORMALIZE THE IMAGE TO 0 - 1
x_train = x_train/255
x_test = x_test/255
#PLOTTING THE IMAGE
plt.imshow(x_train[1]);
#RESHAPING THE PIXELS FROM 28X28 TO 784
x_train = np.reshape(x_train, (60000, 784))
x_test = np.reshape(x_test, (10000, 784))
print(x_train.shape)
print(x_test.shape)
model = keras.models.Sequential(
[
keras.layers.Dense(units=512, activation='relu', input_shape=(784,)),
keras.layers.Dense(units=200, activation='relu'),
keras.layers.Dense(units=10, activation='softmax')
]
)
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
model.fit(x_train, y_train, epochs=2)
model.evaluate(x_test, y_test)
prediction = model.predict(x_test)
predictions = []
for i in range(len(y_test)):
    predictions.append(np.argmax(prediction[i]))
matrix = confusion_matrix(predictions, y_test)
import seaborn as sns
plt.figure(figsize=(10,6))
sns.heatmap(matrix, annot=True);
matrix
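As an optional sketch (assuming the arrays defined above), the first misclassified test digit can be displayed for inspection.
# Hedged sketch: show the first test digit the model got wrong
wrong = [i for i in range(len(y_test)) if predictions[i] != y_test[i]]
if wrong:
    j = wrong[0]
    plt.imshow(x_test[j].reshape(28, 28), cmap='gray')
    plt.title(f'predicted {predictions[j]}, actual {y_test[j]}')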
OUTPUT:
(60000, 28, 28)
(28, 28)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 51, 159, 253, 159, 50, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 238, 252, 252, 252, 237, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 54, 227, 253, 252, 239, 233, 252, 57, 6, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 60, 224, 252, 253, 252, 202, 84, 252, 253, 122, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 163, 252, 252, 252, 253, 252, 252, 96, 189, 253, 167, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 51, 238, 253, 253, 190, 114, 253, 228, 47, 79, 255, 168, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 238, 252, 252, 179, 12, 75, 121, 21, 0, 0, 253, 243, 50, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 38, 165, 253, 233, 208, 84, 0, 0, 0, 0, 0, 0, 253, 252, 165, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 7, 178, 252, 240, 71, 19, 28, 0, 0, 0, 0, 0, 0, 253, 252, 195, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 57, 252, 252, 63, 0, 0, 0, 0, 0, 0, 0, 0, 0, 253, 252, 195, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 198, 253, 190, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 253, 196, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 76, 246, 252, 112, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 253, 252, 148, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 85, 252, 230, 25, 0, 0, 0, 0, 0, 0, 0, 0, 7, 135, 253, 186, 12, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 85, 252, 223, 0, 0, 0, 0, 0, 0, 0, 0, 7, 131, 252, 225, 71, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 85, 252, 145, 0, 0, 0, 0, 0, 0, 0, 48, 165, 252, 173, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 86, 253, 225, 0, 0, 0, 0, 0, 0, 114, 238, 253, 162, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 85, 252, 249, 146, 48, 29, 85, 178, 225, 253, 223, 167, 56, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 85, 252, 252, 252, 229, 215, 252, 252, 252, 196, 130, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 28, 199, 252, 252, 253, 252, 252, 233, 145, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 25, 128, 252, 253, 252, 141, 37, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
      dtype=uint8)
(60000, 784)
(10000, 784)
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_6 (Dense) (None, 512) 401920
dense_7 (Dense) (None, 200) 102600
dense_8 (Dense) (None, 10) 2010
=================================================================
Total params: 506,530
Trainable params: 506,530
Non-trainable params: 0
Epoch 1/2
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1871 -
accuracy: 0.9428
Epoch 2/2
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0796 -
accuracy: 0.9749
<keras.callbacks.History at 0x7fe740b96390>
313/313 [==============================] - 1s 3ms/step - loss: 0.0858 - accuracy:
0.9728
[0.08580448478460312, 0.9728000164031982]
array([[ 972,    0,    4,    0,    1,    3,   10,    1,    2,    2],
       [   1, 1127,    0,    0,    2,    3,    4,    8,    0,    5],
       [   2,    3, 1017,   10,    5,    0,    0,   12,    6,    1],
       [   0,    1,    0,  989,    0,   41,    1,    2,    6,    6],
       [   0,    0,    0,    0,  967,    1,    2,    4,    8,   20],
       [   0,    1,    0,    0,    0,  822,    1,    0,    1,    0],
       [   2,    1,    1,    0,    4,    6,  939,    0,    1,    1],
       [   1,    0,    5,    4,    1,    1,    0,  987,    3,    3],
       [   2,    2,    5,    3,    0,   12,    1,    2,  941,    4],
       [   0,    0,    0,    4,    2,    3,    0,   12,    6,  967]])
RESULT:
Hence, ANN training in Python for the MNIST digit classification problem is implemented
successfully.
EX.NO: 5
DATE: 16/8/22
Perform Hyper parameter tuning in an ANN model
AIM:
To write a Python program to perform hyperparameter tuning in an ANN model.
ALGORITHM (OR) PROCEDURE:
import numpy as np
import keras
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense,Flatten
inputs = np.array([[0,0],[0,1],[1,0],[1,1]])
expected_output = np.array([[0],[1],[1],[0]])
inputs[0].shape, expected_output.shape
# Create
tf.random.set_seed(42)
model_xor = Sequential()
model_xor.add(Flatten(input_shape = (2, 1)))
model_xor.add(Dense(2, activation='tanh'))
model_xor.add(Dense(1, activation='relu'))
# Compile
model_xor.compile(loss = tf.keras.losses.binary_crossentropy,
optimizer = tf.keras.optimizers.SGD(lr=0.0001),
metrics = ['accuracy'])
# Fit
model_xor.fit(inputs, expected_output, epochs = 20)
# Predict
# np.round(model_xor.predict(np.array([[0, 1]])))
# Create
tf.random.set_seed(42)
model_xor = Sequential()
model_xor.add(Flatten(input_shape = (2, 1)))
model_xor.add(Dense(2, activation='relu'))
model_xor.add(Dense(1, activation='relu'))
# Compile
model_xor.compile(loss = tf.keras.losses.binary_crossentropy,
optimizer = tf.keras.optimizers.SGD(lr=0.0001),
metrics = ['accuracy'])
# Fit
model_xor.fit(inputs, expected_output, epochs = 20)
# Compile
model_xor.compile(loss = tf.keras.losses.binary_crossentropy,
optimizer = tf.keras.optimizers.SGD(lr=0.01),
metrics = ['accuracy'])
# Fit
model_xor.fit(inputs, expected_output, epochs = 20)
# Compile
model_xor.compile(loss = tf.keras.losses.binary_crossentropy,
optimizer = tf.keras.optimizers.SGD(lr=0.01),
metrics = ['accuracy'])
# Fit
model_xor.fit(inputs, expected_output, epochs = 30)
# Compile
model_xor.compile(loss = tf.keras.losses.binary_crossentropy,
optimizer = tf.keras.optimizers.SGD(lr=0.01),
metrics = ['accuracy'])
# Fit
model_xor.fit(inputs, expected_output, epochs = 25, batch_size = 2)
# Create
tf.random.set_seed(42)
model_xor = Sequential()
model_xor.add(Flatten(input_shape = (2, 1)))
model_xor.add(Dense(4, activation='relu'))
model_xor.add(Dense(1, activation='relu'))
# Compile
model_xor.compile(loss = tf.keras.losses.binary_crossentropy,
optimizer = tf.keras.optimizers.Adam(lr=0.01),
metrics = ['accuracy'])
# Fit
model_xor.fit(inputs, expected_output, epochs = 25)
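The repeated compile-and-fit cells above can be condensed into one search loop; a hedged sketch in which the candidate learning rates are illustrative, not from the recorded run.
# Hedged sketch: compare a few candidate learning rates in one loop
for rate in [0.0001, 0.01, 0.1]:
    tf.random.set_seed(42)
    m = Sequential([Flatten(input_shape = (2, 1)),
                    Dense(4, activation='relu'),
                    Dense(1, activation='relu')])
    m.compile(loss = tf.keras.losses.binary_crossentropy,
              optimizer = tf.keras.optimizers.SGD(learning_rate=rate),
              metrics = ['accuracy'])
    h = m.fit(inputs, expected_output, epochs = 25, verbose = 0)
    print(rate, '->', h.history['accuracy'][-1])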
OUTPUT:
((2,), (4, 1))
Epoch 1/20
1/1 [==============================] - 0s 384ms/step - loss: 4.2552 - accuracy:
0.5000
Epoch 2/20
1/1 [==============================] - 0s 10ms/step - loss: 4.2550 - accuracy:
0.5000
Epoch 3/20
1/1 [==============================] - 0s 9ms/step - loss: 4.2549 - accuracy:
0.5000
Epoch 4/20
1/1 [==============================] - 0s 8ms/step - loss: 4.2549 - accuracy:
0.5000
Epoch 5/20
1/1 [==============================] - 0s 8ms/step - loss: 4.2548 - accuracy:
0.5000
Epoch 19/20
1/1 [==============================] - 0s 10ms/step - loss: 4.2535 - accuracy:
0.5000
Epoch 20/20
1/1 [==============================] - 0s 10ms/step - loss: 4.2534 - accuracy:
0.5000
<keras.callbacks.History at 0x7fc977a8fd90>
Epoch 1/20
1/1 [==============================] - 1s 526ms/step - loss: 4.2472 - accuracy:
0.5000
Epoch 2/20
1/1 [==============================] - 0s 10ms/step - loss: 2.7164 - accuracy:
0.5000
Epoch 3/20
1/1 [==============================] - 0s 11ms/step - loss: 0.6708 - accuracy:
0.5000
Epoch 4/20
1/1 [==============================] - 0s 11ms/step - loss: 0.6707 - accuracy:
0.5000
Epoch 5/20
1/1 [==============================] - 0s 10ms/step - loss: 0.6707 - accuracy:
0.5000
Epoch 19/20
1/1 [==============================] - 0s 8ms/step - loss: 0.6698 - accuracy:
0.5000
Epoch 20/20
1/1 [==============================] - 0s 10ms/step - loss: 0.6698 - accuracy:
0.5000
<keras.callbacks.History at 0x7fc977992610>
Epoch 1/20
1/1 [==============================] - 0s 357ms/step - loss: 0.6697 - accuracy:
0.5000
Epoch 2/20
1/1 [==============================] - 0s 12ms/step - loss: 0.6640 - accuracy:
0.5000
Epoch 3/20
1/1 [==============================] - 0s 10ms/step - loss: 0.6587 - accuracy:
0.7500
Epoch 4/20
1/1 [==============================] - 0s 11ms/step - loss: 0.6539 - accuracy:
0.7500
Epoch 5/20
1/1 [==============================] - 0s 10ms/step - loss: 0.6495 - accuracy:
0.7500
Epoch 19/20
1/1 [==============================] - 0s 10ms/step - loss: 0.6115 - accuracy:
0.7500
Epoch 20/20
1/1 [==============================] - 0s 8ms/step - loss: 0.6097 - accuracy:
0.7500
<keras.callbacks.History at 0x7fc977ed8e90>
1/1 [==============================] - 0s 9ms/step - loss: 0.6064 - accuracy:
0.7500
Epoch 3/30
1/1 [==============================] - 0s 8ms/step - loss: 0.6048 - accuracy:
0.7500
Epoch 4/30
1/1 [==============================] - 0s 7ms/step - loss: 0.6032 - accuracy:
0.7500
Epoch 5/30
1/1 [==============================] - 0s 8ms/step - loss: 0.6021 - accuracy:
0.7500
Epoch 30/30
1/1 [==============================] - 0s 9ms/step - loss: 0.5828 - accuracy:
0.7500
<keras.callbacks.History at 0x7fc977ec0350>
Epoch 1/25
2/2 [==============================] - 0s 13ms/step - loss: 0.5453 - accuracy:
0.7500
Epoch 2/25
2/2 [==============================] - 0s 10ms/step - loss: 0.5434 - accuracy:
0.7500
Epoch 3/25
2/2 [==============================] - 0s 9ms/step - loss: 0.5460 - accuracy:
0.7500
Epoch 4/25
2/2 [==============================] - 0s 9ms/step - loss: 0.5414 - accuracy:
0.7500
Epoch 24/25
2/2 [==============================] - 0s 6ms/step - loss: 0.5237 - accuracy:
0.7500
Epoch 25/25
2/2 [==============================] - 0s 6ms/step - loss: 0.5360 - accuracy:
0.7500
<keras.callbacks.History at 0x7fc976c0f5d0>
Epoch 1/25
/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/adam.py:110:
UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
super(Adam, self).__init__(name, **kwargs)
1/1 [==============================] - 1s 572ms/step - loss: 1.3274 - accuracy:
0.5000
Epoch 2/25
1/1 [==============================] - 0s 9ms/step - loss: 1.0633 - accuracy:
0.5000
Epoch 3/25
1/1 [==============================] - 0s 11ms/step - loss: 0.9294 - accuracy:
0.5000
Epoch 4/25
1/1 [==============================] - 0s 10ms/step - loss: 0.8377 - accuracy:
0.500
Epoch 24/25
1/1 [==============================] - 0s 8ms/step - loss: 0.3705 - accuracy:
1.0000
Epoch 25/25
1/1 [==============================] - 0s 10ms/step - loss: 0.3600 - accuracy:
1.0000
<keras.callbacks.History at 0x7fc976af1a10>
RESULT:
Hence, hyperparameter tuning in an ANN model is performed successfully.
EX.NO: 6
DATE: 26/8/22
Implement LVQ Network for Pattern Classification
AIM:
To implement an LVQ network for pattern classification in a Python program.
ALGORITHM (OR) PROCEDURE:
import math
class LVQ:
    # Function here computes the winning vector
    # by Euclidean distance
    def winner(self, weights, sample):
        D0 = 0
        D1 = 0
        for i in range(len(sample)):
            D0 = D0 + math.pow((sample[i] - weights[0][i]), 2)
            D1 = D1 + math.pow((sample[i] - weights[1][i]), 2)
        if D0 > D1:
            return 0
        else:
            return 1

    # Function here updates the winning vector
    def update(self, weights, sample, J, alpha, actual):
        if actual == J:
            for i in range(len(weights)):
                weights[J][i] = weights[J][i] + alpha * (sample[i] - weights[J][i])
        else:
            for i in range(len(weights)):
                weights[J][i] = weights[J][i] - alpha * (sample[i] - weights[J][i])
# Driver code
def main():
    # Training Samples ( m, n ) with their class vector
    X = [[0, 0, 1, 1],
         [1, 0, 0, 0],
         [0, 0, 0, 1],
         [0, 1, 1, 0],
         [1, 1, 0, 0],
         [1, 1, 1, 0]]
    Y = [0, 1, 0, 1, 1, 1]
    m, n = len(X), len(X[0])
    # weight initialization ( n, c )
    weights = []
    weights.append(X.pop(0))
    weights.append(X.pop(0))
    # Samples used in weight initialization will
    # not be used in training
    m = m - 2
    Y.pop(0)
    Y.pop(0)
    # training
    ob = LVQ()
    epochs = 3
    alpha = 0.1
    for i in range(epochs):
        for j in range(m):
            # Sample selection
            T = X[j]
            # Compute winner
            J = ob.winner(weights, T)
            # Update weights
            ob.update(weights, T, J, alpha, Y[j])
    # classify new input sample
    T = [0, 0, 1, 0]
    J = ob.winner(weights, T)
    print("Sample T belongs to class : ", J)
    print("Trained weights : ", weights)

if __name__ == "__main__":
    main()
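A hedged usage sketch: the trained weights can classify further 4-bit patterns through winner(); the extra samples below are illustrative, and the lines would sit at the end of main(), where ob and weights exist.
# Hedged sketch: classify additional illustrative patterns
for T in [[1, 0, 1, 0], [0, 1, 0, 1]]:
    print(T, 'belongs to class', ob.winner(weights, T))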
OUTPUT:
Sample T belongs to class : 1
Trained weights : [[0.40951000000000004, 0.40951000000000004, 1, 1], [0.5782969,
0.321949, 0, 0]]
RESULT:
Therefore, the LVQ network for pattern classification is implemented successfully.
EX.NO: 7
DATE: 7/9/22
Work on a text classification problem with Keras API Dataset for Neural Network
AIM:
To write a Python program to work on a text classification problem with a Keras API
dataset for a neural network.
ALGORITHM (OR) PROCEDURE:
# Importing the Libraries
import keras
from tensorflow.keras.datasets import mnist
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten, Dropout, InputLayer
from sklearn.metrics import accuracy_score, confusion_matrix
# Keras datasets are
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Plot a single sample
import random
import matplotlib.pyplot as plt
plt.imshow(X_train[random.randint(0, 59999)], cmap=plt.cm.binary)
# We can get our training and testing data between 0 and 1 by dividing by the maximum of the data (normalisation)
X_train = X_train/255.0
X_test = X_test/255.0
len(X_train)
len(X_test)
X_train = X_train.reshape(60000, 28, 28, 1) #(batch_size, height, width, channels)
X_test = X_test.reshape(10000, 28, 28, 1)
y_train = tf.one_hot(y_train, depth = 10) # Depth = 10 , because the no. of classes = 10
y_test = tf.one_hot(y_test, depth = 10)
# Using the Keras Sequential API to build our model.
model = Sequential()
# Adding layers to our model
model.add(Conv2D(32, kernel_size = (3, 3), activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(64, kernel_size = 3, activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(64, kernel_size = 3, activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Flatten())
# model.add(Dropout(0.25))
model.add(Dense(10, activation = 'softmax'))
model.compile(optimizer = 'adam',  # optimizer = tf.keras.optimizers.Adam(lr = 0.001)
              loss = 'categorical_crossentropy',
              metrics = 'accuracy')
his = model.fit(X_train, y_train, epochs = 5, validation_split = 0.3)  # validation_split - to use part of the training data as validation data
# Evaluate the Model on test data
model.evaluate(X_test, y_test)
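accuracy_score and confusion_matrix are imported above but never used in the recorded run; a hedged sketch applying them to the trained model follows.
# Hedged sketch: sklearn metrics on argmax-decoded predictions
import numpy as np
pred = np.argmax(model.predict(X_test), axis=1)
true = np.argmax(y_test, axis=1)  # undo the one-hot encoding
print(accuracy_score(true, pred))
print(confusion_matrix(true, pred))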
OUTPUT:
<matplotlib.image.AxesImage at 0x7fa253ff5d90>
60000
10000
Epoch 1/5
1313/1313 [==============================] - 18s 5ms/step - loss: 0.2613 -
accuracy: 0.9216 - val_loss: 0.1059 - val_accuracy: 0.9689
Epoch 2/5
1313/1313 [==============================] - 6s 5ms/step - loss: 0.0871 -
accuracy: 0.9743 - val_loss: 0.0794 - val_accuracy: 0.9756
Epoch 3/5
1313/1313 [==============================] - 6s 5ms/step - loss: 0.0623 -
accuracy: 0.9807 - val_loss: 0.0682 - val_accuracy: 0.9796
Epoch 4/5
1313/1313 [==============================] - 6s 5ms/step - loss: 0.0500 -
accuracy: 0.9842 - val_loss: 0.0532 - val_accuracy: 0.9846
Epoch 5/5
1313/1313 [==============================] - 6s 5ms/step - loss: 0.0392 -
accuracy: 0.9870 - val_loss: 0.0836 - val_accuracy: 0.9746
313/313 [==============================] - 1s 3ms/step - loss: 0.0825 - accuracy:
0.9751
[0.08253466337919235, 0.9750999808311462]
RESULT:
Therefore, the text classification problem with a Keras API dataset for a neural network is done
successfully.
EX.NO: 8
DATE: 12/9/22
Implement Batch Normalization and gauge its performance
AIM:
To write a Python program to implement batch normalization and gauge its
performance.
ALGORITHM (OR) PROCEDURE:
# Importing the Libraries
import keras
from tensorflow.keras.datasets import fashion_mnist
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten, Dropout, InputLayer, BatchNormalization
# Keras datasets are
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
# We can get our training and testing data between 0 and 1 by dividing by the maximum of the data (normalisation)
X_train = X_train/255.0
X_test = X_test/255.0
X_train = X_train.reshape(60000, 28, 28, 1) #(batch_size, height, width, channels)
X_test = X_test.reshape(10000, 28, 28, 1)
y_train = tf.one_hot(y_train, depth = 10) # Depth = 10 , because the no. of classes = 10
y_test = tf.one_hot(y_test, depth = 10)
# Using the Keras Sequential API to build our model.
model = Sequential()
# Adding layers to our model
model.add(Conv2D(32, kernel_size = (3, 3), activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(BatchNormalization(axis = 1))
model.add(Conv2D(64, kernel_size = 3, activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(BatchNormalization(axis = 1))
model.add(Conv2D(64, kernel_size = 3, activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Flatten())
# model.add(Dropout(0.25))
model.add(Dense(10, activation = 'softmax'))
model.compile(optimizer = 'adam',  # optimizer = tf.keras.optimizers.Adam(lr = 0.001)
              loss = 'categorical_crossentropy',
              metrics = 'accuracy')
his = model.fit(X_train, y_train, epochs = 5, validation_split = 0.3)  # validation_split - to use part of the training data as validation data
# Evaluate the Model on test data
model.evaluate(X_test, y_test)
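To gauge performance across epochs, the History object returned by fit can be plotted; a minimal sketch assuming the his variable above.
# Hedged sketch: plot training vs validation accuracy per epoch
import matplotlib.pyplot as plt
plt.plot(his.history['accuracy'], label='train')
plt.plot(his.history['val_accuracy'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()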
OUTPUT:
Epoch 1/5
1313/1313 [==============================] - 54s 40ms/step - loss: 0.6351 -
accuracy: 0.7688 - val_loss: 0.4949 - val_accuracy: 0.8204
Epoch 2/5
1313/1313 [==============================] - 52s 39ms/step - loss: 0.4419 -
accuracy: 0.8395 - val_loss: 0.4244 - val_accuracy: 0.8433
Epoch 3/5
1313/1313 [==============================] - 53s 40ms/step - loss: 0.3837 -
accuracy: 0.8598 - val_loss: 0.3759 - val_accuracy: 0.8611
Epoch 4/5
1313/1313 [==============================] - 61s 46ms/step - loss: 0.3494 -
accuracy: 0.8736 - val_loss: 0.3769 - val_accuracy: 0.8596
Epoch 5/5
1313/1313 [==============================] - 54s 41ms/step - loss: 0.3175 -
accuracy: 0.8821 - val_loss: 0.3411 - val_accuracy: 0.8757
313/313 [==============================] - 4s 13ms/step - loss: 0.3648 - accuracy:
0.8686
[0.3648488223552704, 0.8686000108718872]
RESULT:
Therefore, batch normalization is implemented and its performance gauged successfully.
EX.NO: 9
DATE: 19/9/22
Using Keras, perform rate adaption schedule
AIM:
To write a Python program using Keras to perform a rate adaptation schedule.
ALGORITHM (OR) PROCEDURE:
# Importing the Libraries
import keras
from tensorflow.keras.datasets import fashion_mnist
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten, Dropout, InputLayer
from sklearn.metrics import accuracy_score, confusion_matrix
# Keras datasets are
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
# Plot a single sample
import random
import matplotlib.pyplot as plt
plt.imshow(X_train[random.randint(0, 59999)], cmap=plt.cm.binary)
# We can get our training and testing data between 0 and 1 by dividing by the maximum of the data (normalisation)
X_train = X_train/255.0
X_test = X_test/255.0
X_train = X_train.reshape(60000, 28, 28, 1) #(batch_size, height, width, channels)
X_test = X_test.reshape(10000, 28, 28, 1)
y_train = tf.one_hot(y_train, depth = 10) # Depth = 10 , because the no. of classes = 10
y_test = tf.one_hot(y_test, depth = 10)
# Using the Keras Sequential API to build our model.
model = Sequential()
# Adding layers to our model
model.add(Conv2D(32, kernel_size = (3, 3), activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(64, kernel_size = 3, activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Flatten())
# model.add(Dropout(0.25))
model.add(Dense(10, activation = 'softmax'))
epochs = 20
batch_size = 64
# Learning Rate Scheduler
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-3 * 10**(epoch/20))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=tf.keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,
epsilon=1e-08, decay=0.0),
metrics=['accuracy'])
history = model.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs=epochs,
batch_size=batch_size,
callbacks = [lr_scheduler])
epochs = 20
batch_size = 64
# fit CNN model using Adagrad optimizer
model1 = model  # note: model1 aliases the same network object, so training continues from the weights above
model1.compile(loss=keras.losses.categorical_crossentropy,
optimizer=tf.keras.optimizers.Adagrad(lr=0.01, epsilon=1e-08, decay=0.0),
metrics=['accuracy'])
history1 = model.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs=epochs,
batch_size=batch_size,
verbose=2)
# fit CNN model using Adadelta optimizer
model2 = model
model2.compile(loss=keras.losses.categorical_crossentropy,
optimizer=tf.keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-08, decay=0.0),
metrics=['accuracy'])
history2 = model2.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs=epochs,
batch_size=batch_size,
verbose=2)
# fit CNN model using RMSprop optimizer
model3 = model
model3.compile(loss=keras.losses.categorical_crossentropy,
optimizer=tf.keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08,
decay=0.0),
metrics=['accuracy'])
history3 = model3.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs=epochs,
batch_size=batch_size,
verbose=2)
# fit CNN model using Adam optimizer
model4 = model
model4.compile(loss=keras.losses.categorical_crossentropy,
optimizer=tf.keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,
epsilon=1e-08, decay=0.0),
metrics=['accuracy'])
history4 = model4.fit(X_train, y_train,
validation_data=(X_test, y_test),
epochs=epochs,
batch_size=batch_size,
verbose=2)
fig = plt.figure(figsize=(12,8))
plt.plot(range(epochs),history.history['val_accuracy'],label='Adam with Lr Scheduler')
plt.plot(range(epochs),history1.history['val_accuracy'],label='Adagrad')
plt.plot(range(epochs),history2.history['val_accuracy'],label='Adadelta')
plt.plot(range(epochs),history3.history['val_accuracy'],label='RMSprop')
plt.plot(range(epochs),history4.history['val_accuracy'],label='Adam')
plt.legend(loc=0)
plt.xlabel('epochs')
plt.xlim([0,epochs])
plt.ylabel('accuracy on validation set')
plt.grid(True)
plt.title("Comparing Model Accuracy")
plt.show()
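Besides the exponential ramp used by the scheduler above, a classic rate adaptation scheme is step decay, which drops the learning rate at fixed intervals; a hedged sketch with illustrative constants follows.
# Hedged sketch: step decay - halve the learning rate every 5 epochs
import math
def step_decay(epoch):
    initial_lr, drop, epochs_drop = 1e-3, 0.5, 5
    return initial_lr * math.pow(drop, math.floor(epoch / epochs_drop))
step_scheduler = tf.keras.callbacks.LearningRateScheduler(step_decay)
# pass callbacks=[step_scheduler] to model.fit to apply it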
OUTPUT:
<matplotlib.image.AxesImage at 0x7f0bd7290910>
Epoch 1/20
938/938 [==============================] - 5s 5ms/step - loss: 0.4929 - accuracy:
0.8215 - val_loss: 0.3938 - val_accuracy: 0.8578 - lr: 0.0010
Epoch 2/20
938/938 [==============================] - 5s 5ms/step - loss: 0.3504 - accuracy:
0.8740 - val_loss: 0.3532 - val_accuracy: 0.8765 - lr: 0.0011
Epoch 3/20
938/938 [==============================] - 4s 4ms/step - loss: 0.3079 - accuracy:
0.8890 - val_loss: 0.3228 - val_accuracy: 0.8853 - lr: 0.0013
Epoch 4/20
938/938 [==============================] - 4s 4ms/step - loss: 0.2838 - accuracy:
0.8960 - val_loss: 0.2961 - val_accuracy: 0.8934 - lr: 0.0014
Epoch 5/20
938/938 [==============================] - 4s 4ms/step - loss: 0.2622 - accuracy:
0.9045 - val_loss: 0.2900 - val_accuracy: 0.8932 - lr: 0.0016
Epoch 20/20
938/938 [==============================] - 4s 4ms/step - loss: 0.2632 - accuracy:
0.9048 - val_loss: 0.3826 - val_accuracy: 0.8719 - lr: 0.0089
Epoch 1/20
938/938 - 3s - loss: 0.1854 - accuracy: 0.9311 - val_loss: 0.3328 - val_accuracy: 0.8895 -
3s/epoch - 4ms/step
Epoch 2/20
938/938 - 3s - loss: 0.1669 - accuracy: 0.9381 - val_loss: 0.3327 - val_accuracy: 0.8956 -
3s/epoch - 3ms/step
Epoch 3/20
938/938 - 3s - loss: 0.1584 - accuracy: 0.9413 - val_loss: 0.3243 - val_accuracy: 0.8946 -
3s/epoch - 3ms/step
Epoch 19/20
938/938 - 3s - loss: 0.0183 - accuracy: 0.9952 - val_loss: 0.6999 - val_accuracy: 0.8966 -
3s/epoch - 3ms/step
Epoch 20/20
938/938 - 3s - loss: 0.0166 - accuracy: 0.9958 - val_loss: 0.7077 - val_accuracy: 0.8929 -
3s/epoch - 3ms/step
RESULT:
Hence, the program using Keras to perform a rate adaptation schedule is completed successfully.
EX.NO: 10
DATE: 3/10/22
Build a CNN model for Image Classification
AIM:
To write a Python program for building a CNN model for image classification.
ALGORITHM (OR) PROCEDURE:
# Importing the Libraries
import keras
from tensorflow.keras.datasets import fashion_mnist
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten, Dropout, InputLayer
from sklearn.metrics import accuracy_score, confusion_matrix
# Keras datasets are
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
# Plot a single sample
import random
import matplotlib.pyplot as plt
plt.imshow(X_train[random.randint(0, 59999)], cmap=plt.cm.binary)
# We can get our training and testing data between 0 and 1 by dividing by the maximum of the data (normalisation)
X_train = X_train/255.0
X_test = X_test/255.0
len(X_train)
len(X_test)
X_train = X_train.reshape(60000, 28, 28, 1) #(batch_size, height, width, channels)
X_test = X_test.reshape(10000, 28, 28, 1)
y_train = tf.one_hot(y_train, depth = 10)  # depth = 10, because the no. of classes = 10
y_test = tf.one_hot(y_test, depth = 10)
# Using the Keras Sequential API to build our model.
model = Sequential()
# Adding layers to our model
model.add(Conv2D(32, kernel_size = (3, 3), activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(64, kernel_size = 3, activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(64, kernel_size = 3, activation ='relu'))
model.add(MaxPool2D(2, 2))
model.add(Flatten())
# model.add(Dropout(0.25))
model.add(Dense(10, activation = 'softmax'))
model.compile(optimizer = 'adam',  # optimizer = tf.keras.optimizers.Adam(lr = 0.001)
              loss = 'categorical_crossentropy',
              metrics = 'accuracy')
his = model.fit(X_train, y_train, epochs = 5, validation_split = 0.3)  # validation_split - to use part of the training data as validation data
# Evaluate the Model on test data
model.evaluate(X_test, y_test)
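A hedged follow-up sketch: a prediction can be mapped back to a Fashion-MNIST class name (the label list below is the dataset's standard ordering).
# Hedged sketch: name the predicted class for the first test image
import numpy as np
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
probs = model.predict(X_test[:1])
print(class_names[int(np.argmax(probs))])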
OUTPUT:
<matplotlib.image.AxesImage at 0x7f0c5b31d7d0>
60000
10000
Epoch 1/5
1313/1313 [==============================] - 8s 6ms/step - loss: 0.5282 -
accuracy: 0.8104 - val_loss: 0.4134 - val_accuracy: 0.8538
Epoch 2/5
1313/1313 [==============================] - 6s 5ms/step - loss: 0.3572 -
accuracy: 0.8715 - val_loss: 0.3319 - val_accuracy: 0.8812
Epoch 3/5
1313/1313 [==============================] - 7s 6ms/step - loss: 0.3145 -
accuracy: 0.8870 - val_loss: 0.3253 - val_accuracy: 0.8821
Epoch 4/5
1313/1313 [==============================] - 7s 6ms/step - loss: 0.2843 -
accuracy: 0.8990 - val_loss: 0.2948 - val_accuracy: 0.8946
Epoch 5/5
1313/1313 [==============================] - 6s 5ms/step - loss: 0.2626 -
accuracy: 0.9041 - val_loss: 0.2854 - val_accuracy: 0.8979
313/313 [==============================] - 1s 3ms/step - loss: 0.2993 - accuracy:
0.8917
[0.29927781224250793, 0.891700029373169]
RESULT:
Therefore, a CNN model for image classification is built successfully.
EX.NO: 11
DATE: 12/10/22
Build a DL model for diabetes classification problem
AIM:
To build a deep learning model for diabetes classification problem.
ALGORITHM (OR) PROCEDURE:
# Import required libraries
import tensorflow as tf
from keras.layers import Dense
import pandas as pd
# Get dataset
!wget https://raw.githubusercontent.com/npradaschnor/Pima-Indians-Diabetes-Dataset/master/diabetes.csv
df = pd.read_csv('/content/diabetes.csv')
# Shuffle our Data
df = df.sample(frac = 1, random_state = 42)
df.head()
for i in df.columns:
    df[i] = df[i].apply(lambda row: float(row))
train_data = df.drop('Outcome', axis = 1)
labels = df['Outcome']
# Split data into train and test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(train_data, labels, test_size = 0.3,
random_state = 42)
# One hot encode the labels
y_train = tf.one_hot(y_train, depth = 2)
y_test = tf.one_hot(y_test, depth = 2)
# Standardize our data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Creating our Model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation="relu", input_shape=(8,)), # input shape required
tf.keras.layers.Dense(15, activation="relu"),
tf.keras.layers.Dense(2, activation = 'sigmoid')
])
# Compile
model.compile(loss = tf.keras.losses.BinaryCrossentropy(),
metrics = ['accuracy'],
optimizer = tf.keras.optimizers.SGD())
# Fit the model
model.fit(X_train, y_train, epochs = 200)
# Evaluate our model
model.evaluate(X_test, y_test)
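A hedged usage sketch: scoring one new patient record; the feature values below are illustrative and must follow the dataset's column order (Pregnancies through Age) and the same scaling as the training data.
# Hedged sketch: predict the outcome for one illustrative record
import numpy as np
sample = scaler.transform(np.array([[2, 120, 70, 20, 80, 25.0, 0.5, 30]]))
print(np.argmax(model.predict(sample)))  # 0 = non-diabetic, 1 = diabetic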
OUTPUT:
Epoch 1/200
17/17 [==============================] - 1s 3ms/step - loss: 0.7533 - accuracy:
0.3706
Epoch 2/200
17/17 [==============================] - 0s 6ms/step - loss: 0.7451 - accuracy:
0.3985
Epoch 3/200
17/17 [==============================] - 0s 6ms/step - loss: 0.7377 - accuracy:
0.4153
Epoch 199/200
17/17 [==============================] - 0s 6ms/step - loss: 0.4326 - accuracy:
0.7747
Epoch 200/200
17/17 [==============================] - 0s 6ms/step - loss: 0.4325 - accuracy:
0.7747
<keras.callbacks.History at 0x7fa278c7ff10>
8/8 [==============================] - 0s 3ms/step - loss: 0.4945 - accuracy:
0.7965
[0.49452218413352966, 0.7965368032455444]
RESULT:
Hence, a deep learning model for the diabetes classification problem is built successfully.
EX.NO: 12
DATE: 21/10/22
Design and build a Game environment
AIM:
To write a Python program to design and build a game environment.
ALGORITHM (OR) PROCEDURE:
!apt-get install -y xvfb python-opengl > /dev/null 2>&1
!pip install gym pyvirtualdisplay > /dev/null 2>&1
!pip install gym[classic_control]
import gym
import numpy as np
import matplotlib.pyplot as plt
from IPython import display as ipythondisplay
from pyvirtualdisplay import Display
display = Display(visible=0, size=(400, 300))
display.start()
env = gym.make("CartPole-v0")
env.reset()
prev_screen = env.render(mode='rgb_array')
plt.imshow(prev_screen)
for i in range(100):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    screen = env.render(mode='rgb_array')
    plt.imshow(screen)
    ipythondisplay.clear_output(wait=True)
    ipythondisplay.display(plt.gcf())
    if done:
        break
ipythondisplay.clear_output(wait=True)
env.close()
display.stop()
!mkdir shen
!ls
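Note, as a hedged aside: from gym 0.26 onward the API changed, so the loop above would need small adjustments; a sketch of the newer calls follows.
# Hedged sketch of the newer gym (>= 0.26) API
# env = gym.make("CartPole-v0", render_mode="rgb_array")
# obs, info = env.reset()
# obs, reward, terminated, truncated, info = env.step(action)
# done = terminated or truncated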
OUTPUT:
<pyvirtualdisplay.display.Display at 0x7fc198181a50>
<pyvirtualdisplay.display.Display at 0x7fc198181a50>
sample_data shen
RESULT:
Therefore, a gaming environment has been designed and built successfully.