
Experiment – 5

Name: Muskan Soni UID: 22BCS16851
Branch: BE-CSE Section/Group: DL-902/B
Semester: 6th Date: 28-02-25
Subject: Deep Learning Subject Code: 22CSP-368

1. Aim: To implement a neural network using Python to perform classification or regression tasks.
2. Objective:
• Understand the structure and working of a neural network.
• Build and train a neural network to solve a given problem (e.g., classification
of digits or predicting values).

3. Procedure/Algorithm:

1. Define the Problem:
o Choose a dataset.
o Specify the input features and the target output.
2. Preprocess the Data:
o Normalize input features.
o Split the dataset into training and test sets.
3. Build the Neural Network:
o Define the architecture (input, hidden, and output layers).
o Specify activation functions and loss function.
4. Train the Model:
o Use forward propagation, backpropagation, and optimization (a minimal sketch follows this list).
o Set hyperparameters like learning rate, batch size, and epochs.
5. Evaluate the Model:
o Measure performance using metrics like accuracy or loss.
6. Interpret Results:
o Analyze predictions and model performance.
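
The Keras API used in Section 4 hides forward propagation and backpropagation behind model.fit. As a minimal illustrative sketch of what step 4 performs internally (the toy data, layer size, and learning rate here are arbitrary assumptions, not part of the graded implementation), gradient descent for a single linear layer can be written directly in NumPy:

import numpy as np

np.random.seed(0)
X = np.random.rand(8, 3)                      # 8 samples, 3 features
y = X @ np.array([1.0, 2.0, 3.0])             # toy linear target

W = np.random.randn(3, 1) * 0.1               # weights of one linear layer
b = np.zeros(1)                               # bias
lr = 0.1                                      # learning rate (arbitrary)

for epoch in range(200):
    y_hat = (X @ W + b)[:, 0]                 # forward propagation
    err = y_hat - y                           # prediction error
    loss = np.mean(err ** 2)                  # mean squared error
    grad_W = 2 * X.T @ err[:, None] / len(X)  # backpropagation (chain rule)
    grad_b = 2 * err.mean()
    W -= lr * grad_W                          # gradient-descent update
    b -= lr * grad_b

print('final MSE:', loss)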

4. Implementation:
# Regression on the California Housing dataset
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import fetch_california_housing

# Load the dataset and split into train/test sets
housing = fetch_california_housing()
X, y = housing.data, housing.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features (fit on the training set only, to avoid leakage)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Two hidden layers and a single linear output unit for regression
model = Sequential([
    Dense(64, activation='relu', input_shape=(X.shape[1],)),
    Dense(32, activation='relu'),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X_train, y_train, epochs=10, batch_size=2,
          validation_data=(X_test, y_test), verbose=1)

loss, mae = model.evaluate(X_test, y_test)
print(f'Test MAE: {mae}')
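
A quick, illustrative usage check (the sample slice below is an arbitrary choice) compares predictions with actual targets for a few test rows:

preds = model.predict(X_test[:5])
for p, actual in zip(preds[:, 0], y_test[:5]):
    print(f'predicted: {p:.2f}, actual: {actual:.2f}')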

# Regression on a synthetic linear dataset (y ≈ 5x + noise)
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split

np.random.seed(42)
X = np.random.rand(1000, 1)                    # one input feature
y = 5 * X[:, 0] + np.random.randn(1000) * 0.1  # linear target with noise
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A small network: one hidden layer suffices for a linear relationship
model = Sequential([
    Dense(16, activation='relu', input_shape=(1,)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X_train, y_train, epochs=50, batch_size=8, verbose=1)

loss, mae = model.evaluate(X_test, y_test)
print(f'Test MAE: {mae}')
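
Because the true relationship is y = 5x, a simple sanity check (illustrative, not part of the original script) is to predict at two points and examine the implied slope:

p0, p1 = model.predict(np.array([[0.2], [0.8]]))[:, 0]
print(f'implied slope: {(p1 - p0) / 0.6:.2f} (true slope is 5)')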

# Regression: predicting used-car prices from synthetic features
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

np.random.seed(42)
n_samples = 1000
mileage = np.random.randint(5000, 200000, n_samples)  # mileage in km
year = np.random.randint(2000, 2023, n_samples)       # year of manufacture
condition = np.random.randint(1, 5, n_samples)        # condition rating (1 to 5)
brand = np.random.randint(0, 10, n_samples)           # brand code
y = (50000 - (mileage * 0.05) + ((year - 2000) * 500)
     + (condition * 2000) + (brand * 1000)
     + np.random.randn(n_samples) * 1000)
X = np.column_stack((mileage, year, condition, brand))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = Sequential([
    Dense(32, activation='relu', input_shape=(X.shape[1],)),
    Dense(16, activation='relu'),
    Dense(1)  # output layer for regression
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(X_train, y_train, epochs=50, batch_size=10, verbose=1,
                    validation_data=(X_test, y_test))

loss, mae = model.evaluate(X_test, y_test)
print(f'Test MAE: {mae}')

# Plot training vs. validation MAE across epochs
plt.figure(figsize=(10, 5))
plt.plot(history.history['mae'], label='Train MAE')
plt.plot(history.history['val_mae'], label='Validation MAE')
plt.xlabel('Epochs')
plt.ylabel('Mean Absolute Error')
plt.title('Model Training Performance')
plt.legend()
plt.show()
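
To price a single new car (the feature values below are hypothetical), the input must pass through the same fitted scaler before prediction:

new_car = np.array([[60000, 2018, 4, 3]])  # mileage, year, condition, brand (hypothetical)
price = model.predict(scaler.transform(new_car))[0, 0]
print(f'predicted price: {price:.0f}')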

# Classification: predicting an insurance-cost category (3 classes)
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, LabelEncoder

np.random.seed(42)
n_samples = 1000
age = np.random.randint(18, 80, n_samples)         # age of patient
weight = np.random.randint(50, 120, n_samples)     # weight in kg
pre_existing = np.random.randint(0, 2, n_samples)  # 0: no, 1: yes
lifestyle = np.random.randint(1, 5, n_samples)     # lifestyle rating (1 to 4)

# Bin a synthetic cost into 3 categories: low (<10000), medium, high (>=20000)
cost = (5000 + (age * 50) + (weight * 10) + (pre_existing * 5000)
        + (lifestyle * 1000) + np.random.randn(n_samples) * 500)
y = np.digitize(cost, bins=[10000, 20000])
label_encoder = LabelEncoder()
y = label_encoder.fit_transform(y)  # ensure labels are 0, 1, 2

X = np.column_stack((age, weight, pre_existing, lifestyle))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = Sequential([
    Dense(32, activation='relu', input_shape=(X.shape[1],)),
    Dense(16, activation='relu'),
    Dense(3, activation='softmax')  # output layer for 3-class classification
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=50, batch_size=16, verbose=1,
                    validation_data=(X_test, y_test))

loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test Accuracy: {accuracy}')

# Plot training vs. validation accuracy across epochs
plt.figure(figsize=(10, 5))
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.title('Model Training Performance')
plt.legend()
plt.show()
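
Since the softmax layer outputs a probability distribution over the three categories, predicted classes are recovered with argmax (an illustrative usage check on a few test rows):

probs = model.predict(X_test[:5])
print('predicted classes:', np.argmax(probs, axis=1))
print('true classes:', y_test[:5])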
5. Output:
6. Learning outcomes:
• Training Performance: the model learns over epochs, with accuracy (classification) or MAE (regression) improving steadily.
• Test Performance: the model generalizes well to unseen test data, as measured by test accuracy and test MAE (e.g., ~98% accuracy is typical for MNIST digit classification; see the sketch below).
• Learning Trends: plots of accuracy and loss show convergence during training.
• Prediction: the trained networks make correct predictions, from regression targets (house and car prices) to class labels such as digits from the MNIST dataset.
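
The outcomes above mention MNIST digit classification, which the scripts in Section 4 do not cover. As a minimal hedged sketch of the same Keras workflow applied to MNIST (the layer sizes and epoch count are arbitrary choices, not a prescribed configuration):

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

# Load MNIST (bundled with Keras) and scale pixel values to [0, 1]
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

model = Sequential([
    Flatten(input_shape=(28, 28)),   # 28x28 image -> 784 features
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')  # one output class per digit
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=1)
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test Accuracy: {accuracy}')  # typically around 0.97-0.98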
