Final .ipynb - Colab

The document outlines a Python script for training a neural network using TensorFlow and LightGBM on a dataset loaded from a CSV file. It includes steps for data preprocessing, handling missing values, splitting the dataset, and training the model while monitoring its performance. The script also installs necessary packages and saves the processed data for future use.


!pip install tensorflow


!pip install lightgbm

Collecting tensorflow
  Downloading tensorflow-2.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.1 kB)
  (dependency-resolution output trimmed: astunparse, flatbuffers, google-pasta, libclang, tensorboard,
  tensorflow-io-gcs-filesystem, wheel, tensorboard-data-server and werkzeug were collected;
  the remaining requirements were already satisfied)
Downloading tensorflow-2.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (644.9 MB)
Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Downloading flatbuffers-25.2.10-py2.py3-none-any.whl (30 kB)
Downloading google_pasta-0.2.0-py3-none-any.whl (57 kB)
Downloading libclang-18.1.1-py2.py3-none-manylinux2010_x86_64.whl (24.5 MB)
Downloading tensorboard-2.19.0-py3-none-any.whl (5.5 MB)

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
import lightgbm as lgb
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

df = pd.read_csv('all_data_with_features.csv') # Replace with your dataset path

# Check for missing values in the target variable 'label'
if df['label'].isnull().sum() > 0:
    print(f"Found {df['label'].isnull().sum()} NaN values in the target variable. Dropping rows with NaN values...")
    df = df.dropna(subset=['label'])
else:
    print("No NaN values found in the target variable.")


No NaN values found in the target variable.

X = df.drop(['hash', 'label'], axis=1) # Adjust if 'hash' is not present


y = df['label']

!pip install joblib==1.3.2

Collecting joblib==1.3.2
  Downloading joblib-1.3.2-py3-none-any.whl.metadata (5.4 kB)
Downloading joblib-1.3.2-py3-none-any.whl (302 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 302.2/302.2 kB 4.9 MB/s eta 0:00:00
Installing collected packages: joblib
  Attempting uninstall: joblib
    Found existing installation: joblib 1.5.0
    Uninstalling joblib-1.5.0:
      Successfully uninstalled joblib-1.5.0
Successfully installed joblib-1.3.2
 

import joblib

joblib.dump(X, 'X.pkl')

['X.pkl']

joblib.dump(y, 'y.pkl')

['y.pkl']
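The pickled objects can be restored in a later session with joblib.load; a minimal sketch using the filenames from the dump calls above:

# Minimal sketch: reload the saved feature matrix and labels in a fresh session.
import joblib

X = joblib.load('X.pkl')
y = joblib.load('y.pkl')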

# Check for NaN values in features and handle them
if X.isnull().sum().sum() > 0:
    print(f"Found {X.isnull().sum().sum()} NaN values in the features. Filling NaN values with the column mean...")
    X = X.fillna(X.mean())


# Split the dataset into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the data


scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
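Because the scaler is fit only on the training split, the same fitted object must be reused for any later inference. A small sketch that persists it (the 'scaler.pkl' filename is a hypothetical choice):

# Sketch: persist the fitted StandardScaler so future inputs get the identical transform.
import joblib

joblib.dump(scaler, 'scaler.pkl')      # hypothetical filename
# scaler = joblib.load('scaler.pkl')   # reload before transforming new samples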

nn_model = Sequential([
    Dense(128, input_shape=(X_train.shape[1],), activation='relu'),
    Dropout(0.3),
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')  # Output layer with 1 neuron for binary classification
])

/usr/local/lib/python3.11/dist-packages/keras/src/layers/core/dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` arg


super().__init__(activity_regularizer=activity_regularizer, **kwargs)
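The UserWarning above comes from Keras: in recent Keras versions (3.x is installed here) the preferred way to declare the input shape is an explicit Input layer rather than an input_shape argument on the first Dense layer. A minimal, warning-free sketch of the same architecture:

# Sketch: same architecture with an explicit Input layer (avoids the UserWarning).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

nn_model = Sequential([
    Input(shape=(X_train.shape[1],)),  # explicit input specification
    Dense(128, activation='relu'),
    Dropout(0.3),
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')
])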

 

# Compile the model


nn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the Neural Network model


nn_model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2, verbose=1)

# Make predictions with Neural Network


y_pred_nn = (nn_model.predict(X_test) > 0.5).astype(int)

Epoch 1/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 8s 310ms/step - accuracy: 0.7184 - loss: 1.1055 - val_accuracy: 0.8493 - val_loss: 0.6163
Epoch 2/20

19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 267ms/step - accuracy: 0.7957 - loss: 0.9176 - val_accuracy: 0.8973 - val_loss: 0.3289
Epoch 3/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 267ms/step - accuracy: 0.8509 - loss: 0.3849 - val_accuracy: 0.9247 - val_loss: 0.2274
Epoch 4/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 274ms/step - accuracy: 0.9130 - loss: 0.3331 - val_accuracy: 0.9315 - val_loss: 0.1698
Epoch 5/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 276ms/step - accuracy: 0.9041 - loss: 0.3258 - val_accuracy: 0.9452 - val_loss: 0.1449
Epoch 6/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 274ms/step - accuracy: 0.9226 - loss: 0.2282 - val_accuracy: 0.9247 - val_loss: 0.1609
Epoch 7/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 278ms/step - accuracy: 0.9209 - loss: 0.2500 - val_accuracy: 0.9589 - val_loss: 0.0931
Epoch 8/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 274ms/step - accuracy: 0.9494 - loss: 0.1884 - val_accuracy: 0.9178 - val_loss: 0.1382
Epoch 9/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 273ms/step - accuracy: 0.9577 - loss: 0.1065 - val_accuracy: 0.9315 - val_loss: 0.1176
Epoch 10/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 278ms/step - accuracy: 0.9595 - loss: 0.1754 - val_accuracy: 0.9521 - val_loss: 0.0784
Epoch 11/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 270ms/step - accuracy: 0.9667 - loss: 0.0879 - val_accuracy: 0.9521 - val_loss: 0.1038
Epoch 12/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 273ms/step - accuracy: 0.9605 - loss: 0.1766 - val_accuracy: 0.9247 - val_loss: 0.1650
Epoch 13/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 277ms/step - accuracy: 0.9705 - loss: 0.0856 - val_accuracy: 0.9315 - val_loss: 0.0867
Epoch 14/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 273ms/step - accuracy: 0.9718 - loss: 0.0712 - val_accuracy: 0.9589 - val_loss: 0.0987
Epoch 15/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 275ms/step - accuracy: 0.9790 - loss: 0.0496 - val_accuracy: 0.9521 - val_loss: 0.0787
Epoch 16/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 270ms/step - accuracy: 0.9832 - loss: 0.0477 - val_accuracy: 0.9521 - val_loss: 0.0658
Epoch 17/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 268ms/step - accuracy: 0.9770 - loss: 0.0980 - val_accuracy: 0.9452 - val_loss: 0.1127
Epoch 18/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 271ms/step - accuracy: 0.9749 - loss: 0.1046 - val_accuracy: 0.9589 - val_loss: 0.0912
Epoch 19/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 264ms/step - accuracy: 0.9821 - loss: 0.0879 - val_accuracy: 0.9452 - val_loss: 0.1025
Epoch 20/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 5s 265ms/step - accuracy: 0.9327 - loss: 0.1582 - val_accuracy: 0.9315 - val_loss: 0.1547
6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step

# Evaluate the Neural Network model


print("Neural Network Classification Report:")
print(classification_report(y_test, y_pred_nn))
nn_accuracy = accuracy_score(y_test, y_pred_nn)
print("Neural Network Accuracy Score:", nn_accuracy)

Neural Network Classification Report:
              precision    recall  f1-score   support

           0       0.95      0.98      0.97       125
           1       0.96      0.89      0.93        57

    accuracy                           0.96       182
   macro avg       0.96      0.94      0.95       182
weighted avg       0.96      0.96      0.96       182

Neural Network Accuracy Score: 0.9560439560439561

lgb_model = lgb.LGBMClassifier(random_state=42)

lgb_model.fit(X_train, y_train)


[LightGBM] [Info] Number of positive: 254, number of negative: 474


[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.198732 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 22809
[LightGBM] [Info] Number of data points in the train set: 728, number of used features: 7603
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.348901 -> initscore=-0.623873
[LightGBM] [Info] Start training from score -0.623873
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
(this warning repeats multiple times during training)

LGBMClassifier(random_state=42)
 

# Make predictions with LightGBM


y_pred_lgb = lgb_model.predict(X_test)
lgb_cm = confusion_matrix(y_test, y_pred_lgb)

/usr/local/lib/python3.11/dist-packages/sklearn/utils/validation.py:2739: UserWarning: X does not have valid feature names, but LGBM
warnings.warn(

 

# Evaluate the LightGBM model


print("LightGBM Classification Report:")
print(classification_report(y_test, y_pred_lgb))
lgb_accuracy = accuracy_score(y_test, y_pred_lgb)
print("LightGBM Accuracy Score:", lgb_accuracy)

LightGBM Classification Report:
              precision    recall  f1-score   support

           0       0.97      0.98      0.97       125
           1       0.95      0.93      0.94        57

    accuracy                           0.96       182
   macro avg       0.96      0.95      0.96       182
weighted avg       0.96      0.96      0.96       182

LightGBM Accuracy Score: 0.9615384615384616

intermediate_model = Sequential(nn_model.layers[:-1])
X_train_nn_features = intermediate_model.predict(X_train)
X_test_nn_features = intermediate_model.predict(X_test)

23/23 ━━━━━━━━━━━━━━━━━━━━ 1s 17ms/step


6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step
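Excluding the output layer means intermediate_model ends at the Dense(32) hidden layer, so each sample is mapped to a 32-dimensional learned representation that LightGBM is trained on next. A quick sanity-check sketch:

# Sketch: the extracted features are the 32-unit hidden-layer activations.
print(X_train_nn_features.shape)  # expected (n_train_samples, 32)
print(X_test_nn_features.shape)   # expected (n_test_samples, 32)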

lgb_model.fit(X_train_nn_features, y_train)


[LightGBM] [Info] Number of positive: 254, number of negative: 474


[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001205 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 2960
[LightGBM] [Info] Number of data points in the train set: 728, number of used features: 28
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.348901 -> initscore=-0.623873
[LightGBM] [Info] Start training from score -0.623873
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
(this warning repeats multiple times during training)

LGBMClassifier(random_state=42)
 

y_pred_hybrid = lgb_model.predict(X_test_nn_features)

/usr/local/lib/python3.11/dist-packages/sklearn/utils/validation.py:2739: UserWarning: X does not have valid feature names, but LGBM
warnings.warn(

 

print("Hybrid Model (Neural Network + LightGBM) Classification Report:")


print(classification_report(y_test, y_pred_hybrid))
hybrid_accuracy = accuracy_score(y_test, y_pred_hybrid)
print("Hybrid Model Accuracy Score:", hybrid_accuracy)

Hybrid Model (Neural Network + LightGBM) Classification Report:
              precision    recall  f1-score   support

           0       0.94      0.99      0.96       125
           1       0.98      0.86      0.92        57

    accuracy                           0.95       182
   macro avg       0.96      0.93      0.94       182
weighted avg       0.95      0.95      0.95       182

Hybrid Model Accuracy Score: 0.9505494505494505

!pip install xgboost

Collecting xgboost
Downloading xgboost-3.0.0-py3-none-manylinux_2_28_x86_64.whl.metadata (2.1 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.11/dist-packages (from xgboost) (2.0.2)
Collecting nvidia-nccl-cu12 (from xgboost)
Downloading nvidia_nccl_cu12-2.26.5-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.0 kB)
Requirement already satisfied: scipy in /usr/local/lib/python3.11/dist-packages (from xgboost) (1.15.2)
Downloading xgboost-3.0.0-py3-none-manylinux_2_28_x86_64.whl (253.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 253.9/253.9 MB 4.3 MB/s eta 0:00:00
Downloading nvidia_nccl_cu12-2.26.5-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (318.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 318.1/318.1 MB 3.4 MB/s eta 0:00:00
Installing collected packages: nvidia-nccl-cu12, xgboost
Successfully installed nvidia-nccl-cu12-2.26.5 xgboost-3.0.0

import xgboost as xgb


from sklearn.metrics import accuracy_score, classification_report

dtrain = xgb.DMatrix(X_train, label=y_train)


dtest = xgb.DMatrix(X_test, label=y_test)

params = {
    'objective': 'binary:logistic',  # For binary classification
    'eval_metric': 'logloss',
    'max_depth': 3,
    'eta': 0.1,
}

xgb_model = xgb.train(params, dtrain, num_boost_round=100) # Adjust num_boost_round as needed

y_pred_xgb = xgb_model.predict(dtest)
y_pred_xgb = (y_pred_xgb > 0.5).astype(int)

print("XGBoost Classification Report:")


print(classification_report(y_test, y_pred_xgb))
xgb_accuracy = accuracy_score(y_test, y_pred_xgb)
print("XGBoost Accuracy Score:", xgb_accuracy)

XGBoost Classification Report:
              precision    recall  f1-score   support

           0       0.97      0.97      0.97       125
           1       0.93      0.93      0.93        57

    accuracy                           0.96       182
   macro avg       0.95      0.95      0.95       182
weighted avg       0.96      0.96      0.96       182

XGBoost Accuracy Score: 0.9560439560439561
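For reference, the same setup can also be expressed with xgboost's scikit-learn wrapper; a rough sketch mirroring the params above (learning_rate corresponds to eta, n_estimators to num_boost_round):

# Sketch: sklearn-style equivalent of the native-API training above.
from xgboost import XGBClassifier

xgb_clf = XGBClassifier(
    objective='binary:logistic',
    eval_metric='logloss',
    max_depth=3,
    learning_rate=0.1,   # 'eta' in the native API
    n_estimators=100,    # num_boost_round in the native API
)
xgb_clf.fit(X_train, y_train)
y_pred_xgb_skl = xgb_clf.predict(X_test)  # predict() returns 0/1 labels directly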

dtrain_nn_features = xgb.DMatrix(X_train_nn_features, label=y_train)


dtest_nn_features = xgb.DMatrix(X_test_nn_features, label=y_test)

params = {
    'objective': 'binary:logistic',
    'eval_metric': 'logloss',
    'max_depth': 3,
    'eta': 0.1,
}

xgb_model_hybrid = xgb.train(params, dtrain_nn_features, num_boost_round=100)

y_pred_hybrid_xgb = xgb_model_hybrid.predict(dtest_nn_features)
y_pred_hybrid_xgb = (y_pred_hybrid_xgb > 0.5).astype(int)

print("Hybrid Model (Neural Network + XGBoost) Classification Report:")


print(classification_report(y_test, y_pred_hybrid_xgb))
hybrid_xgb_accuracy = accuracy_score(y_test, y_pred_hybrid_xgb)
print("Hybrid Model (Neural Network + XGBoost) Accuracy Score:", hybrid_xgb_accuracy)

Hybrid Model (Neural Network + XGBoost) Classification Report:
              precision    recall  f1-score   support

           0       0.94      0.99      0.96       125
           1       0.98      0.86      0.92        57

    accuracy                           0.95       182
   macro avg       0.96      0.93      0.94       182
weighted avg       0.95      0.95      0.95       182

Hybrid Model (Neural Network + XGBoost) Accuracy Score: 0.9505494505494505


!pip install scikit-learn

Requirement already satisfied: scikit-learn in /usr/local/lib/python3.11/dist-packages (1.6.1)


Requirement already satisfied: numpy>=1.19.5 in /usr/local/lib/python3.11/dist-packages (from scikit-learn) (2.0.2)
Requirement already satisfied: scipy>=1.6.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn) (1.15.2)
Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn) (1.3.2)
Requirement already satisfied: threadpoolctl>=3.1.0 in /usr/local/lib/python3.11/dist-packages (from scikit-learn) (3.6.0)


from sklearn.feature_selection import mutual_info_classif


from sklearn.feature_selection import SelectKBest
from sklearn.tree import DecisionTreeClassifier


# Calculate Information Gain


information_gain = mutual_info_classif(X, y)
ig_df = pd.DataFrame({'Feature': X.columns, 'Information Gain': information_gain})
ig_df = ig_df.sort_values(by=['Information Gain'], ascending=False)
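mutual_info_classif estimates the mutual information I(feature; label) = H(label) - H(label | feature), so higher scores indicate features that are more informative about the class. A quick sketch to inspect the top-ranked features before thresholding:

# Sketch: look at the ten highest-scoring features.
print(ig_df.head(10))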

# Feature Selection using Information Gain with Threshold


threshold_ig = 0.05 # Adjust the threshold value
selected_features_ig = ig_df[ig_df['Information Gain'] > threshold_ig]['Feature'].tolist()
X_new_ig = X[selected_features_ig]
print("Selected Features (Information Gain with Threshold):", selected_features_ig)

Selected Features (Information Gain with Threshold): ['Landroid/os/Parcel;->dataSize', 'Landroid/content/pm/PackageManager;->hasSyst
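SelectKBest is imported above but not used; as an alternative to the 0.05 threshold, a sketch that keeps a fixed number of top-scoring features instead (k=1000 is an arbitrary illustrative choice):

# Sketch: mutual-information selection with a fixed feature count.
from sklearn.feature_selection import SelectKBest, mutual_info_classif

selector = SelectKBest(score_func=mutual_info_classif, k=1000)  # k chosen for illustration
X_new_kbest = selector.fit_transform(X, y)
selected_features_kbest = X.columns[selector.get_support()].tolist()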

 

X_new_ig.head()

   Landroid/os/Parcel;->dataSize  Landroid/content/pm/PackageManager;->hasSystemFeature  Landroid/app/Activity;->startIntentSenderForResult  Landroid/app/PendingIntent;->getIntentSender  ...
0  1                              1                                                       1                                                    1                                             ...
1  0                              1                                                       0                                                    0                                             ...
2  0                              1                                                       0                                                    0                                             ...
3  0                              1                                                       0                                                    0                                             ...
4  1                              1                                                       1                                                    1                                             ...

5 rows × 3467 columns

 

X_train_ig, X_test_ig, y_train_ig, y_test_ig = train_test_split(X_new_ig, y, test_size=0.2, random_state=42)

# Standardize the data (for Neural Network)


scaler_ig = StandardScaler()
X_train_ig_scaled = scaler_ig.fit_transform(X_train_ig)
X_test_ig_scaled = scaler_ig.transform(X_test_ig)

# Create and train the LightGBM model


lgb_model_ig = lgb.LGBMClassifier(random_state=42)
lgb_model_ig.fit(X_train_ig, y_train_ig)

# Make predictions
y_pred_lgb_ig = lgb_model_ig.predict(X_test_ig)

# Evaluate the model


print("LightGBM (Information Gain) Classification Report:")
print(classification_report(y_test_ig, y_pred_lgb_ig))
lgb_accuracy_ig = accuracy_score(y_test_ig, y_pred_lgb_ig)
print("LightGBM (Information Gain) Accuracy Score:", lgb_accuracy_ig)

[LightGBM] [Info] Number of positive: 254, number of negative: 474


[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.037520 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6114
[LightGBM] [Info] Number of data points in the train set: 728, number of used features: 3057
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.348901 -> initscore=-0.623873
[LightGBM] [Info] Start training from score -0.623873
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
(this warning repeats multiple times during training)

LightGBM (Information Gain) Classification Report:
              precision    recall  f1-score   support

           0       0.97      0.97      0.97       125
           1       0.93      0.93      0.93        57

    accuracy                           0.96       182
   macro avg       0.95      0.95      0.95       182
weighted avg       0.96      0.96      0.96       182

LightGBM (Information Gain) Accuracy Score: 0.9560439560439561

# Create the Neural Network model


nn_model_ig = Sequential([
    Dense(128, input_shape=(X_train_ig_scaled.shape[1],), activation='relu'),
    Dropout(0.3),
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')  # Output layer with 1 neuron for binary classification
])

# Compile the model


nn_model_ig.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

/usr/local/lib/python3.11/dist-packages/keras/src/layers/core/dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` arg


super().__init__(activity_regularizer=activity_regularizer, **kwargs)

 

nn_model_ig.fit(X_train_ig_scaled, y_train_ig, epochs=20, batch_size=32, validation_split=0.2, verbose=1)

Epoch 1/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 2s 19ms/step - accuracy: 0.7383 - loss: 0.6294 - val_accuracy: 0.8699 - val_loss: 0.3306
Epoch 2/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.8268 - loss: 0.4471 - val_accuracy: 0.8767 - val_loss: 0.2882
Epoch 3/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.7889 - loss: 0.5238 - val_accuracy: 0.8973 - val_loss: 0.2250
Epoch 4/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.8483 - loss: 0.3748 - val_accuracy: 0.8973 - val_loss: 0.2203
Epoch 5/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.8792 - loss: 0.3299 - val_accuracy: 0.8904 - val_loss: 0.2271
Epoch 6/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9025 - loss: 0.2472 - val_accuracy: 0.9110 - val_loss: 0.1940
Epoch 7/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9224 - loss: 0.2177 - val_accuracy: 0.9521 - val_loss: 0.1581
Epoch 8/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9004 - loss: 0.2443 - val_accuracy: 0.9315 - val_loss: 0.1758
Epoch 9/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.8730 - loss: 0.2822 - val_accuracy: 0.9521 - val_loss: 0.1371
Epoch 10/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9355 - loss: 0.2004 - val_accuracy: 0.9178 - val_loss: 0.2013
Epoch 11/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.8975 - loss: 0.2798 - val_accuracy: 0.9247 - val_loss: 0.1470
Epoch 12/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 10ms/step - accuracy: 0.8933 - loss: 0.3025 - val_accuracy: 0.9247 - val_loss: 0.1734
Epoch 13/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9502 - loss: 0.1873 - val_accuracy: 0.9452 - val_loss: 0.1569
Epoch 14/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9448 - loss: 0.1471 - val_accuracy: 0.9521 - val_loss: 0.1739
Epoch 15/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9215 - loss: 0.1753 - val_accuracy: 0.9315 - val_loss: 0.1536
Epoch 16/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9360 - loss: 0.1641 - val_accuracy: 0.9315 - val_loss: 0.1249
Epoch 17/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9571 - loss: 0.1241 - val_accuracy: 0.9315 - val_loss: 0.1644
Epoch 18/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9326 - loss: 0.2006 - val_accuracy: 0.9658 - val_loss: 0.1243
Epoch 19/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9652 - loss: 0.1259 - val_accuracy: 0.9452 - val_loss: 0.1836
Epoch 20/20
19/19 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9498 - loss: 0.1110 - val_accuracy: 0.9521 - val_loss: 0.1081
<keras.src.callbacks.history.History at 0x7d307be7a910>

# Make predictions
y_pred_nn_ig = (nn_model_ig.predict(X_test_ig_scaled) > 0.5).astype(int)

# Evaluate the model


print("Neural Network (Information Gain) Classification Report:")
print(classification_report(y_test_ig, y_pred_nn_ig))
nn_accuracy_ig = accuracy_score(y_test_ig, y_pred_nn_ig)
print("Neural Network (Information Gain) Accuracy Score:", nn_accuracy_ig)


WARNING:tensorflow:5 out of the last 13 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distrib


6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 11ms/step
Neural Network (Information Gain) Classification Report:
              precision    recall  f1-score   support

           0       0.96      0.98      0.97       125
           1       0.95      0.91      0.93        57

    accuracy                           0.96       182
   macro avg       0.95      0.94      0.95       182
weighted avg       0.96      0.96      0.96       182

Neural Network (Information Gain) Accuracy Score: 0.9560439560439561

 

# Extract features from the intermediate layer of the Neural Network


intermediate_model_ig = Sequential(nn_model_ig.layers[:-1]) # Exclude the output layer
X_train_nn_features_ig = intermediate_model_ig.predict(X_train_ig_scaled)
X_test_nn_features_ig = intermediate_model_ig.predict(X_test_ig_scaled)

WARNING:tensorflow:5 out of the last 13 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distrib


23/23 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step
6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step

 

# Train LightGBM on the extracted features


lgb_model_hybrid_ig = lgb.LGBMClassifier(random_state=42)
lgb_model_hybrid_ig.fit(X_train_nn_features_ig, y_train_ig)

# Make predictions
y_pred_hybrid_ig = lgb_model_hybrid_ig.predict(X_test_nn_features_ig)

# Evaluate the hybrid model


print("Hybrid Model (Neural Network + LightGBM, Information Gain) Classification Report:")
print(classification_report(y_test_ig, y_pred_hybrid_ig))
hybrid_accuracy_ig = accuracy_score(y_test_ig, y_pred_hybrid_ig)
print("Hybrid Model (Neural Network + LightGBM, Information Gain) Accuracy Score:", hybrid_accuracy_ig)

[LightGBM] [Info] Number of positive: 254, number of negative: 474


[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001490 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 2860
[LightGBM] [Info] Number of data points in the train set: 728, number of used features: 27
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.348901 -> initscore=-0.623873
[LightGBM] [Info] Start training from score -0.623873
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
(this warning repeats multiple times during training)
Hybrid Model (Neural Network + LightGBM, Information Gain) Classification Report:
              precision    recall  f1-score   support

           0       0.97      0.99      0.98       125
           1       0.98      0.93      0.95        57

    accuracy                           0.97       182
   macro avg       0.98      0.96      0.97       182
weighted avg       0.97      0.97      0.97       182

Hybrid Model (Neural Network + LightGBM, Information Gain) Accuracy Score: 0.9725274725274725
/usr/local/lib/python3.11/dist-packages/sklearn/utils/validation.py:2739: UserWarning: X does not have valid feature names, but LGBM
warnings.warn(

 

from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

cm_lgb = confusion_matrix(y_test_ig, y_pred_lgb_ig)

# Display confusion matrix


disp_lgb = ConfusionMatrixDisplay(confusion_matrix=cm_lgb, display_labels=['Class 0', 'Class 1'])
disp_lgb.plot(cmap='Blues')

plt.title('LightGBM Confusion Matrix')
plt.show()

 

cm_nn = confusion_matrix(y_test_ig, y_pred_nn_ig)

# Display confusion matrix


disp_nn = ConfusionMatrixDisplay(confusion_matrix=cm_nn, display_labels=['Class 0', 'Class 1'])
disp_nn.plot(cmap='Blues')
plt.title('Neural Network Confusion Matrix')
plt.show()

 


# Assuming you have already trained and predicted using lgb_model_hybrid_ig


# Get confusion matrix for Hybrid model
cm_hybrid = confusion_matrix(y_test_ig, y_pred_hybrid_ig)

# Display confusion matrix


disp_hybrid = ConfusionMatrixDisplay(confusion_matrix=cm_hybrid, display_labels=['Class 0', 'Class 1'])
disp_hybrid.plot(cmap='Blues')
plt.title('Hybrid Model Confusion Matrix')
plt.show()


 


# Create subplots for all three models


fig, axes = plt.subplots(1, 3, figsize=(15, 5))

# Plot LightGBM confusion matrix


disp_lgb = ConfusionMatrixDisplay(confusion_matrix=cm_lgb, display_labels=['Class 0', 'Class 1'])
disp_lgb.plot(ax=axes[0], cmap='Blues')
axes[0].set_title('LightGBM')

# Plot Neural Network confusion matrix


disp_nn = ConfusionMatrixDisplay(confusion_matrix=cm_nn, display_labels=['Class 0', 'Class 1'])
disp_nn.plot(ax=axes[1], cmap='Blues')
axes[1].set_title('Neural Network')

# Plot Hybrid model confusion matrix


disp_hybrid = ConfusionMatrixDisplay(confusion_matrix=cm_hybrid, display_labels=['Class 0', 'Class 1'])
disp_hybrid.plot(ax=axes[2], cmap='Blues')
axes[2].set_title('Hybrid Model')

# Adjust layout and display


plt.tight_layout()
plt.show()

!pip install matplotlib

Requirement already satisfied: matplotlib in /usr/local/lib/python3.11/dist-packages (3.10.0)


Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (1.3.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (4.57.0)
Requirement already satisfied: kiwisolver>=1.3.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (1.4.8)
Requirement already satisfied: numpy>=1.23 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (2.0.2)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (25.0)
Requirement already satisfied: pillow>=8 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (11.2.1)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (3.2.3)

Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.11/dist-packages (from matplotlib) (2.9.0.post0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.11/dist-packages (from python-dateutil>=2.7->matplotlib) (1.17.0)

import matplotlib.pyplot as plt


import numpy as np

# Model names and accuracies


models = ['LightGBM', 'Neural Network', 'Hybrid']
accuracies = [lgb_accuracy_ig, nn_accuracy_ig, hybrid_accuracy_ig]

# Set up the bar positions


x = np.arange(len(models))
width = 0.3 # Width of the bars

# Create the bar plot


fig, ax = plt.subplots()
rects = ax.bar(x, accuracies, width, label='Accuracy')

# Add labels, title, and legend


ax.set_ylabel('Accuracy')
ax.set_title('Model Accuracies Comparison')
ax.set_xticks(x)
ax.set_xticklabels(models)
ax.legend()

# Add value labels to the bars


def autolabel(rects):
    for rect in rects:
        height = rect.get_height()
        ax.annotate('{}'.format(height),
                    xy=(rect.get_x() + rect.get_width() / 2, height),
                    xytext=(0, 3),  # 3 points vertical offset
                    textcoords="offset points",
                    ha='center', va='bottom')

autolabel(rects)

# Display the plot


plt.tight_layout()
plt.show()

