The document outlines the use of Python libraries such as pandas, numpy, and sklearn for data analysis and machine learning, specifically focusing on reading and cleaning datasets related to heart health and air quality. It demonstrates the process of loading CSV files, checking for missing values, and converting data types. The document also includes steps for data cleaning, such as dropping missing values and replacing commas with dots in numerical data.


In [1]: import pandas as pd  # Python library for working with data sets
import numpy as np  # Python library for working with arrays
import seaborn as sn  # library for making statistical graphics in Python
import random as rn
import matplotlib.pyplot as mat  # used to create 2D graphs and plots in Python
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.metrics import accuracy_score

In [2]: # GaussianNB is a classification technique used in
# Machine Learning (ML), based on a probabilistic approach and the Gaussian distribution

# MultinomialNB is a Naive Bayes classifier suitable for
# classification with discrete features (e.g., word counts for text classification)
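The distinction above can be shown with a tiny, hypothetical example (the data here is invented, not from heart.csv or AirQuality.csv): GaussianNB fits continuous measurements, MultinomialNB fits non-negative count features.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB

# Continuous measurements -> GaussianNB
X_cont = np.array([[1.0, 2.1], [1.2, 1.9], [8.0, 9.1], [7.8, 9.3]])
y = np.array([0, 0, 1, 1])
gnb = GaussianNB().fit(X_cont, y)

# Word-count style features -> MultinomialNB
X_counts = np.array([[3, 0, 1], [2, 0, 0], [0, 4, 2], [0, 3, 3]])
mnb = MultinomialNB().fit(X_counts, y)

print(gnb.predict([[1.1, 2.0]]))   # near the class-0 cluster -> class 0
print(mnb.predict([[0, 5, 1]]))    # counts resemble class-1 rows -> class 1
```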

In [3]: DataFrame1=pd.read_csv('heart.csv')  # Read a comma-separated values (csv) file into a DataFrame

DataFrame1

Out[3]:       age  sex  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak  slope  ca
0              52    1   0       125   212    0        1      168      0      1.0      2   2
1              53    1   0       140   203    1        0      155      1      3.1      0   0
2              70    1   0       145   174    0        1      125      1      2.6      0   0
3              61    1   0       148   203    0        1      161      0      0.0      2   1
4              62    0   0       138   294    1        1      106      0      1.9      1   3
...           ...  ...  ..       ...   ...  ...      ...      ...    ...      ...    ...  ..
1020           59    1   1       140   221    0        1      164      1      0.0      2   0
1021           60    1   0       125   258    0        0      141      1      2.8      1   1
1022           47    1   0       110   275    0        0      118      1      1.0      1   1
1023           50    0   0       110   254    0        0      159      0      0.0      2   0
1024           54    1   0       120   188    0        1      113      0      1.4      1   1

1025 rows × 14 columns

In [4]: DataFrame2=pd.read_csv('AirQuality.csv',sep=';')  # Read a semicolon-separated values file into a DataFrame

DataFrame2


Out[4]:         Date      Time  CO(GT)  PT08.S1(CO)  NMHC(GT)  C6H6(GT)  PT08.S2(NMHC)
0         10/03/2004  18.00.00     2,6       1360.0     150.0      11,9         1046.0
1         10/03/2004  19.00.00       2       1292.0     112.0       9,4          955.0
2         10/03/2004  20.00.00     2,2       1402.0      88.0       9,0          939.0
3         10/03/2004  21.00.00     2,2       1376.0      80.0       9,2          948.0
4         10/03/2004  22.00.00     1,6       1272.0      51.0       6,5          836.0
...              ...       ...     ...          ...       ...       ...            ...
9466             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9467             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9468             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9469             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9470             NaN       NaN     NaN          NaN       NaN       NaN            NaN

9471 rows × 17 columns

In [5]: DataFrame1.isna().sum().sum()  # Detect missing values element-wise;
# the first .sum() counts NaNs per column, the second totals them across columns

Out[5]: 0
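A minimal sketch of the two `.sum()` steps on a hypothetical mini-frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, np.nan, 3], 'b': [np.nan, np.nan, 6]})
per_column = df.isna().sum()        # Series: NaN count per column
total = df.isna().sum().sum()       # scalar: NaN count in the whole frame

print(per_column['a'], per_column['b'], total)  # 1 2 3
```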

In [6]: DataFrame1.dtypes

Out[6]: age          int64
sex          int64
cp           int64
trestbps     int64
chol         int64
fbs          int64
restecg      int64
thalach      int64
exang        int64
oldpeak    float64
slope        int64
ca           int64
thal         int64
target       int64
dtype: object

In [7]: DataFrame2.dtypes


Out[7]: Date              object
Time              object
CO(GT)            object
PT08.S1(CO)      float64
NMHC(GT)         float64
C6H6(GT)          object
PT08.S2(NMHC)    float64
NOx(GT)          float64
PT08.S3(NOx)     float64
NO2(GT)          float64
PT08.S4(NO2)     float64
PT08.S5(O3)      float64
T                 object
RH                object
AH                object
Unnamed: 15      float64
Unnamed: 16      float64
dtype: object

DATA CLEANING
In [8]: DataFrame3=DataFrame2.iloc[:,:15]  # iloc stands for "integer location":
# it selects rows and columns from a pandas DataFrame or Series by integer position;
# here it keeps the first 15 columns, dropping the empty 'Unnamed: 15' and 'Unnamed: 16' columns
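A small sketch of position-based selection with `iloc` (toy frame, not DataFrame2):

```python
import pandas as pd

df = pd.DataFrame({'w': [1, 2, 3], 'x': [4, 5, 6],
                   'y': [7, 8, 9], 'z': [10, 11, 12]})
first_three_cols = df.iloc[:, :3]   # all rows, columns 0..2 by position
first_two_rows = df.iloc[:2, :]     # rows 0..1, all columns

print(list(first_three_cols.columns))  # ['w', 'x', 'y']
print(first_two_rows.shape)            # (2, 4)
```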

In [9]: DataFrame3

Out[9]:         Date      Time  CO(GT)  PT08.S1(CO)  NMHC(GT)  C6H6(GT)  PT08.S2(NMHC)
0         10/03/2004  18.00.00     2,6       1360.0     150.0      11,9         1046.0
1         10/03/2004  19.00.00       2       1292.0     112.0       9,4          955.0
2         10/03/2004  20.00.00     2,2       1402.0      88.0       9,0          939.0
3         10/03/2004  21.00.00     2,2       1376.0      80.0       9,2          948.0
4         10/03/2004  22.00.00     1,6       1272.0      51.0       6,5          836.0
...              ...       ...     ...          ...       ...       ...            ...
9466             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9467             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9468             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9469             NaN       NaN     NaN          NaN       NaN       NaN            NaN
9470             NaN       NaN     NaN          NaN       NaN       NaN            NaN

9471 rows × 15 columns

In [10]: DataFrame3.isna().sum().sum()

Out[10]: 1710

In [11]: DataFrame4=DataFrame3.dropna()

In [12]: DataFrame4


Out[12]:        Date      Time  CO(GT)  PT08.S1(CO)  NMHC(GT)  C6H6(GT)  PT08.S2(NMHC)
0         10/03/2004  18.00.00     2,6       1360.0     150.0      11,9         1046.0
1         10/03/2004  19.00.00       2       1292.0     112.0       9,4          955.0
2         10/03/2004  20.00.00     2,2       1402.0      88.0       9,0          939.0
3         10/03/2004  21.00.00     2,2       1376.0      80.0       9,2          948.0
4         10/03/2004  22.00.00     1,6       1272.0      51.0       6,5          836.0
...              ...       ...     ...          ...       ...       ...            ...
9352      04/04/2005  10.00.00     3,1       1314.0    -200.0      13,5         1101.0
9353      04/04/2005  11.00.00     2,4       1163.0    -200.0      11,4         1027.0
9354      04/04/2005  12.00.00     2,4       1142.0    -200.0      12,4         1063.0
9355      04/04/2005  13.00.00     2,1       1003.0    -200.0       9,5          961.0
9356      04/04/2005  14.00.00     2,2       1071.0    -200.0      11,9         1047.0

9357 rows × 15 columns

In [13]: # Converting the 'Date' column from object dtype to datetime dtype.
# Note: these dates are day-first (10/03/2004 = 10 March 2004), so pd.to_datetime
# should be given dayfirst=True; without it, 10/03/2004 is parsed as 2004-10-03

DataFrame4['Date']=pd.to_datetime(DataFrame4['Date'])

C:\Users\rutur\AppData\Local\Temp/ipykernel_6896/3779120835.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  DataFrame4['Date']=pd.to_datetime(DataFrame4['Date'])
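A minimal sketch of both fixes, on toy day-first dates: an explicit `.copy()` removes the view/copy ambiguity behind the SettingWithCopyWarning, and `dayfirst=True` parses 10/03/2004 as 10 March rather than 3 October:

```python
import pandas as pd

df = pd.DataFrame({'Date': ['10/03/2004', '11/03/2004']})
sub = df.copy()                 # explicit copy: assignment targets a real frame,
                                # so no SettingWithCopyWarning is raised
sub['Date'] = pd.to_datetime(sub['Date'], dayfirst=True)

print(sub['Date'].iloc[0])      # 2004-03-10 (10 March, not 3 October)
```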

4 of 12 18/04/24, 14:50
2B about:srcdoc

In [14]: DataFrame4

Out[14]:        Date      Time  CO(GT)  PT08.S1(CO)  NMHC(GT)  C6H6(GT)  PT08.S2(NMHC)
0         2004-10-03  18.00.00     2,6       1360.0     150.0      11,9         1046.0
1         2004-10-03  19.00.00       2       1292.0     112.0       9,4          955.0
2         2004-10-03  20.00.00     2,2       1402.0      88.0       9,0          939.0
3         2004-10-03  21.00.00     2,2       1376.0      80.0       9,2          948.0
4         2004-10-03  22.00.00     1,6       1272.0      51.0       6,5          836.0
...              ...       ...     ...          ...       ...       ...            ...
9352      2005-04-04  10.00.00     3,1       1314.0    -200.0      13,5         1101.0
9353      2005-04-04  11.00.00     2,4       1163.0    -200.0      11,4         1027.0
9354      2005-04-04  12.00.00     2,4       1142.0    -200.0      12,4         1063.0
9355      2005-04-04  13.00.00     2,1       1003.0    -200.0       9,5          961.0
9356      2005-04-04  14.00.00     2,2       1071.0    -200.0      11,9         1047.0

9357 rows × 15 columns

In [15]: # Replacing the commas with dots in the numeric string columns

DataFrame4.replace(to_replace=',',value='.',regex=True,inplace=True)
DataFrame4

C:\Users\rutur\anaconda3\lib\site-packages\pandas\core\frame.py:5238: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  return super().replace(
Out[15]:        Date      Time  CO(GT)  PT08.S1(CO)  NMHC(GT)  C6H6(GT)  PT08.S2(NMHC)
0         2004-10-03  18.00.00     2.6       1360.0     150.0      11.9         1046.0
1         2004-10-03  19.00.00       2       1292.0     112.0       9.4          955.0
2         2004-10-03  20.00.00     2.2       1402.0      88.0       9.0          939.0
3         2004-10-03  21.00.00     2.2       1376.0      80.0       9.2          948.0
4         2004-10-03  22.00.00     1.6       1272.0      51.0       6.5          836.0
...              ...       ...     ...          ...       ...       ...            ...
9352      2005-04-04  10.00.00     3.1       1314.0    -200.0      13.5         1101.0
9353      2005-04-04  11.00.00     2.4       1163.0    -200.0      11.4         1027.0
9354      2005-04-04  12.00.00     2.4       1142.0    -200.0      12.4         1063.0
9355      2005-04-04  13.00.00     2.1       1003.0    -200.0       9.5          961.0
9356      2005-04-04  14.00.00     2.2       1071.0    -200.0      11.9         1047.0

9357 rows × 15 columns
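After the comma-to-dot replacement the affected columns are still strings; a sketch of converting one such column to floats with `pd.to_numeric` (hypothetical values):

```python
import pandas as pd

df = pd.DataFrame({'CO(GT)': ['2,6', '2', '1,6']})
df['CO(GT)'] = df['CO(GT)'].str.replace(',', '.', regex=False)
df['CO(GT)'] = pd.to_numeric(df['CO(GT)'])   # '2.6' -> 2.6 etc.

print(df['CO(GT)'].dtype)              # float64
print(round(df['CO(GT)'].sum(), 1))    # 6.2
```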


In [16]: DataFrame4.drop_duplicates(inplace=True)  # Drop duplicate rows
DataFrame4

C:\Users\rutur\anaconda3\lib\site-packages\pandas\util\_decorators.py:311: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  return func(*args, **kwargs)
Out[16]:        Date      Time  CO(GT)  PT08.S1(CO)  NMHC(GT)  C6H6(GT)  PT08.S2(NMHC)
0         2004-10-03  18.00.00     2.6       1360.0     150.0      11.9         1046.0
1         2004-10-03  19.00.00       2       1292.0     112.0       9.4          955.0
2         2004-10-03  20.00.00     2.2       1402.0      88.0       9.0          939.0
3         2004-10-03  21.00.00     2.2       1376.0      80.0       9.2          948.0
4         2004-10-03  22.00.00     1.6       1272.0      51.0       6.5          836.0
...              ...       ...     ...          ...       ...       ...            ...
9352      2005-04-04  10.00.00     3.1       1314.0    -200.0      13.5         1101.0
9353      2005-04-04  11.00.00     2.4       1163.0    -200.0      11.4         1027.0
9354      2005-04-04  12.00.00     2.4       1142.0    -200.0      12.4         1063.0
9355      2005-04-04  13.00.00     2.1       1003.0    -200.0       9.5          961.0
9356      2005-04-04  14.00.00     2.2       1071.0    -200.0      11.9         1047.0

9357 rows × 15 columns

In [17]: DataFrame1

Out[17]:      age  sex  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak  slope  ca
0              52    1   0       125   212    0        1      168      0      1.0      2   2
1              53    1   0       140   203    1        0      155      1      3.1      0   0
2              70    1   0       145   174    0        1      125      1      2.6      0   0
3              61    1   0       148   203    0        1      161      0      0.0      2   1
4              62    0   0       138   294    1        1      106      0      1.9      1   3
...           ...  ...  ..       ...   ...  ...      ...      ...    ...      ...    ...  ..
1020           59    1   1       140   221    0        1      164      1      0.0      2   0
1021           60    1   0       125   258    0        0      141      1      2.8      1   1
1022           47    1   0       110   275    0        0      118      1      1.0      1   1
1023           50    0   0       110   254    0        0      159      0      0.0      2   0
1024           54    1   0       120   188    0        1      113      0      1.4      1   1

1025 rows × 14 columns


In [18]: DataFrame4

Out[18]:        Date      Time  CO(GT)  PT08.S1(CO)  NMHC(GT)  C6H6(GT)  PT08.S2(NMHC)
0         2004-10-03  18.00.00     2.6       1360.0     150.0      11.9         1046.0
1         2004-10-03  19.00.00       2       1292.0     112.0       9.4          955.0
2         2004-10-03  20.00.00     2.2       1402.0      88.0       9.0          939.0
3         2004-10-03  21.00.00     2.2       1376.0      80.0       9.2          948.0
4         2004-10-03  22.00.00     1.6       1272.0      51.0       6.5          836.0
...              ...       ...     ...          ...       ...       ...            ...
9352      2005-04-04  10.00.00     3.1       1314.0    -200.0      13.5         1101.0
9353      2005-04-04  11.00.00     2.4       1163.0    -200.0      11.4         1027.0
9354      2005-04-04  12.00.00     2.4       1142.0    -200.0      12.4         1063.0
9355      2005-04-04  13.00.00     2.1       1003.0    -200.0       9.5          961.0
9356      2005-04-04  14.00.00     2.2       1071.0    -200.0      11.9         1047.0

9357 rows × 15 columns

DATA INTEGRATION
In [19]: DataSet1=DataFrame4[['Date','Time','T','RH','AH']].loc[0:50]
DataSet1.head()

Out[19]:        Date      Time     T    RH      AH
0         2004-10-03  18.00.00  13.6  48.9  0.7578
1         2004-10-03  19.00.00  13.3  47.7  0.7255
2         2004-10-03  20.00.00  11.9  54.0  0.7502
3         2004-10-03  21.00.00  11.0  60.0  0.7867
4         2004-10-03  22.00.00  11.2  59.6  0.7888

In [20]: DataSet2=DataFrame4[['Date','Time','T','RH','AH']].loc[51:100]
DataSet2.head()

Out[20]:        Date      Time     T    RH      AH
51        2004-12-03  21.00.00  12.1  53.3  0.7536
52        2004-12-03  22.00.00  11.0  59.1  0.7740
53        2004-12-03  23.00.00   9.7  64.6  0.7771
54        2004-03-13  00.00.00   9.5  64.1  0.7597
55        2004-03-13  01.00.00   9.1  63.9  0.7423

In [21]: DataSet3=DataFrame1[['age','sex','cp','ca','target']].loc[50:100]
DataSet3.head()

Out[21]:    age  sex  cp  ca  target
50           58    0   3   0       1
51           57    0   0   0       0
52           38    1   2   4       1
53           49    1   2   3       0
54           55    1   0   0       0

In [22]: Merged=pd.concat([DataSet1,DataSet2])
Merged

Out[22]:        Date      Time     T    RH      AH
0         2004-10-03  18.00.00  13.6  48.9  0.7578
1         2004-10-03  19.00.00  13.3  47.7  0.7255
2         2004-10-03  20.00.00  11.9  54.0  0.7502
3         2004-10-03  21.00.00  11.0  60.0  0.7867
4         2004-10-03  22.00.00  11.2  59.6  0.7888
...              ...       ...   ...   ...     ...
96        2004-03-14  18.00.00  19.7  36.7  0.8307
97        2004-03-14  19.00.00  18.4  41.7  0.8732
98        2004-03-14  20.00.00  17.6  46.1  0.9210
99        2004-03-14  21.00.00  16.7  49.6  0.9320
100       2004-03-14  22.00.00  16.3  51.0  0.9341

101 rows × 5 columns
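pd.concat stacks the slices vertically and, by default, keeps their original row labels (0–50 and 51–100 above); a toy sketch, including `ignore_index=True` for renumbering:

```python
import pandas as pd

a = pd.DataFrame({'T': [13.6, 13.3]}, index=[0, 1])
b = pd.DataFrame({'T': [12.1, 11.0]}, index=[51, 52])

merged = pd.concat([a, b])                         # keeps labels 0, 1, 51, 52
renumbered = pd.concat([a, b], ignore_index=True)  # relabels as 0, 1, 2, 3

print(list(merged.index))       # [0, 1, 51, 52]
print(list(renumbered.index))   # [0, 1, 2, 3]
```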

Data Transformation
In [23]: DataFrame1.loc[DataFrame1['sex']==1,'sex']='M' #Replacing 1 with M

In [24]: DataFrame1.loc[DataFrame1['sex']==0,'sex']='F' #Replacing 0 with F

8 of 12 18/04/24, 14:50
2B about:srcdoc

In [25]: DataFrame1.head()

Out[25]:  age sex  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak  slope  ca  thal
0          52   M   0       125   212    0        1      168      0      1.0      2   2
1          53   M   0       140   203    1        0      155      1      3.1      0   0
2          70   M   0       145   174    0        1      125      1      2.6      0   0
3          61   M   0       148   203    0        1      161      0      0.0      2   1
4          62   F   0       138   294    1        1      106      0      1.9      1   3

In [26]: from sklearn.preprocessing import LabelEncoder


labelencoder=LabelEncoder()
DataFrame1["sex"]=labelencoder.fit_transform(DataFrame1["sex"])
DataFrame1 #used to encode categorical variables into numerical labels

Out[26]:      age  sex  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak  slope  ca
0              52    1   0       125   212    0        1      168      0      1.0      2   2
1              53    1   0       140   203    1        0      155      1      3.1      0   0
2              70    1   0       145   174    0        1      125      1      2.6      0   0
3              61    1   0       148   203    0        1      161      0      0.0      2   1
4              62    0   0       138   294    1        1      106      0      1.9      1   3
...           ...  ...  ..       ...   ...  ...      ...      ...    ...      ...    ...  ..
1020           59    1   1       140   221    0        1      164      1      0.0      2   0
1021           60    1   0       125   258    0        0      141      1      2.8      1   1
1022           47    1   0       110   275    0        0      118      1      1.0      1   1
1023           50    0   0       110   254    0        0      159      0      0.0      2   0
1024           54    1   0       120   188    0        1      113      0      1.4      1   1

1025 rows × 14 columns
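A round-trip sketch of LabelEncoder on a toy 'sex' column; classes are sorted alphabetically, so 'F' maps to 0 and 'M' to 1, which matches the original 0/1 coding above:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
encoded = le.fit_transform(['M', 'M', 'F', 'M', 'F'])  # strings -> integer labels
decoded = le.inverse_transform(encoded)                # integer labels -> strings

print(list(le.classes_))   # ['F', 'M']
print(list(encoded))       # [1, 1, 0, 1, 0]
print(list(decoded))       # ['M', 'M', 'F', 'M', 'F']
```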

Error Correction
In [27]: DataFrame1[DataFrame1['ca']==4]


Out[27]:    age  sex  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak  slope  ca
52           38    1   2       138   175    0        1      173      0      0.0      2   4
83           38    1   2       138   175    0        1      173      0      0.0      2   4
128          52    1   2       138   223    0        1      169      0      0.0      2   4
208          38    1   2       138   175    0        1      173      0      0.0      2   4
242          38    1   2       138   175    0        1      173      0      0.0      2   4
290          52    1   2       138   223    0        1      169      0      0.0      2   4
340          38    1   2       138   175    0        1      173      0      0.0      2   4
348          43    1   0       132   247    1        0      143      1      0.1      1   4
417          52    1   2       138   223    0        1      169      0      0.0      2   4
428          43    1   0       132   247    1        0      143      1      0.1      1   4
465          38    1   2       138   175    0        1      173      0      0.0      2   4
521          58    1   1       125   220    0        1      144      0      0.4      1   4
597          38    1   2       138   175    0        1      173      0      0.0      2   4
743          58    1   1       125   220    0        1      144      0      0.4      1   4
749          58    1   1       125   220    0        1      144      0      0.4      1   4
831          58    1   1       125   220    0        1      144      0      0.4      1   4
970          38    1   2       138   175    0        1      173      0      0.0      2   4
993          43    1   0       132   247    1        0      143      1      0.1      1   4

In [28]: DataFrame1.loc[DataFrame1['ca']==4,'ca']=np.NaN  # locate the rows where 'ca' is 4 (an invalid code) and replace the value with NaN

In [29]: DataFrame1 = DataFrame1.fillna(DataFrame1.median())  # fill the NaNs with each column's median

In [30]: DataFrame1.isnull().sum()

Out[30]: age 0
sex 0
cp 0
trestbps 0
chol 0
fbs 0
restecg 0
thalach 0
exang 0
oldpeak 0
slope 0
ca 0
thal 0
target 0
dtype: int64
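The sentinel-replacement pattern above (mark invalid codes as NaN, then median-fill) in miniature, with toy values; the same idea applies to the -200 sentinels in the AirQuality data:

```python
import numpy as np
import pandas as pd

s = pd.Series([2, 4, 0, 1, 4, 0])
s = s.replace(4, np.nan)      # 4 is treated as an out-of-range code
s = s.fillna(s.median())      # median of the valid values [2, 0, 1, 0] is 0.5

print(list(s))   # [2.0, 0.5, 0.0, 1.0, 0.5, 0.0]
```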

In [31]: DataFrame1


Out[31]:      age  sex  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak  slope   ca
0              52    1   0       125   212    0        1      168      0      1.0      2  2.0
1              53    1   0       140   203    1        0      155      1      3.1      0  0.0
2              70    1   0       145   174    0        1      125      1      2.6      0  0.0
3              61    1   0       148   203    0        1      161      0      0.0      2  1.0
4              62    0   0       138   294    1        1      106      0      1.9      1  3.0
...           ...  ...  ..       ...   ...  ...      ...      ...    ...      ...    ...  ...
1020           59    1   1       140   221    0        1      164      1      0.0      2  0.0
1021           60    1   0       125   258    0        0      141      1      2.8      1  1.0
1022           47    1   0       110   275    0        0      118      1      1.0      1  1.0
1023           50    0   0       110   254    0        0      159      0      0.0      2  0.0
1024           54    1   0       120   188    0        1      113      0      1.4      1  1.0

1025 rows × 14 columns

Model Building
In [32]: X_train, X_test, y_train, y_test = train_test_split(DataFrame1.iloc[:,:-1], DataFrame1.iloc[:,-1], test_size=0.3)  # features = all columns but the last, target = last column; 70/30 split

In [33]: X_train.shape, X_test.shape,y_train.shape

Out[33]: ((717, 13), (308, 13), (717,))

In [34]: gnb = GaussianNB()

In [35]: gnb.fit(X_train, y_train)


#fitting a Gaussian Naive Bayes (GNB) model on the training data

Out[35]: GaussianNB()

In [36]: y_pred = gnb.predict(X_test)


y_pred


Out[36]: array([1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1,
1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0,
0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0,
1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1,
1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0,
1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0,
0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1,
1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0,
0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1,
1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0,
0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
dtype=int64)

In [37]: print('Model accuracy score: {0:0.4f}'.format(accuracy_score(y_test, y_pred)))

Model accuracy score: 0.8571
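The split/fit/score steps above, sketched end-to-end on synthetic data (heart.csv itself is not reproduced here; the 30% test fraction is assumed from the (717, 13)/(308, 13) shapes in Out[33]):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Two well-separated Gaussian clusters stand in for the real features
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)   # 70/30 split, as in the notebook

model = GaussianNB().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print('accuracy: {0:0.4f}'.format(acc))
```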

Data Visualization
In [39]: mat.figure(figsize=(20,15))  # create the figure first; calling mat.figure() after sn.heatmap() opens a second, empty figure
dataplot = sn.heatmap(DataFrame4.corr(), cmap="Blues", annot=True)
mat.show()  # a heatmap visualizes a 2D array or a correlation matrix
