
Lab Record 10-15

The document describes creating a weather dataset using the WEKA data mining tool. It includes applying preprocessing techniques like normalization to the dataset and then visualizing the weather data. Programs are written and successfully executed to demonstrate each step, with the final program visualizing the full weather dataset.

Uploaded by

Animesh Singh
Copyright
© All Rights Reserved

Create a Weather Table with the help of Data Mining Tool WEKA

@relation weather
@attribute outlook {sunny,rainy,overcast}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true,false}
@attribute play {yes,no}
@data
sunny,85.0,85.0,false,no
overcast,80.0,90.0,true,no
sunny,83.0,86.0,false,yes
rainy,70.0,86.0,false,yes
rainy,68.0,80.0,false,yes
rainy,65.0,70.0,true,no
overcast,64.0,65.0,false,yes
sunny,72.0,95.0,true,no
sunny,69.0,70.0,false,yes
rainy,75.0,80.0,false,yes
Procedure:
Steps:

Result:
This program has been successfully executed
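The ARFF format used above is plain text and can also be read outside the WEKA GUI. As a minimal sketch (not WEKA's own loader, which handles quoting, sparse data, and comments), a simple Python parser for a file shaped like the weather table:

```python
# Minimal ARFF reader sketch: handles only the simple @attribute/@data
# layout of the weather table above (no quoting, escapes, or sparse data).

def parse_arff(text):
    attributes, data, in_data = [], [], False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('%'):   # skip blanks and comments
            continue
        low = line.lower()
        if low.startswith('@attribute'):
            attributes.append(line.split()[1])  # attribute name
        elif low.startswith('@data'):
            in_data = True
        elif in_data:
            data.append(line.split(','))
    return attributes, data

weather = """@relation weather
@attribute outlook {sunny,rainy,overcast}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true,false}
@attribute play {yes,no}
@data
sunny,85.0,85.0,false,no
overcast,80.0,90.0,true,no"""

attrs, rows = parse_arff(weather)
print(attrs)
print(rows[0])
```

Each instance comes back as a list of string values in attribute order, matching the columns WEKA shows in its Preprocess panel.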
Apply Pre-Processing techniques to the training data set of Weather Table

@relation weather
@attribute outlook {sunny,rainy,overcast}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true,false}
@attribute play {yes,no}
@data
sunny,85.0,85.0,false,no
overcast,80.0,90.0,true,no
sunny,83.0,86.0,false,yes
rainy,70.0,86.0,false,yes
rainy,68.0,80.0,false,yes
rainy,65.0,70.0,true,no
overcast,64.0,65.0,false,yes
sunny,72.0,95.0,true,no
sunny,69.0,70.0,false,yes
rainy,75.0,80.0,false,yes
Procedure:

Pre-processing:

Result:
This program has been successfully executed
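The normalization applied in this pre-processing step rescales each numeric attribute to the range [0, 1] using x' = (x − min) / (max − min), which is what WEKA's unsupervised `Normalize` filter does with default settings. A sketch of the same formula applied to the temperature column above:

```python
# Min-max normalization: x' = (x - min) / (max - min), the rescaling
# performed by weka.filters.unsupervised.attribute.Normalize (defaults).

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

temperature = [85.0, 80.0, 83.0, 70.0, 68.0, 65.0, 64.0, 72.0, 69.0, 75.0]
norm = normalize(temperature)
print([round(v, 3) for v in norm])
```

The maximum (85) maps to 1.0 and the minimum (64) to 0.0; every other value falls proportionally in between.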
Write a program to demonstrate Visualization for Weather.arff.

@relation weather
@attribute outlook {sunny,rainy,overcast}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true,false}
@attribute play {yes,no}
@data
sunny,85.0,85.0,false,no
overcast,80.0,90.0,true,no
sunny,83.0,86.0,false,yes
rainy,70.0,86.0,false,yes
rainy,68.0,80.0,false,yes
rainy,65.0,70.0,true,no
overcast,64.0,65.0,false,yes
sunny,72.0,95.0,true,no
sunny,69.0,70.0,false,yes
rainy,75.0,80.0,false,yes
Procedure:

Pre-processing:

Visualization of all Weather attributes:
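The per-attribute statistics behind WEKA's Preprocess/Visualize panels (value counts for nominal attributes; min, max, and mean for numeric ones) can be computed directly. A stdlib sketch over the weather instances listed above:

```python
from collections import Counter

# Per-attribute summary, similar to the statistics WEKA's Preprocess
# panel displays: value counts for nominal attributes, min/max/mean
# for numeric ones. Rows follow the weather @data section above.

rows = [
    ("sunny", 85.0, 85.0, "false", "no"),
    ("overcast", 80.0, 90.0, "true", "no"),
    ("sunny", 83.0, 86.0, "false", "yes"),
    ("rainy", 70.0, 86.0, "false", "yes"),
    ("rainy", 68.0, 80.0, "false", "yes"),
    ("rainy", 65.0, 70.0, "true", "no"),
    ("overcast", 64.0, 65.0, "false", "yes"),
    ("sunny", 72.0, 95.0, "true", "no"),
    ("sunny", 69.0, 70.0, "false", "yes"),
    ("rainy", 75.0, 80.0, "false", "yes"),
]

outlook = Counter(r[0] for r in rows)          # nominal: counts per value
temps = [r[1] for r in rows]                   # numeric: min/max/mean
print(dict(outlook))
print(min(temps), max(temps), sum(temps) / len(temps))
```

These are the same figures that drive the bar charts and scatter plots in WEKA's visualization screens.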


Write a program to demonstrate classification using the Naive Bayes classifier for Weather.arff.

@relation weather
@attribute outlook {sunny,rainy,overcast}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true,false}
@attribute play {yes,no}
@data
sunny,85.0,85.0,false,no
overcast,80.0,90.0,true,no
sunny,83.0,86.0,false,yes
rainy,70.0,86.0,false,yes
rainy,68.0,80.0,false,yes
rainy,65.0,70.0,true,no
overcast,64.0,65.0,false,yes
sunny,72.0,95.0,true,no
sunny,69.0,70.0,false,yes
rainy,75.0,80.0,false,yes
Procedure:
Steps:

Input Data:
1. Code Execution:

2. Click on Classify Button:


3. Select Bayes then choose NaiveBayes.

4. In Test options, choose Percentage split, set it to 66% or 70%, then
click the START button.

5. The classifier output is then obtained:

Results/ Output:
=== Summary ===
Correctly Classified Instances 3 60 %
Incorrectly Classified Instances 2 40 %
Kappa statistic 0
Mean absolute error 0.5129
Root mean squared error 0.5706
Relative absolute error 108.5002 %
Root relative squared error 116.1441 %
Total Number of Instances 5
=== Confusion Matrix ===
a b <-- classified as
3 0 | a = yes
2 0 | b = no
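The Naive Bayes run above can be sketched in plain Python. This is a deliberately simplified version using only the two nominal attributes (outlook, windy) with Laplace smoothing; WEKA's NaiveBayes additionally models the numeric attributes (temperature, humidity) with per-class Gaussians, so this sketch does not reproduce the output shown above:

```python
from collections import Counter, defaultdict

# Simplified Naive Bayes over nominal attributes only (outlook, windy),
# with Laplace smoothing. WEKA's NaiveBayes also fits per-class
# Gaussians to the numeric attributes; that part is omitted here.

train = [
    ("sunny", "false", "no"), ("overcast", "true", "no"),
    ("sunny", "false", "yes"), ("rainy", "false", "yes"),
    ("rainy", "false", "yes"), ("rainy", "true", "no"),
    ("overcast", "false", "yes"), ("sunny", "true", "no"),
    ("sunny", "false", "yes"), ("rainy", "false", "yes"),
]

def fit(rows):
    class_counts = Counter(r[-1] for r in rows)
    feat_counts = defaultdict(Counter)  # (attr index, class) -> value counts
    for r in rows:
        for i, v in enumerate(r[:-1]):
            feat_counts[(i, r[-1])][v] += 1
    return class_counts, feat_counts

def predict(model, x):
    class_counts, feat_counts = model
    n = sum(class_counts.values())
    best, best_p = None, -1.0
    for c, cc in class_counts.items():
        p = cc / n                      # prior P(class)
        for i, v in enumerate(x):
            # Laplace smoothing; 3 is used as a rough value count per attribute
            p *= (feat_counts[(i, c)][v] + 1) / (cc + 3)
        if p > best_p:
            best, best_p = c, p
    return best

model = fit(train)
print(predict(model, ("sunny", "true")))
```

The percentage split in WEKA holds out the remaining 34% or 30% of instances and evaluates predictions like this one against their true labels to produce the summary and confusion matrix above.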
Dataset:
@relation COVID-19

@attribute age numeric


@attribute sex {male, female}
@attribute fever numeric
@attribute cough {yes, no}
@attribute sore_throat {yes, no}
@attribute difficulty_breathing {yes, no}
@attribute covid_test_result {positive, negative}

@data
35, male, 99.5, yes, no, no, positive
42, female, 98.8, yes, yes, yes, positive
28, male, 98.0, no, yes, yes, negative
50, female, 99.2, yes, no, no, negative
Description:
- The dataset "Covid.arff" contains information related to COVID-19 cases, with attributes such as age, gender, symptoms, and whether the patient tested positive or negative for COVID-19.
- We aim to use the J48 algorithm to build a decision tree that can predict the likelihood of a person testing positive for COVID-19 based on these attributes.
Procedure :

Pre-processing:
Visualize all Attribute:

Result:
This program has been successfully executed.
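J48 (WEKA's implementation of C4.5) grows its decision tree by splitting on the attribute with the best information measure at each node. A sketch of the information-gain computation over the four COVID instances above, restricted to the nominal symptom attributes (C4.5 would also handle the numeric age and fever attributes via threshold splits, and uses gain ratio rather than raw gain):

```python
from collections import Counter
from math import log2

# Information gain of a nominal attribute over a class column -- the
# kind of split criterion J48/C4.5 evaluates at each tree node.
# Numeric attributes (age, fever) are skipped in this sketch.

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels):
    n = len(labels)
    remainder = 0.0
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Columns taken from the four @data rows above.
labels = ["positive", "positive", "negative", "negative"]
cough  = ["yes", "yes", "no", "yes"]
sore   = ["no", "yes", "yes", "no"]

print(round(info_gain(cough, labels), 3))
print(round(info_gain(sore, labels), 3))
```

Here cough carries some information about the class while sore_throat carries none, so a tree grower would prefer cough as a split at this node.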
Dataset:

@relation MarketBasketAnalysis

@attribute Item1 {0, 1}


@attribute Item2 {0, 1}
@attribute Item3 {0, 1}
@attribute Item4 {0, 1}
@attribute Item5 {0, 1}

@data
1,1,0,1,0
1,0,1,0,1
0,1,1,1,0
1,0,0,1,1
Description:
Association rule mining is a data mining technique used to find associations,
correlations, or patterns in large datasets. In this program, we use the Apriori
algorithm, a classic algorithm for this task, applied to the "Market-basket-
analysis.arff" dataset, which typically represents transactions in a retail
environment.
Procedure:

Pre-Processing and Visualization:


Associate Rule(Apply Apriori Algorithm):
Results/ Output:

=== Run information ===


Scheme: weka.associations.Apriori -N 10 -T 0 -C 0.9 -D 0.05 -U 1.0 -M 0.1 -S -1.0 -c -1
Relation: MarketBasketAnalysis
Instances: 35
Attributes: 5
Item1 Item2 Item3 Item4 Item5
=== Associator model (full training set) ===
Apriori
=======
Minimum support: 0.25 (9 instances)
Minimum metric <confidence>: 0.9
Number of cycles performed: 15
Generated sets of large itemsets:
Size of set of large itemsets L(1): 10
Size of set of large itemsets L(2): 24
Size of set of large itemsets L(3): 9

Best rules found:

1. Item1=0 16 ==> Item5=1 15 <conf:(0.94)> lift:(1.43) lev:(0.13) [4] conv:(2.74)
2. Item2=0 16 ==> Item3=1 15 <conf:(0.94)> lift:(1.37) lev:(0.12) [4] conv:(2.51)
3. Item3=1 Item4=1 13 ==> Item2=0 12 <conf:(0.92)> lift:(2.02) lev:(0.17) [6] conv:(3.53)
4. Item2=0 Item4=1 13 ==> Item3=1 12 <conf:(0.92)> lift:(1.35) lev:(0.09) [3] conv:(2.04)
5. Item5=0 12 ==> Item1=1 11 <conf:(0.92)> lift:(1.69) lev:(0.13) [4] conv:(2.74)
6. Item1=0 Item3=1 12 ==> Item5=1 11 <conf:(0.92)> lift:(1.39) lev:(0.09) [3] conv:(2.06)
7. Item3=0 11 ==> Item2=1 10 <conf:(0.91)> lift:(1.67) lev:(0.12) [4] conv:(2.51)
8. Item1=0 Item2=1 11 ==> Item5=1 10 <conf:(0.91)> lift:(1.38) lev:(0.08) [2] conv:(1.89)
9. Item1=1 Item2=0 11 ==> Item3=1 10 <conf:(0.91)> lift:(1.33) lev:(0.07) [2] conv:(1.73)
10. Item4=1 Item5=1 10 ==> Item1=0 9 <conf:(0.9)> lift:(1.97) lev:(0.13) [4] conv:(2.71)

Result:
This program has been successfully executed.
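The support and confidence figures WEKA reports can be reproduced by hand. A brute-force sketch over just the four transactions listed in the @data section above (the run output itself came from a larger 35-instance file, so its numbers differ from these):

```python
from itertools import combinations

# Brute-force frequent-itemset and rule mining in the spirit of Apriori:
# count support for every candidate itemset, then keep rules whose
# confidence clears the threshold. Fine for 5 items; real Apriori prunes
# candidates using the fact that subsets of frequent itemsets must
# themselves be frequent.

transactions = [            # the four @data rows, 1 = item present
    {"Item1", "Item2", "Item4"},
    {"Item1", "Item3", "Item5"},
    {"Item2", "Item3", "Item4"},
    {"Item1", "Item4", "Item5"},
]
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

min_sup, min_conf = 0.5, 0.9
frequent = [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if support(frozenset(c)) >= min_sup]

rules = []
for s in frequent:
    for r in range(1, len(s)):
        for lhs in combinations(s, r):
            lhs = frozenset(lhs)
            conf = support(s) / support(lhs)   # conf(LHS ==> RHS)
            if conf >= min_conf:
                rules.append((set(lhs), set(s - lhs), conf))

print(frequent)
print(rules)
```

With these four transactions only two rules reach 0.9 confidence (Item5 ==> Item1 and Item2 ==> Item4, both at confidence 1.0), which mirrors the structure of WEKA's "Best rules found" list on the larger file.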
