Artificial Intelligence
Lab Manual (KCS751A)
         Session: 2022-23 (Odd Semester)
        Subject Name : AI Lab
        Subject Code : KCS751A
            Semester : 7th sem
        Faculty Name : Mrs. Sunita
        Submitted By : Abhi Goyal
              Roll no : 2000960100002
Department of Computer Science and Engineering
     Vishveshwarya Group of Institutions
   Affiliated to Dr. A.P.J. Abdul Kalam Technical University (AKTU), Lucknow (UP)
                     List of Experiments

S.No.   Program                                                                      Date   Teacher's Signature
01.     Write a python program to implement a simple Chat-bot.
02.     Implement Tic-Tac-Toe using the A* algorithm.
03.     Implement alpha-beta pruning graphically with a proper example and justify the pruning.
04.     Write a python program to implement the Water Jug Problem.
05.     Use Heuristic Search Techniques to implement Breadth First Search (best solution but not always optimal) and the A* algorithm (always gives the optimal solution).
06.     Use Heuristic Search Techniques to implement the Hill-Climbing Algorithm.
07.     Write a python program to implement a Simple Calculator.
08.     Write a python program to perform POS (Parts of Speech) tagging for a given sentence using NLTK.
09.     Implementation of image feature processing using OpenCV and OpenVINO.
10.     Write a program to implement the Naïve Bayes Algorithm.
                                     Program No: 1
                                 SIMPLE CHAT-BOT
1.1 OBJECTIVE: Write a python program to implement simple Chat-bot
1.2 PROGRAM
   import nltk
   from nltk.chat.util import Chat, reflections
The code below shows that the Chat utility is a class that provides the logic for building the
chatbot.
   print(Chat)
Output:
    <class 'nltk.chat.util.Chat'>
The other import above is reflections, a dictionary that maps a set of input text values to
corresponding output values. You can examine the dictionary with the code below. This
dictionary is optional, and you can create your own in the same format.
   reflections
Output:
   {'i am': 'you are',
   'i was': 'you were',
   'i': 'you',
   "i'm": 'you are',
    "i'd": 'you would',
    "i've": 'you have',
    "i'll": 'you will',
    'my': 'your',
    'you are': 'I am',
    'you were': 'I was',
    "you've": "I have",
    "you'll": 'I will',
    'your': 'my',
    'yours': 'mine',
    'you': 'me',
    'me': 'you'
    }
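Since the dictionary is optional, you can also pass your own mapping to Chat instead of the built-in one. The tiny sketch below (illustrative only; the entries are made up) shows the expected format.
# A minimal custom reflections dictionary (illustrative only);
# any {input phrase: output phrase} mapping in this format will work.
my_reflections = {
    "go": "gone",
    "hello": "hey there"
}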
 Building the Chatbot:
 The first step is to create the rules that will be used to train the chatbot. The lines of code
 below create a simple set of rules: the first element of each list is a regular expression
 matched against the user input, and the second element is a list of responses the bot may
 reply with. Several such lists are collected in the set_pairs object.
set_pairs = [
    [
        r"my name is (.*)",
        ["Hello %1, How are you doing today ?",]
    ],
    [
        r"hi|hey|hello",
        ["Hello", "Hey there",]
    ],
    [
        r"what is your name?",
        ["You can call me a chatbot ?",]
    ],
    [
        r"how are you ?",
        ["I am fine, thank you! How can i help you?",]
    ],
    [
        r"I am fine, thank you",
        ["great to hear that, how can i help you?",]
    ],
    [
        r"i'm (.*) doing good",
        ["That's great to hear", "How can i help you?:)",]
    ],
    [
        r"i am looking for online guides and courses to learn data science, can you suggest?",
        ["Pluralsight is a great option to learn data science. You can check their website",]
    ],
    [
        r"thanks for the suggestion. do they have great authors and instructors?",
        ["Yes, they have the world class best authors, that is their strength;)",]
    ],
    [
        r"(.*) thank you so much, that was helpful",
        ["I am happy to help", "No problem, you're welcome",]
    ]
]
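 In the response strings, the placeholder %1 is filled with the text captured by the first (.*)
 group of the matching pattern (with the reflections mapping applied to it), which is how the
 bot can echo back, for example, the name the user typed.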
 After creating the pairs of rules above, we define the chatbot using the code below. The
 code is simple and prints a message whenever the function is invoked.
def chatbot():
    print("Hi, I'm the chatbot you built")
chatbot()
 Output:
Hi, I'm the chatbot you built
 The next step is to instantiate the Chat class with the pairs and reflections defined above.
chat = Chat(set_pairs, reflections)
print(chat)
 Output:
<nltk.chat.util.Chat object at 0x7f49c76e3be0>
 You have created a simple rule-based chatbot, and the last step is to initiate the
 conversation. This is done using the code below where the converse() function triggers
 the conversation.
chat.converse()
if __name__ == "__main__":
    chatbot()
 Running the code above starts an interactive chat prompt in your notebook. You are now
 ready to interact with the chatbot: start by typing a simple greeting such as "hi", and the bot
 will answer with one of its greeting responses (for example "Hello" or "Hey there"). You can
 continue conversing with the chatbot and quit the conversation once you are done.
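 Output (illustrative session only; replies are picked at random from each pair's response list, so the exact wording may differ, and typing "quit" ends the conversation):
>hi
Hey there
>my name is Abhi
Hello abhi, How are you doing today ?
>quit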
                                Program No: 2
                      TIC-TAC-TOE USING A* ALGORITHM
 2.1 OBJECTIVE: Implement Tic-Tac-Toe using A* algorithm
 2.2 CODING:
# Tic-Tac-Toe Program using
# random number in Python

# importing all necessary libraries
import numpy as np
import random
from time import sleep

# Creates an empty board
def create_board():
    return(np.array([[0, 0, 0],
                     [0, 0, 0],
                     [0, 0, 0]]))

# Check for empty places on board
def possibilities(board):
    l = []
    for i in range(len(board)):
        for j in range(len(board)):
            if board[i][j] == 0:
                l.append((i, j))
    return(l)

# Select a random place for the player
def random_place(board, player):
    selection = possibilities(board)
    current_loc = random.choice(selection)
    board[current_loc] = player
    return(board)

# Checks whether the player has three
# of their marks in a horizontal row
def row_win(board, player):
    for x in range(len(board)):
        win = True
        for y in range(len(board)):
            if board[x, y] != player:
                win = False
                continue
        if win == True:
            return(win)
    return(win)

# Checks whether the player has three
# of their marks in a vertical row
def col_win(board, player):
    for x in range(len(board)):
        win = True
        for y in range(len(board)):
            if board[y][x] != player:
                win = False
                continue
        if win == True:
            return(win)
    return(win)

# Checks whether the player has three
# of their marks in a diagonal row
def diag_win(board, player):
    win = True
    y = 0
    for x in range(len(board)):
        if board[x, x] != player:
            win = False
    if win:
        return win
    win = True
    if win:
        for x in range(len(board)):
            y = len(board) - 1 - x
            if board[x, y] != player:
                win = False
    return win

# Evaluates whether there is
# a winner or a tie
def evaluate(board):
    winner = 0
    for player in [1, 2]:
        if (row_win(board, player) or
                col_win(board, player) or
                diag_win(board, player)):
            winner = player
    if np.all(board != 0) and winner == 0:
        winner = -1
    return winner

# Main function to start the game
def play_game():
    board, winner, counter = create_board(), 0, 1
    print(board)
    sleep(2)
    while winner == 0:
        for player in [1, 2]:
            board = random_place(board, player)
            print("Board after " + str(counter) + " move")
            print(board)
            sleep(2)
            counter += 1
            winner = evaluate(board)
            if winner != 0:
                break
    return(winner)

# Driver Code
print("Winner is: " + str(play_game()))
 2.3 OUTPUT:
[[0 0 0]
[0 0 0]
[0 0 0]]
Board after 1 move
[[0 0 0]
[0 0 0]
[1 0 0]]
Board after 2 move
[[0 0 0]
[0 2 0]
[1 0 0]]
Board after 3 move
[[0 1 0]
[0 2 0]
[1 0 0]]
Board after 4 move
[[0 1 0]
[2 2 0]
[1 0 0]]
Board after 5 move
[[1 1 0]
[2 2 0]
[1 0 0]]
Board after 6 move
[[1 1 0]
[2 2 0]
[1 2 0]]
Board after 7 move
[[1 1 0]
[2 2 0]
[1 2 1]]
Board after 8 move
[[1 1 0]
[2 2 2]
[1 2 1]]
Winner is: 2
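 As a quick sanity check of the helper functions above (an illustrative snippet, not part of the original listing), evaluate() should report player 1 as the winner on a board whose top row is filled with 1s:
# illustrative check: player 1 owns the whole top row, so evaluate() returns 1
import numpy as np
demo_board = np.array([[1, 1, 1],
                       [2, 2, 0],
                       [0, 0, 0]])
print(evaluate(demo_board))   # expected output: 1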
                                 Program No: 3
                             ALPHA-BETA PRUNING
 3.1 OBJECTIVE: Implement alpha-beta pruning graphically with a proper example and
 justify the pruning
 3.2 CODING:
# Python3 program to demonstrate
# working of Alpha-Beta Pruning

# Initial values of Alpha and Beta
MAX, MIN = 1000, -1000

# Returns optimal value for current player
# (Initially called for root and maximizer)
def minimax(depth, nodeIndex, maximizingPlayer, values, alpha, beta):

    # Terminating condition, i.e.
    # leaf node is reached
    if depth == 3:
        return values[nodeIndex]

    if maximizingPlayer:
        best = MIN
        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, nodeIndex * 2 + i,
                          False, values, alpha, beta)
            best = max(best, val)
            alpha = max(alpha, best)
            # Alpha Beta Pruning
            if beta <= alpha:
                break
        return best
    else:
        best = MAX
        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, nodeIndex * 2 + i,
                          True, values, alpha, beta)
            best = min(best, val)
            beta = min(beta, best)
            # Alpha Beta Pruning
            if beta <= alpha:
                break
        return best

# Driver Code
if __name__ == "__main__":
    values = [3, 5, 6, 9, 1, 2, 0, -1]
    print("The optimal value is :", minimax(0, 0, True, values, MIN, MAX))
 3.3 OUTPUT:
The optimal value is: 5
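 Justification of the pruning for the leaf values [3, 5, 6, 9, 1, 2, 0, -1]: the left MIN node first evaluates the MAX node over leaves 3 and 5, which returns 5, so its beta becomes 5; while evaluating its second child, the leaf 6 already raises that child's alpha to 6 >= beta, so the sibling leaf 9 is pruned. Back at the root, alpha becomes 5. In the right MIN node, the first child (the MAX over leaves 1 and 2) returns 2, so beta drops to 2 <= alpha = 5 and the entire second child (the MAX over leaves 0 and -1) is cut off. The root therefore returns max(5, 2) = 5 without ever examining the leaves 9, 0 and -1.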
                                Program No: 4
                            WATER JUG PROBLEM
 4.1 OBJECTIVE: Write a python program to implement Water Jug Problem
 4.2 CODING:
# Python3 implementation of program to count
# minimum number of steps required to measure
# d litres of water using jugs of m litres and n litres capacity.
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

''' fromCap -- Capacity of jug from which water is poured
    toCap   -- Capacity of jug to which water is poured
    d       -- Amount to be measured '''
def Pour(toJugCap, fromJugCap, d):

    # Initialize current amount of water
    # in source and destination jugs
    fromJug = fromJugCap
    toJug = 0

    # Initialize steps required
    step = 1
    while ((fromJug != d) and (toJug != d)):

        # Find the maximum amount that can be poured
        temp = min(fromJug, toJugCap - toJug)

        # Pour 'temp' litres from 'fromJug' to 'toJug'
        toJug = toJug + temp
        fromJug = fromJug - temp
        step = step + 1

        if ((fromJug == d) or (toJug == d)):
            break

        # If first jug becomes empty, fill it
        if fromJug == 0:
            fromJug = fromJugCap
            step = step + 1

        # If second jug becomes full, empty it
        if toJug == toJugCap:
            toJug = 0
            step = step + 1

    return step

# Returns count of minimum steps needed to measure d litres
def minSteps(n, m, d):
    if m > n:
        temp = m
        m = n
        n = temp

    if (d % gcd(n, m) != 0):
        return -1

    # Return the minimum of two cases:
    # a) Water of the n-litre jug is poured into the m-litre jug
    # b) Water of the m-litre jug is poured into the n-litre jug
    return min(Pour(n, m, d), Pour(m, n, d))

# Driver code
if __name__ == '__main__':
    n = 3
    m = 5
    d = 4
    print('Minimum number of steps required is', minSteps(n, m, d))
 4.3 OUTPUT:
Minimum number of steps required is 6
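 One optimal 6-step sequence for jugs of 5 and 3 litres (measuring 4 litres) is: fill the 5-litre jug; pour it into the 3-litre jug (leaving 2 litres); empty the 3-litre jug; pour the remaining 2 litres into it; fill the 5-litre jug again; and pour from it into the 3-litre jug until the latter is full, which leaves exactly 4 litres in the 5-litre jug.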
                                Program No: 5
                       BREADTH FIRST SEARCH AND A*
 5.1 OBJECTIVE: Use Heuristic Search Techniques to implement Breadth First Search
 (best solution but not always optimal) and the A* algorithm (always gives the optimal
 solution)
 5.2 CODING: BFS
# BFS implementation in python
import collections

# optional wrapper class that holds an adjacency dictionary
class graph:
    def __init__(self, adjacency=None):
        if adjacency is None:
            adjacency = {}
        self.adjacency = adjacency

def bfs(graph, startnode):
    # Track the visited and unvisited nodes using a queue
    seen, queue = set([startnode]), collections.deque([startnode])
    while queue:
        vertex = queue.popleft()
        print(vertex)
        for node in graph[vertex]:
            if node not in seen:    # checking if not visited
                seen.add(node)
                queue.append(node)

# The graph dictionary
adjacency = {
    "a": set(["b", "c"]),
    "b": set(["a", "d"]),
    "c": set(["a", "d"]),
    "d": set(["e"]),
    "e": set(["a"])
}

bfs(adjacency, "a")
    5.3 OUTPUT: BFS
a
b
c
d
e
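 Note: because the adjacency lists are Python sets, the order in which siblings (for example "b" and "c") are visited can vary between runs; only the start node "a" is guaranteed to be printed first.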
    5.4 PROGRAM: A*
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}               # store distance from starting node
    parents = {}         # parents contains an adjacency map of all nodes

    # distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is the root node, i.e. it has no parent nodes,
    # so start_node is set to its own parent node
    parents[start_node] = start_node

    while len(open_set) > 0:
        n = None
        # node with lowest f() is found
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v

        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in the first and last set are added to the first,
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                # for each node m, compare its distance from start, i.e. g(m),
                # to the distance from start through the n node
                else:
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
                        # if m is in the closed set, remove it and add it to open
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)

        if n == None:
            print('Path does not exist!')
            return None

        # if the current node is the stop_node then
        # we begin reconstructing the path from it to the start_node
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path

        # remove n from the open list, and add it to the closed list
        # because all of its neighbours were inspected
        open_set.remove(n)
        closed_set.add(n)

    print('Path does not exist!')
    return None

# define function to return the neighbours and their distances
# from the passed node
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

# for simplicity we'll consider the heuristic distances given,
# and this function returns the heuristic distance for all nodes
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 99,
        'D': 1,
        'E': 7,
        'G': 0,
    }
    return H_dist[n]

# Describe your graph here
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('C', 1), ('G', 9)],
    'C': None,
    'E': [('D', 6)],
    'D': [('G', 1)],
}

aStarAlgo('A', 'G')
 5.5 OUTPUT: A*
Path found: ['A', 'E', 'D', 'G']
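 Why this path is returned: from A the open set holds B (f = g + h = 2 + 6 = 8) and E (f = 3 + 7 = 10). B is expanded first, adding C (f = 3 + 99 = 102) and G (f = 11 + 0 = 11); E is expanded next, adding D (g = 9, f = 10); expanding D lowers G's cost to g = 10 via D, so when G is finally selected the reconstructed path is A -> E -> D -> G with total cost 10.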
                                     Program No: 6
                            HILL-CLIMBING ALGORITHM
 6.1    OBJECTIVE: Use Heuristic Search Techniques to Implement
 Hill-Climbing Algorithm
 6.2 PSEUDO CODE:
i <- generate an individual randomly
best_so_far <- i
while the stopping criterion has not been met:
    get i's bit string and convert it to the problem representation (int or float)
    increment or decrement one of the genes by the step size
    if the resulting individual has higher fitness:
        replace i with this individual and increase the step size
    else:
        decrease the step size
    if the step size reaches zero and both increments and decrements of the current gene have been tested:
        move on to the next gene
    if i is at a local optimum:
        if fitness(i) > fitness(best_so_far):
            best_so_far <- i
        i <- generate an individual randomly
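 The pseudocode above describes hill climbing over a bit-string (gene) encoding with an adaptive step size; the Python program below implements a simpler continuous variant of the same idea, repeatedly perturbing a real-valued point with a fixed step size and keeping the perturbation whenever it does not worsen the objective.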
 6.3 CODING:
# hill climbing search of a one-dimensional objective function
from numpy import asarray
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(x):
    return x[0]**2.0

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudorandom number generator
seed(5)
# define range for input
bounds = asarray([[-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.1
# perform the hill climbing search
best, score = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))
 6.4 OUTPUT:
>1 f([-2.74290923]) = 7.52355
>3 f([-2.65873147]) = 7.06885
>4 f([-2.52197291]) = 6.36035
>5 f([-2.46450214]) = 6.07377
>7 f([-2.44740961]) = 5.98981
>9 f([-2.28364676]) = 5.21504
>12 f([-2.19245939]) = 4.80688
>14 f([-2.01001538]) = 4.04016
>15 f([-1.86425287]) = 3.47544
>22 f([-1.79913002]) = 3.23687
>24 f([-1.57525573]) = 2.48143
>25 f([-1.55047719]) = 2.40398
>26 f([-1.51783757]) = 2.30383
>27 f([-1.49118756]) = 2.22364
>28 f([-1.45344116]) = 2.11249
>30 f([-1.33055275]) = 1.77037
>32 f([-1.17805016]) = 1.38780
>33 f([-1.15189314]) = 1.32686
>36 f([-1.03852644]) = 1.07854
>37 f([-0.99135322]) = 0.98278
>38 f([-0.79448984]) = 0.63121
>39 f([-0.69837955]) = 0.48773
>42 f([-0.69317313]) = 0.48049
>46 f([-0.61801423]) = 0.38194
>48 f([-0.48799625]) = 0.23814
>50 f([-0.22149135]) = 0.04906
>54 f([-0.20017144]) = 0.04007
>57 f([-0.15994446]) = 0.02558
>60 f([-0.15492485]) = 0.02400
>61 f([-0.03572481]) = 0.00128
>64 f([-0.03051261]) = 0.00093
>66 f([-0.0074283]) = 0.00006
>78 f([-0.00202357]) = 0.00000
>119 f([0.00128373]) = 0.00000
>120 f([-0.00040911]) = 0.00000
>314 f([-0.00017051]) = 0.00000
Done!
f([-0.00017051]) = 0.000000
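 The search steadily drives x toward 0, the global minimum of the objective f(x) = x**2; the run finishes at x ≈ -0.00017, where the objective value is essentially zero.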
                                  Program No: 7
                         SIMPLE CALCULATOR PROGRAM
 7.1     OBJECTIVE: Write a python program to implement Simple Calculator
 program
 7.2 CODING:
def add(num1, num2):
    return num1 + num2

def subtract(num1, num2):
    return num1 - num2

def multiply(num1, num2):
    return num1 * num2

def divide(num1, num2):
    return num1 / num2

print("Please select operation -\n" \
      "1. Add\n" \
      "2. Subtract\n" \
      "3. Multiply\n" \
      "4. Divide\n")

select = int(input("Select operations from 1, 2, 3, 4: "))

number_1 = int(input("Enter first number: "))
number_2 = int(input("Enter second number: "))

if select == 1:
    print(number_1, "+", number_2, "=", add(number_1, number_2))
elif select == 2:
    print(number_1, "-", number_2, "=", subtract(number_1, number_2))
elif select == 3:
    print(number_1, "*", number_2, "=", multiply(number_1, number_2))
elif select == 4:
    print(number_1, "/", number_2, "=", divide(number_1, number_2))
else:
    print("Invalid input")
 7.3 OUTPUT:
Please select operation -
1. Add
2. Subtract
3. Multiply
4. Divide

Select operations from 1, 2, 3, 4: 1
Enter first number: 15
Enter second number: 14
15 + 14 = 29
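 Note that divide() above raises a ZeroDivisionError when the second number is 0; a guarded variant (a small sketch, not part of the original program) could handle that case explicitly:
def divide(num1, num2):
    # return an error message instead of raising ZeroDivisionError
    if num2 == 0:
        return "Error: division by zero"
    return num1 / num2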
                                 Program No: 8
   POS (PARTS OF SPEECH) TAGGING FOR THE GIVEN SENTENCE USING NLTK
 8.1 OBJECTIVE: Write a python program to perform POS (Parts of Speech) tagging for
 the given sentence using NLTK.
 8.2 CODING:
# import the nltk library
import nltk
# note: the tokenizer and tagger models must be available; on first use they
# can be fetched with nltk.download('punkt') and
# nltk.download('averaged_perceptron_tagger')

# define a text
sentence = "The man was excited after he was informed about his promotion at work"

# tokenize the text
tokens = nltk.word_tokenize(sentence)

# perform POS tagging
nltk.pos_tag(tokens)
 8.3   OUTPUT:
[('The', 'DT'),
('man', 'NN'),
('was', 'VBD'),
('excited', 'VBN'),
('after', 'IN'),
('he', 'PRP'),
('was', 'VBD'),
('informed', 'VBN'),
('about', 'IN'),
('his', 'PRP$'),
('promotion', 'NN'),
('at', 'IN'),
('work', 'NN')]
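 In the Penn Treebank tag set used by NLTK, DT is a determiner, NN a singular noun, VBD a past-tense verb, VBN a past participle, IN a preposition or subordinating conjunction, PRP a personal pronoun, and PRP$ a possessive pronoun.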
                               Program No: 9
        IMAGE FEATURE PROCESSING USING OPENCV AND OPENVINO
 9.1 OBJECTIVE: Implementation of image feature processing using OpenCV and
 OpenVINO
 9.2 CODING:
# cv2 and numpy are required by the helpers below
import cv2
import numpy as np

def preprocessing(input_image, height, width):
    image = cv2.resize(input_image, (width, height))
    image = image.transpose((2, 0, 1))
    image = image.reshape(1, 3, height, width)
    return image
def pose_estimation(input_image):
    preprocessed_image = np.copy(input_image)
    preprocessed_image = preprocessing(preprocessed_image, 256, 456)
    return preprocessed_image
def text_detection(input_image):
    preprocessed_image = np.copy(input_image)
    preprocessed_image = preprocessing(preprocessed_image, 768, 1280)
    return preprocessed_image
def car_meta(input_image):
    preprocessed_image = np.copy(input_image)
    preprocessed_image = preprocessing(preprocessed_image, 72, 72)
    return preprocessed_image
POSE_IMAGE = cv2.imread("sitting-on-car.jpg")
TEXT_IMAGE = cv2.imread("sign.jpg")
CAR_IMAGE = cv2.imread("blue-car.jpg")
test_names = ["Pose Estimation", "Text Detection", "Car Meta"]
def set_solution_functions():
    global solution_funcs
    solution_funcs = {
        test_names[0]: pose_solution,
        test_names[1]: text_solution,
        test_names[2]: car_solution}
def pose_solution(input_image):
    return preprocessing(input_image, 256, 456)
def text_solution(input_image):
    return preprocessing(input_image, 768, 1280)
def car_solution(input_image):
    return preprocessing(input_image, 72, 72)
def test_pose():
    comparison = test(pose_estimation, test_names[0], POSE_IMAGE)
    return comparison
def test_text():
    comparison = test(text_detection, test_names[1], TEXT_IMAGE)
    return comparison
def test_car():
    comparison = test(car_meta, test_names[2], CAR_IMAGE)
    return comparison
def test(test_func, test_name, test_image):
    try:
         s_processed = test_func(test_image)
    except:
         print_exception(test_name)
         return
    solution = solution_funcs[test_name](test_image)
    comparison = np.array_equal(s_processed, solution)
    print_test_result(test_name, comparison)
    return comparison
def print_exception(test_name):
    print("Failed to run test on {}.".format(test_name))
    print("The code should be valid Python and return the preprocessed image.")
def print_test_result(test_name, result):
    if result:
        print("Passed {} test.".format(test_name))
    else:
        print("Failed {} test, did not obtain expected preprocessed image.".format(test_name))
def feedback(tests_passed):
    print("You passed {} of 3 tests.".format(int(tests_passed)))
    if tests_passed == 3:
        print("Congratulations!")
    else:
        print("See above for additional feedback.")
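 The listing defines the tests but not the code that runs them; a minimal driver (an assumption, not shown in the original listing) that would produce the output in 9.3 could look like this:
# run all three preprocessing tests and report the overall result (assumed driver)
set_solution_functions()
tests_passed = sum([test_pose(), test_text(), test_car()])
feedback(tests_passed)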
 9.3 OUTPUT:
Passed Pose Estimation test.
Passed Text Detection test.
Passed Car Meta test.
You passed 3 of 3 tests.
Congratulations!
                               Program No: 10
                    IMPLEMENT NAÏVE BAYES ALGORITHM
 10.1 OBJECTIVE: Write a program to implement Naïve Bayes Algorithm
 10.2 CODING:
# Gaussian Naive Bayes
from sklearn import datasets
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB

# load the iris dataset
dataset = datasets.load_iris()
# fit a Naive Bayes model to the data
model = GaussianNB()
model.fit(dataset.data, dataset.target)
print(model)
# make predictions
expected = dataset.target
predicted = model.predict(dataset.data)
 # summarize the fit of the model
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
10.3 OUTPUT:
GaussianNB()
              precision    recall  f1-score   support

           0       1.00      1.00      1.00        50
           1       0.94      0.94      0.94        50
           2       0.94      0.94      0.94        50

    accuracy                           0.96       150
   macro avg       0.96      0.96      0.96       150
weighted avg       0.96      0.96      0.96       150

[[50  0  0]
 [ 0 47  3]
 [ 0  3 47]]
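 Note that the report above evaluates the model on the same data it was trained on, which overstates real-world accuracy; a common refinement (a sketch using scikit-learn's train_test_split, not part of the original listing) is to hold out a test set:
# evaluate Gaussian Naive Bayes on a held-out split of the iris data (illustrative)
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_train, y_train)
print(metrics.classification_report(y_test, model.predict(X_test)))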