
Practice Questions for ANN & FS Exam

Unit 1: Neural Network Fundamentals


Question 1: A mobile robot uses a perceptron network to classify
terrain based on three sensor inputs: surface roughness (x₁), slope
angle (x₂), and material hardness (x₃). The robot needs to determine
if the terrain is traversable.
Initial weights are w₁ = 0.2, w₂ = -0.3, w₃ = 0.4, and bias b = 0.1. The
training set includes the following patterns:
 Pattern 1: [0.8, 0.2, 0.6] (Traversable: t = +1)
 Pattern 2: [0.3, 0.9, 0.2] (Not Traversable: t = -1)
 Pattern 3: [0.7, 0.1, 0.8] (Traversable: t = +1)
 Pattern 4: [0.2, 0.8, 0.3] (Not Traversable: t = -1)
Using the perceptron learning rule with learning rate α = 0.1:
1. Perform two complete epochs of training
2. Draw the decision boundary after training
3. Test the trained perceptron on a new input [0.6, 0.3, 0.7]
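The short Python sketch below (a minimal illustration, not a model answer) shows the mechanics of the perceptron learning rule used in Question 1. It assumes a bipolar step activation with net input ≥ 0 mapped to +1 and updates the weights only when a pattern is misclassified.

import numpy as np

# Perceptron learning rule for Question 1 (bipolar step activation assumed).
X = np.array([[0.8, 0.2, 0.6], [0.3, 0.9, 0.2], [0.7, 0.1, 0.8], [0.2, 0.8, 0.3]])
t = np.array([+1, -1, +1, -1])
w = np.array([0.2, -0.3, 0.4])
b, alpha = 0.1, 0.1

for epoch in range(2):
    for x, target in zip(X, t):
        y = 1 if (w @ x + b) >= 0 else -1   # bipolar step output
        if y != target:                     # update only on misclassification
            w += alpha * target * x
            b += alpha * target
    print(f"epoch {epoch + 1}: w = {w}, b = {b:.2f}")

# Part 3: test the trained perceptron on the new input
x_new = np.array([0.6, 0.3, 0.7])
print("prediction for [0.6, 0.3, 0.7]:", 1 if (w @ x_new + b) >= 0 else -1)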
Question 2: A warehouse robot uses a McCulloch-Pitts neural
network to determine whether to pick up boxes based on three
binary inputs:
 x₁: Box weight within capacity (1 = yes, 0 = no)
 x₂: Box dimensions within gripper range (1 = yes, 0 = no)
 x₃: Box material safe for gripper (1 = yes, 0 = no)
The robot should pick up a box only if it's within weight capacity AND
either within gripper range OR made of a safe material.

1. Design a McCulloch-Pitts neural network to implement this
logic
2. Determine the threshold value and weights
3. Verify your network with all possible input combinations
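One possible single-neuron McCulloch-Pitts solution for Question 2 can be verified exhaustively as sketched below; the weights (2, 1, 1) and threshold 3 are an assumed choice, and other weight/threshold combinations also realise x₁ AND (x₂ OR x₃).

from itertools import product

# One assumed McCulloch-Pitts neuron implementing x1 AND (x2 OR x3):
# weights (2, 1, 1), threshold 3. The neuron fires (output 1) when the
# weighted sum reaches the threshold.
w = (2, 1, 1)
theta = 3

for x1, x2, x3 in product([0, 1], repeat=3):
    net = w[0] * x1 + w[1] * x2 + w[2] * x3
    y = 1 if net >= theta else 0
    expected = int(x1 and (x2 or x3))   # target pick-up logic
    print((x1, x2, x3), "-> network:", y, " expected:", expected)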
Unit 2: Back Propagation
Question 3: A robotic arm controller uses a 3-layer neural network to
predict the required torque based on joint position and velocity. The
network has:
 Input layer: 2 neurons (position x₁, velocity x₂)
 Hidden layer: 3 neurons with sigmoid activation
 Output layer: 1 neuron (torque) with linear activation
Given the training sample [x₁ = 0.5, x₂ = -0.3] with target output t =
0.8 and initial weights:
 W₁ (input to hidden): [[0.2, 0.3], [0.1, 0.4], [-0.2, 0.5]]
 W₂ (hidden to output): [0.3, -0.1, 0.2]
 Biases for hidden layer: [0.1, 0.2, -0.1]
 Bias for output layer: 0.1
Using a learning rate η = 0.2:
1. Perform one complete iteration of backpropagation
2. Calculate the updated weights and biases
3. Compute the error before and after weight update
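A hedged sketch of one backpropagation iteration for the 2-3-1 network in Question 3 follows; it assumes the squared-error loss E = 0.5(t − y)² and plain gradient descent, so signs and constant factors may differ slightly from the course's derivation.

import numpy as np

# One backpropagation step for Question 3: 2 inputs, 3 sigmoid hidden
# neurons, 1 linear output neuron, squared-error loss assumed.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -0.3])
t, eta = 0.8, 0.2
W1 = np.array([[0.2, 0.3], [0.1, 0.4], [-0.2, 0.5]])
b1 = np.array([0.1, 0.2, -0.1])
W2 = np.array([0.3, -0.1, 0.2])
b2 = 0.1

# Forward pass
h = sigmoid(W1 @ x + b1)
y = W2 @ h + b2
print("output before update:", y, " error:", 0.5 * (t - y) ** 2)

# Backward pass
delta_out = y - t                       # dE/dy for the linear output
delta_h = delta_out * W2 * h * (1 - h)  # backpropagated through the sigmoids

# Gradient-descent updates
W2 -= eta * delta_out * h
b2 -= eta * delta_out
W1 -= eta * np.outer(delta_h, x)
b1 -= eta * delta_h

h_new = sigmoid(W1 @ x + b1)
y_new = W2 @ h_new + b2
print("output after update:", y_new, " error:", 0.5 * (t - y_new) ** 2)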
Question 4: A Hopfield network is used in a robot's vision system to
recognize and correct distorted patterns of objects on a conveyor
belt. The system stores the following three patterns:
 Pattern A: [1, 1, -1, -1, 1] (Triangle)

 Pattern B: [1, -1, 1, -1, 1] (Square)
 Pattern C: [-1, 1, 1, 1, -1] (Circle)
1. Calculate the weight matrix for the Hopfield network
2. Test if the patterns are stable
3. If the input pattern [1, ?, 1, -1, ?] is presented (where ?
represents damaged sensor readings), determine which stored
pattern it will converge to using asynchronous updating
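The sketch below illustrates the Hebbian weight matrix and an asynchronous recall loop for Question 4. The damaged '?' entries are assumed to start at +1; starting them at -1 (or at random) is equally acceptable and may change the recall trajectory.

import numpy as np

# Hopfield network for Question 4: Hebbian storage without self-connections
# and a few asynchronous update sweeps on the damaged input pattern.
patterns = np.array([
    [1, 1, -1, -1, 1],    # Pattern A: Triangle
    [1, -1, 1, -1, 1],    # Pattern B: Square
    [-1, 1, 1, 1, -1],    # Pattern C: Circle
])

W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                        # no self-connections
print("weight matrix:\n", W)

s = np.array([1, 1, 1, -1, 1], dtype=float)   # damaged input, '?' assumed +1
for sweep in range(3):                        # asynchronous updates, fixed order
    for i in range(len(s)):
        net = W[i] @ s
        if net != 0:
            s[i] = 1 if net > 0 else -1       # keep current state when net == 0
print("recalled pattern:", s)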
Unit 3: Fuzzy Set and Fuzzy Logic
Question 5: A robot navigation system uses fuzzy logic to determine
obstacle avoidance behavior based on two inputs:
 Distance to obstacle (D): {Very Near, Near, Medium, Far, Very
Far}
 Relative velocity of obstacle (V): {Approaching Fast,
Approaching Slow, Static, Moving Away Slow, Moving Away
Fast}
The membership functions are defined as follows:
 For Distance (D) in meters:
o Very Near: triangle (0, 0, 2)
o Near: triangle (0, 2, 4)
o Medium: triangle (2, 4, 6)
o Far: triangle (4, 6, 8)
o Very Far: triangle (6, 8, 10)
 For Velocity (V) in m/s:
o Approaching Fast: triangle (-5, -5, -2)
o Approaching Slow: triangle (-4, -2, 0)

o Static: triangle (-1, 0, 1)
o Moving Away Slow: triangle (0, 2, 4)
o Moving Away Fast: triangle (2, 5, 5)
For sensor readings of D = 3.5m and V = -1.5m/s:
1. Calculate the membership values for all linguistic variables
2. If fuzzy set A = "Near" and B = "Approaching Slow", compute A ∪ B, A ∩ B, and the complement of A
3. Calculate the Cartesian product A × B


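A small helper for triangular membership functions, sketched below, is enough to compute all the membership values asked for in part 1; the handling of the triangle feet and shoulders is one reasonable convention.

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 1.0 if (x == a == b) or (x == c == b) else 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

D, V = 3.5, -1.5
distance_sets = {"Very Near": (0, 0, 2), "Near": (0, 2, 4), "Medium": (2, 4, 6),
                 "Far": (4, 6, 8), "Very Far": (6, 8, 10)}
velocity_sets = {"Approaching Fast": (-5, -5, -2), "Approaching Slow": (-4, -2, 0),
                 "Static": (-1, 0, 1), "Moving Away Slow": (0, 2, 4),
                 "Moving Away Fast": (2, 5, 5)}

print({k: round(tri(D, *p), 3) for k, p in distance_sets.items()})
print({k: round(tri(V, *p), 3) for k, p in velocity_sets.items()})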
Question 6: A robot's gripper system uses fuzzy sets to determine
grip force based on object fragility and weight. The universe of
discourse for:
 Fragility (F): [0, 10] where 0 is extremely fragile and 10 is
extremely robust
 Weight (W): [0, 5] kg
The fuzzy sets are defined as:
 Fragility:
o Very Fragile: μVery Fragile(x) = {1 if x ≤ 2, (4-x)/2 if 2 < x < 4, 0 if x ≥ 4}
o Moderately Fragile: μModerately Fragile(x) = {0 if x ≤ 2, (x-2)/3 if 2 < x < 5, (8-x)/3 if 5 ≤ x < 8, 0 if x ≥ 8}
o Robust: μRobust(x) = {0 if x ≤ 6, (x-6)/4 if 6 < x < 10, 1 if x ≥ 10}
 Weight:
o Light: μLight(x) = {1 if x ≤ 1, (2-x) if 1 < x < 2, 0 if x ≥ 2}
o Medium: μMedium(x) = {0 if x ≤ 1, (x-1) if 1 < x < 2, (3-x) if 2 < x < 3, 0 if x ≥ 3}
o Heavy: μHeavy(x) = {0 if x ≤ 2, (x-2)/3 if 2 < x < 5, 1 if x ≥ 5}
1. For an object with fragility rating of 3.5 and weight of 1.8 kg,
calculate all membership values
2. Define the fuzzy relation R between Fragility and Weight using
the max-min composition
3. Using α-cut method with α = 0.7, determine the crisp sets for all
linguistic variables
Unit 4: Fuzzy Logic Controllers
Question 7: Design a fuzzy logic controller for a robot's path-
following system with two inputs and one output:
 Input 1: Lateral Error (LE) - distance from desired path in cm
 Input 2: Error Rate (ER) - rate of change of lateral error in cm/s
 Output: Steering Angle (SA) - correction angle in degrees
The membership functions are:
 For LE: Negative Large (NL), Negative Small (NS), Zero (Z),
Positive Small (PS), Positive Large (PL)
 For ER: Decreasing Fast (DF), Decreasing Slow (DS), Steady (S),
Increasing Slow (IS), Increasing Fast (IF)
 For SA: Left Large (LL), Left Small (LS), Zero (Z), Right Small (RS),
Right Large (RL)
With a rule base of 25 rules (abbreviated):
 IF LE is NL AND ER is DF THEN SA is RL
 IF LE is Z AND ER is S THEN SA is Z
 IF LE is PL AND ER is IF THEN SA is LL
For sensor readings of LE = 7cm and ER = -2cm/s:

1. Perform fuzzification
2. Apply the fuzzy inference rules (use max-min composition)
3. Perform defuzzification using the centroid method
4. Explain how the controller would behave in this scenario
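The sketch below walks through the Mamdani pipeline (fuzzify, min for AND, max aggregation, centroid) for Question 7. The membership-function parameters and the two rules used are assumed purely for illustration, since the question leaves the 25-rule base and the exact triangles to be designed.

import numpy as np

# Illustrative Mamdani inference for Question 7; all set parameters and the
# two rules below are assumptions, only the mechanics are the point here.
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

LE, ER = 7.0, -2.0
le_PS, le_PL = tri(LE, 0, 5, 10), tri(LE, 5, 10, 15)   # assumed LE sets (cm)
er_DS, er_S = tri(ER, -4, -2, 0), tri(ER, -1, 0, 1)    # assumed ER sets (cm/s)

sa = np.linspace(-30, 30, 601)                         # steering-angle universe (deg)
LS, Z = tri(sa, -20, -10, 0), tri(sa, -10, 0, 10)      # assumed output sets

# Assumed rules: IF LE is PS AND ER is DS THEN SA is LS;
#                IF LE is PL AND ER is S  THEN SA is Z.
agg = np.maximum(np.minimum(min(le_PS, er_DS), LS),
                 np.minimum(min(le_PL, er_S), Z))

centroid = np.sum(sa * agg) / np.sum(agg)              # centroid defuzzification
print("steering angle correction ≈", round(float(centroid), 2), "degrees")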
Question 8: A fuzzy controller for a robot's battery management system needs to be designed using the Sugeno fuzzy inference method. The system has two inputs:
 Battery Level (BL): Low, Medium, High
 Power Consumption Rate (PCR): Low, Medium, High
And one output:
 Operating Mode (OM): Emergency (z = 0), Conservative (z =
0.5), Normal (z = 1)
The membership functions for inputs are triangular, and the rule base
is:
 IF BL is Low AND PCR is High THEN OM is Emergency
 IF BL is Medium AND PCR is Medium THEN OM is Conservative
 IF BL is High AND PCR is Low THEN OM is Normal
 (Complete with 6 more rules)
For input values BL = 45% and PCR = 30%:
1. Calculate the firing strength of each rule
2. Apply the Sugeno inference method
3. Calculate the crisp output value
4. Compare the results with what would be obtained using
Mamdani inference
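A zero-order Sugeno sketch for Question 8 follows. The triangular input membership parameters are assumed (the question only states that the functions are triangular), and only the three quoted rules are evaluated; a complete answer would include all nine rules.

# Sugeno inference for Question 8: crisp output is the firing-strength-weighted
# average of the rule constants z. Membership parameters are assumed.
def tri(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

BL, PCR = 45.0, 30.0
bl = {"Low": tri(BL, 0, 25, 50), "Medium": tri(BL, 25, 50, 75), "High": tri(BL, 50, 75, 100)}
pcr = {"Low": tri(PCR, 0, 25, 50), "Medium": tri(PCR, 25, 50, 75), "High": tri(PCR, 50, 75, 100)}

rules = [                                    # (firing strength, rule constant z)
    (min(bl["Low"], pcr["High"]), 0.0),      # Emergency
    (min(bl["Medium"], pcr["Medium"]), 0.5), # Conservative
    (min(bl["High"], pcr["Low"]), 1.0),      # Normal
]
num = sum(w * z for w, z in rules)
den = sum(w for w, _ in rules)
print("operating mode output =", num / den if den else "no rule fired")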
Unit 5: Applications of Neural Network and Fuzzy Logic in Robotics

Question 9: A robotic arm uses a hybrid neuro-fuzzy system for
inverse kinematics. The arm has 2 degrees of freedom (2 revolute
joints) and needs to position its end-effector at coordinates (x,y).
The neural network component:
 Input layer: 2 neurons (x,y coordinates)
 Hidden layer: 4 neurons with tanh activation
 Output layer: 2 neurons (θ₁, θ₂ joint angles)
The fuzzy component refines the neural network output based on:
 Input 1: Prediction Error (PE): Small, Medium, Large
 Input 2: Distance from Singularity (DS): Close, Medium, Far
 Output: Correction Factor (CF): Negligible, Minor, Significant
For target position (4.2, 3.5) and neural network output θ₁ = 32°, θ₂ =
58°:
1. If the prediction error is 1.8cm and distance from singularity is
0.3: a. Calculate the fuzzy membership values b. Apply fuzzy
rules to determine correction factor c. Calculate the final joint
angles after applying correction
2. How does this hybrid approach improve the performance over
using neural networks alone?
Question 10: A mobile robot uses a CNN-based vision system
combined with a fuzzy logic controller for obstacle avoidance. The
CNN processes 32×32 grayscale images from the robot's camera to
detect obstacles.
The CNN architecture:
 Input: 32×32×1 image
 Conv1: 8 filters of size 5×5, stride 1, ReLU activation

 MaxPool1: 2×2 pooling, stride 2
 Conv2: 16 filters of size 3×3, stride 1, ReLU activation
 MaxPool2: 2×2 pooling, stride 2
 Fully Connected: 3 outputs (Left obstacle proximity, Front
obstacle proximity, Right obstacle proximity)
The fuzzy controller uses these three outputs as inputs, with outputs:
 Left Wheel Speed (LWS): Very Slow, Slow, Medium, Fast, Very
Fast
 Right Wheel Speed (RWS): Very Slow, Slow, Medium, Fast, Very
Fast
For a given input image resulting in CNN outputs [0.8, 0.2, 0.1]
(representing left, front, right obstacle proximities):
1. Calculate the firing strength of fuzzy rules
2. Determine the appropriate wheel speeds using the centroid
defuzzification method
3. Explain how this combined system handles uncertain sensor
data better than traditional methods

Additional Practice Questions for ANN & FS Exam


Unit 1: Neural Network Fundamentals
Question 11: A drone control system uses a single-layer neural
network to classify environmental conditions based on four sensor
readings: wind speed (x₁), air density (x₂), temperature (x₃), and
humidity (x₄). The drone needs to determine if it's safe to fly.

Initial weights are w₁ = 0.3, w₂ = 0.1, w₃ = -0.2, w₄ = -0.3, and bias b =
0.2. The activation function is the bipolar step function.
Training data:
 Sample 1: [0.2, 0.7, 0.5, 0.3] (Safe: t = +1)
 Sample 2: [0.8, 0.4, 0.6, 0.8] (Unsafe: t = -1)
 Sample 3: [0.3, 0.6, 0.3, 0.4] (Safe: t = +1)
 Sample 4: [0.7, 0.3, 0.7, 0.7] (Unsafe: t = -1)
Using the Hebb learning rule:
1. Calculate the final weights after training with all samples
2. Test the trained network with new input [0.4, 0.5, 0.4, 0.5]
3. Discuss the limitations of Hebbian learning for this application
Question 12: An industrial robotic sorter uses an adaptive linear
neuron (Adaline) to classify materials on a conveyor belt using three
sensor inputs: density (x₁), metallic content (x₂), and reflectivity (x₃).
Initial weights are w₁ = 0.1, w₂ = 0.1, w₃ = 0.1, and bias b = 0. The
learning rate α = 0.2.
Training data (bipolar format):
 Sample 1: [+1, -1, +1] (Metal: t = +1)
 Sample 2: [-1, -1, +1] (Plastic: t = -1)
 Sample 3: [+1, +1, -1] (Metal: t = +1)
 Sample 4: [-1, +1, -1] (Plastic: t = -1)
1. Calculate the weight changes and update weights after each
sample for one epoch
2. Calculate the Mean Squared Error (MSE) before and after
training

3. Compare the Adaline performance with a perceptron for this
application
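The Adaline (LMS / delta-rule) updates in Question 12 can be checked with the sketch below; note that the weight update uses the linear net output rather than a thresholded output.

import numpy as np

# Adaline delta-rule training for Question 12, one epoch of per-sample updates.
X = np.array([[+1, -1, +1], [-1, -1, +1], [+1, +1, -1], [-1, +1, -1]], dtype=float)
t = np.array([+1, -1, +1, -1], dtype=float)
w = np.array([0.1, 0.1, 0.1])
b, alpha = 0.0, 0.2

def mse():
    return float(np.mean((t - (X @ w + b)) ** 2))

print("MSE before training:", round(mse(), 4))
for x, target in zip(X, t):
    y = w @ x + b                  # linear output used by the LMS rule
    w += alpha * (target - y) * x
    b += alpha * (target - y)
    print("w =", np.round(w, 3), " b =", round(b, 3))
print("MSE after one epoch:", round(mse(), 4))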
Unit 2: Back Propagation
Question 13: A robot's vision system uses a CNN to identify objects.
The system processes a 6×6 input image with the following
architecture:
 Convolutional layer: One 3×3 filter with stride 1 and zero
padding
 ReLU activation function
 Max pooling layer: 2×2 filter with stride 2
 Fully connected layer: 4 neurons with softmax activation
Given the input image:
[
[0.1, 0.2, 0.0, 0.1, 0.3, 0.1],
[0.0, 0.8, 0.7, 0.6, 0.2, 0.0],
[0.1, 0.7, 0.9, 0.8, 0.3, 0.1],
[0.0, 0.6, 0.8, 0.7, 0.2, 0.0],
[0.1, 0.3, 0.2, 0.3, 0.1, 0.0],
[0.0, 0.1, 0.0, 0.1, 0.0, 0.0]
]
And the filter weights:
[
[0.1, 0.2, -0.1],
[-0.1, 0.3, 0.2],
[0.0, -0.2, 0.1]
]
1. Compute the output after convolution (show calculations for at
least 4 positions)
2. Apply ReLU activation
3. Perform max pooling
4. If the weights of the fully connected layer are [0.1, 0.2, -0.1,
0.3], [0.2, 0.1, 0.3, -0.2], [0.3, -0.1, 0.2, 0.1], [0.1, 0.3, 0.2, -0.1]
and biases are [0.1, 0.2, 0.1, 0.3], calculate the final output
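A sketch of the convolution → ReLU → max-pool → fully connected pipeline for Question 13 follows. "Zero padding" is interpreted here as padding = 0, which keeps the dimensions consistent with the 4-input fully connected layer (6×6 → 4×4 → 2×2).

import numpy as np

# Question 13 pipeline; the convolution is implemented as cross-correlation
# (no kernel flip), as is standard in CNN frameworks.
image = np.array([
    [0.1, 0.2, 0.0, 0.1, 0.3, 0.1],
    [0.0, 0.8, 0.7, 0.6, 0.2, 0.0],
    [0.1, 0.7, 0.9, 0.8, 0.3, 0.1],
    [0.0, 0.6, 0.8, 0.7, 0.2, 0.0],
    [0.1, 0.3, 0.2, 0.3, 0.1, 0.0],
    [0.0, 0.1, 0.0, 0.1, 0.0, 0.0],
])
kernel = np.array([[0.1, 0.2, -0.1], [-0.1, 0.3, 0.2], [0.0, -0.2, 0.1]])

conv = np.array([[np.sum(image[i:i + 3, j:j + 3] * kernel) for j in range(4)]
                 for i in range(4)])           # valid convolution, stride 1
relu = np.maximum(conv, 0.0)
pooled = np.array([[relu[i:i + 2, j:j + 2].max() for j in (0, 2)]
                   for i in (0, 2)])           # 2x2 max pooling, stride 2

W_fc = np.array([[0.1, 0.2, -0.1, 0.3],
                 [0.2, 0.1, 0.3, -0.2],
                 [0.3, -0.1, 0.2, 0.1],
                 [0.1, 0.3, 0.2, -0.1]])
b_fc = np.array([0.1, 0.2, 0.1, 0.3])

logits = W_fc @ pooled.flatten() + b_fc
softmax = np.exp(logits - logits.max()) / np.sum(np.exp(logits - logits.max()))
print("pooled feature map:\n", pooled)
print("softmax output:", np.round(softmax, 3))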
Question 14: A recurrent neural network (RNN) is used to predict the
next position of a moving object in a robot's field of view. The
network has:
 Input dimension: 2 (x, y coordinates)
 Hidden state dimension: 3
 Output dimension: 2 (predicted next x, y coordinates)
Given the following parameters:
 Weight matrix for input to hidden: Wxh = [[0.1, 0.2], [0.3, -0.1],
[0.2, 0.1]]
 Weight matrix for hidden to hidden: Whh = [[0.4, -0.2, 0.1],
[0.1, 0.5, -0.3], [-0.2, 0.2, 0.3]]
 Weight matrix for hidden to output: Why = [[0.3, 0.2, -0.1], [0.1,
-0.3, 0.2]]
 Bias for hidden: bh = [0.1, 0.2, -0.1]
 Bias for output: by = [0.1, -0.1]
 Activation function: tanh
 Initial hidden state: h₀ = [0, 0, 0]
For a sequence of inputs: x₁ = [0.5, 0.3], x₂ = [0.6, 0.4]:

1. Calculate hidden state h₁ and output y₁ after processing the first
input x₁
2. Calculate hidden state h₂ and output y₂ after processing the
second input x₂
3. If the true next positions were [0.7, 0.5] and [0.8, 0.6], calculate
the mean squared error
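The forward pass for Question 14 can be checked with the sketch below; a linear output layer is assumed, since the question does not specify an output activation.

import numpy as np

# Simple RNN forward pass for Question 14:
# h_t = tanh(Wxh·x_t + Whh·h_{t-1} + bh),  y_t = Why·h_t + by (linear output assumed).
Wxh = np.array([[0.1, 0.2], [0.3, -0.1], [0.2, 0.1]])
Whh = np.array([[0.4, -0.2, 0.1], [0.1, 0.5, -0.3], [-0.2, 0.2, 0.3]])
Why = np.array([[0.3, 0.2, -0.1], [0.1, -0.3, 0.2]])
bh = np.array([0.1, 0.2, -0.1])
by = np.array([0.1, -0.1])

h = np.zeros(3)
inputs = [np.array([0.5, 0.3]), np.array([0.6, 0.4])]
targets = [np.array([0.7, 0.5]), np.array([0.8, 0.6])]

errors = []
for step, (x, t) in enumerate(zip(inputs, targets), start=1):
    h = np.tanh(Wxh @ x + Whh @ h + bh)
    y = Why @ h + by
    errors.append(np.mean((t - y) ** 2))
    print(f"step {step}: h = {np.round(h, 4)}, y = {np.round(y, 4)}")
print("MSE over the sequence:", round(float(np.mean(errors)), 5))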
Unit 3: Fuzzy Set and Fuzzy Logic
Question 15: A robot's vision system uses fuzzy logic to classify
objects based on three attributes:
 Size (S): Small, Medium, Large
 Weight (W): Light, Medium, Heavy
 Texture (T): Smooth, Rough, Very Rough
Given the following membership functions for an object:
 Size: μSmall(object) = 0.2, μMedium(object) = 0.6, μLarge(object) = 0.3
 Weight: μLight(object) = 0.1, μMedium(object) = 0.7, μHeavy(object) = 0.4
 Texture: μSmooth(object) = 0.3, μRough(object) = 0.8, μVery Rough(object) = 0.2

1. Calculate the fuzzy set operations: a. (Medium Size) ∩ (Medium Weight) b. (Small Size) ∪ (Heavy Weight) c. NOT(Rough Texture) d. (Large Size) × (Light Weight)


2. If object classification rules are defined using the composition
of these fuzzy relations, calculate the membership value for
classifying this object as a "ball" using max-min composition if:
o R₁(Size, Weight) is given as [[0.2, 0.7, 0.3], [0.8, 0.5, 0.2],
[0.4, 0.3, 0.6]]

o R₂(Weight, Texture) is given as [[0.6, 0.3, 0.1], [0.2, 0.8,
0.3], [0.1, 0.4, 0.9]]
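Max-min composition, needed for part 2, can be computed with the short sketch below: (R₁ ∘ R₂)[i, j] = max over k of min(R₁[i, k], R₂[k, j]).

import numpy as np

# Max-min composition of the two relations given in Question 15.
R1 = np.array([[0.2, 0.7, 0.3], [0.8, 0.5, 0.2], [0.4, 0.3, 0.6]])  # Size x Weight
R2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.8, 0.3], [0.1, 0.4, 0.9]])  # Weight x Texture

def max_min(A, B):
    return np.array([[np.max(np.minimum(A[i, :], B[:, j]))
                      for j in range(B.shape[1])] for i in range(A.shape[0])])

print("R1 o R2 (Size x Texture):\n", max_min(R1, R2))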
Question 16: A mobile robot uses fuzzy sets to represent linguistic
terms for navigation. The universe of discourse for:
 Distance (D): [0, 100] meters
 Angle (A): [-90, 90] degrees
 Speed (S): [0, 5] m/s
Define the following fuzzy sets using appropriate membership
functions:
1. For Distance:
o "Near": triangular (0, 20, 40)
o "Medium": triangular (20, 50, 80)
o "Far": triangular (60, 100, 100)
2. For Angle:
o "Hard Left": triangular (-90, -90, -30)
o "Soft Left": triangular (-60, -30, 0)
o "Straight": triangular (-20, 0, 20)
o "Soft Right": triangular (0, 30, 60)
o "Hard Right": triangular (30, 90, 90)
3. For Speed:
o "Slow": triangular (0, 0, 2)
o "Medium": triangular (1, 2.5, 4)
o "Fast": triangular (3, 5, 5)
Given sensor readings: D = 35m, A = -25°, S = 2.2m/s:

1. Calculate membership values for all linguistic variables
2. Find the α-cuts for α = 0.5 for each fuzzy set with non-zero
membership
3. Calculate the height, support, and core of the "Medium
Distance" fuzzy set
Unit 4: Fuzzy Logic Controllers
Question 17: Design a Sugeno-type fuzzy controller for a drone's
altitude control system with:
 Input 1: Altitude Error (AE) - difference between desired and
actual altitude in meters
 Input 2: Vertical Velocity (VV) - rate of change of altitude in m/s
 Output: Thrust Adjustment (TA) - adjustment to the motor
power
The membership functions for inputs are:
 AE: Negative Large (NL), Negative Small (NS), Zero (Z), Positive
Small (PS), Positive Large (PL)
 VV: Descending Fast (DF), Descending Slow (DS), Stable (S),
Ascending Slow (AS), Ascending Fast (AF)
The output functions for TA are:
 z₁ = -0.8 (Large Decrease)
 z₂ = -0.4 (Small Decrease)
 z₃ = 0 (No Change)
 z₄ = 0.4 (Small Increase)
 z₅ = 0.8 (Large Increase)
Partial rule base:

 IF AE is NL AND VV is DF THEN TA is z₅
 IF AE is Z AND VV is S THEN TA is z₃
 IF AE is PL AND VV is AF THEN TA is z₁
For input values AE = -2.5m and VV = 0.8m/s:
1. Calculate the membership values for all linguistic variables
2. Determine which rules are fired and their firing strengths
3. Calculate the crisp output using weighted average
defuzzification
4. Explain how this controller helps stabilize the drone's altitude
Question 18: A robot arm's fuzzy controller for grasping objects
needs to be designed. The system has three inputs:
 Object Size (OS): Small, Medium, Large
 Object Fragility (OF): Fragile, Medium, Robust
 Object Weight (OW): Light, Medium, Heavy
And two outputs:
 Gripper Force (GF): Very Gentle, Gentle, Medium, Firm, Very
Firm
 Gripper Speed (GS): Very Slow, Slow, Medium, Fast, Very Fast
Using Mamdani inference:
1. Design appropriate membership functions for all inputs and
outputs
2. Create a rule base with at least 10 rules that would ensure safe
grasping of different object types

3. For an object with OS = 4.2cm, OF = 7.5 (on scale 1-10), and OW
= 1.8kg: a. Perform fuzzification b. Apply inference rules c.
Perform defuzzification using centroid method
4. Discuss how this fuzzy controller handles uncertainty better
than a traditional controller
Unit 5: Applications of Neural Network and Fuzzy Logic in Robotics
Question 19: A robot manipulator uses a neuro-fuzzy system for
trajectory planning and obstacle avoidance. The system consists of:
1. A neural network that predicts the optimal trajectory based on:
o Input: Start position (x₁, y₁, z₁), goal position (x₂, y₂, z₂), and
obstacle information
o Hidden layers: Two hidden layers with 10 and 6 neurons
respectively
o Output: A series of waypoints representing the trajectory
2. A fuzzy system that refines the trajectory based on:
o Input 1: Proximity to Obstacles (PO)
o Input 2: Energy Efficiency (EE)
o Input 3: Execution Time (ET)
o Output: Trajectory Adjustment (TA)
Given a scenario where:
 Neural network generates a trajectory with 5 waypoints
 Obstacle proximity sensor detects an unexpected obstacle with
PO = 0.8 (high)
 Energy efficiency is calculated as EE = 0.6 (medium)
 Execution time is ET = 0.3 (good/low)

1. Describe how the fuzzy system would adjust the trajectory
2. Design an appropriate rule base for this scenario (minimum 8
rules)
3. If the trajectory adjustment is calculated as 0.7 (significant
adjustment): a. Explain how the new trajectory would differ
from the original b. Calculate the adjusted waypoints if the
adjustment applies a vector of [0.2, -0.3, 0.1] to each point
4. Discuss the advantages of this hybrid approach compared to
pure neural network or pure fuzzy systems
Question 20: A humanoid robot uses a combination of CNN and RNN
for human gesture recognition, coupled with a fuzzy controller for
response generation. The vision system processes image sequences
through:
1. CNN component:
o Input: 64×64×3 RGB images
o Feature extraction layers: 3 convolutional blocks with [16,
32, 64] filters
o Output: 128-dimensional feature vector
2. RNN component:
o Input: Sequence of feature vectors from CNN
o Architecture: LSTM with 64 hidden units
o Output: Gesture classification (7 classes of gestures)
3. Fuzzy response controller:
o Input 1: Gesture Confidence (GC)
o Input 2: Human Proximity (HP)
o Input 3: Environmental Context (EC)

o Output: Response Type (RT) and Response Intensity (RI)
For a scenario where:
 The system recognizes a "wave hello" gesture with confidence
0.85
 Human proximity is 2.3 meters
 Environmental context is "informal meeting" with certainty 0.7
1. Design a fuzzy rule base for determining appropriate robot
responses
2. Calculate the fuzzy inference process for this scenario
3. If the defuzzified outputs are RT = 0.8 (verbal + physical
response) and RI = 0.6 (medium intensity): a. Describe the
specific robot response that would be generated b. Explain how
changing the environmental context would affect the response
4. Implement a simplified backpropagation process to show how
the system could learn from human feedback
Question 21: A drone swarm uses distributed neural networks and
fuzzy logic for coordinated search and rescue operations. Each drone
has:
1. Local neural network:
o Input: Local sensor data (camera, thermal, lidar)
o Output: Local decision variables (search priority map)
2. Swarm communication network:
o Architecture: Graph Neural Network (GNN)
o Purpose: Information sharing and coordination
3. Fuzzy decision system:
o Input 1: Target Detection Confidence (TDC)

o Input 2: Battery Status (BS)
o Input 3: Communication Quality (CQ)
o Output: Role Assignment (RA) and Action Priority (AP)
For a scenario where:
 Drone A detects a potential target with TDC = 0.7
 Drone A's battery status is BS = 0.4 (40% remaining)
 Communication quality is CQ = 0.8 (good)
 Neighboring drones B and C report TDC values of 0.3 and 0.5
respectively
1. Design the fuzzy inference system for role assignment
2. Calculate the GNN-based information fusion process
3. Determine the optimal drone behavior using the fuzzy
controller
4. Discuss how this hybrid neural-fuzzy approach enables robust
decision-making under uncertainty
Question 22: A robotic exoskeleton uses a real-time adaptive neural
network with fuzzy feedback for assisting human movements. The
system includes:
1. EMG sensor processing neural network:
o Input: 8-channel EMG signals from human muscles
o Architecture: 1D CNN followed by LSTM
o Output: Predicted movement intention
2. Fuzzy controller for assistance level:
o Input 1: Movement Confidence (MC)
o Input 2: User Effort Level (UE)

o Input 3: Movement Phase (MP)
o Output: Assistance Force (AF) and Impedance Level (IL)
3. Adaptive mechanism:
o Reinforcement learning component that adjusts
assistance based on user feedback
For a scenario where:
 EMG signals indicate an intention to stand up with MC = 0.85
 User effort is measured as UE = 0.6 (moderate effort)
 Movement phase is MP = 0.2 (early phase)
1. Design the fuzzy rule base for determining appropriate
assistance
2. Calculate the crisp outputs using Mamdani inference
3. If the system receives negative feedback from the user: a.
Implement an adaptive update to the fuzzy membership
functions b. Recalculate the outputs with the updated system
4. Explain how this neural-fuzzy-reinforcement hybrid approach
provides personalized assistance
Question 23: An autonomous underwater vehicle (AUV) uses a deep
reinforcement learning system with fuzzy reward shaping for complex
navigation tasks. The system consists of:
1. Deep Q-Network (DQN):
o State space: Current position, orientation, velocity, sensor
readings
o Action space: Thrust, rudder, and depth control
adjustments
o Reward: Shaped by fuzzy logic system

2. Fuzzy reward shaping mechanism:
o Input 1: Goal Progress (GP)
o Input 2: Energy Efficiency (EE)
o Input 3: Safety Margin (SM)
o Output: Reward Multiplier (RM)
For a training scenario where:
 The AUV makes progress toward goal with GP = 0.7
 Energy usage is inefficient with EE = 0.3
 Safety margin from obstacles is SM = 0.5
1. Design appropriate fuzzy membership functions for all variables
2. Create a rule base that would shape the rewards effectively
3. Calculate the fuzzy-shaped reward for this scenario
4. Demonstrate how this approach improves training stability
compared to traditional fixed-reward DQN

More Advanced Practice Questions for ANN & FS Exam


Unit 1: Neural Network Fundamentals
Question 24: A mobile security robot uses a competitive learning
neural network (Self-Organizing Map) to classify environmental
conditions into security threat levels. The network receives input
from four sensors: motion detection (x₁), sound level (x₂), heat
signature (x₃), and light level (x₄).
The SOM has a 3×3 grid of neurons with initial weight vectors:
 w₁ = [0.2, 0.3, 0.1, 0.4]

 w₂ = [0.5, 0.1, 0.2, 0.3]
 w₃ = [0.3, 0.4, 0.5, 0.1]
 w₄ = [0.1, 0.2, 0.4, 0.3]
 w₅ = [0.4, 0.5, 0.3, 0.2]
 w₆ = [0.2, 0.3, 0.5, 0.4]
 w₇ = [0.5, 0.4, 0.2, 0.1]
 w₈ = [0.3, 0.1, 0.4, 0.5]
 w₉ = [0.1, 0.5, 0.2, 0.3]
For the input vector x = [0.7, 0.2, 0.6, 0.3] and learning rate α = 0.2:
1. Identify the winning neuron (BMU)
2. Update the weights of the BMU
3. If the neighborhood function includes the adjacent neurons
(assuming a grid topology), update their weights as well
4. Discuss how this SOM would develop specialized neurons for
different security threat patterns after extensive training
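A sketch of BMU selection and the weight update for Question 24 follows; the neurons w₁–w₉ are assumed to be laid out row-wise on the 3×3 grid, and the half-strength neighbour update is one simple choice of neighbourhood weighting.

import numpy as np

# SOM step for Question 24: pick the best-matching unit (BMU) by Euclidean
# distance, then move the BMU and its 4-connected grid neighbours toward x.
W = np.array([[0.2, 0.3, 0.1, 0.4], [0.5, 0.1, 0.2, 0.3], [0.3, 0.4, 0.5, 0.1],
              [0.1, 0.2, 0.4, 0.3], [0.4, 0.5, 0.3, 0.2], [0.2, 0.3, 0.5, 0.4],
              [0.5, 0.4, 0.2, 0.1], [0.3, 0.1, 0.4, 0.5], [0.1, 0.5, 0.2, 0.3]])
x = np.array([0.7, 0.2, 0.6, 0.3])
alpha = 0.2

d = np.linalg.norm(W - x, axis=1)
bmu = int(np.argmin(d))
print("BMU: neuron", bmu + 1, " distance:", round(float(d[bmu]), 4))

r, c = divmod(bmu, 3)                       # BMU position on the 3x3 grid
for k in range(9):
    rk, ck = divmod(k, 3)
    grid_dist = abs(rk - r) + abs(ck - c)
    if grid_dist == 0:
        W[k] += alpha * (x - W[k])          # full update for the BMU
    elif grid_dist == 1:
        W[k] += 0.5 * alpha * (x - W[k])    # assumed half-strength for neighbours
print("updated BMU weights:", np.round(W[bmu], 3))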
Question 25: An intelligent prosthetic hand uses a Discrete Hopfield
Network to recognize grasp patterns from noisy EMG signals. The
network stores the following four fundamental grasp patterns
(represented as 8-dimensional bipolar vectors):
 Precision grip: [+1, +1, -1, -1, +1, -1, +1, -1]
 Power grasp: [+1, +1, +1, -1, -1, -1, +1, +1]
 Key grip: [+1, -1, +1, +1, -1, +1, -1, -1]
 Tripod grip: [-1, +1, +1, +1, +1, -1, -1, -1]
1. Calculate the weight matrix using the Hebbian learning rule
without self-connections

2. Verify that all four patterns are stable
3. If a noisy pattern [+1, +1, -1, +1, +1, -1, +1, -1] is presented to
the network: a. Identify which fundamental pattern it most
closely resembles b. Demonstrate the network dynamics using
synchronous updating for 3 iterations c. Determine if the
network converges to a stored pattern or to a spurious state
4. Calculate the storage capacity of this network and discuss its
limitations for prosthetic control
Unit 2: Back Propagation
Question 26: A deep learning system for robot force control uses a
custom neural network architecture with the following components:
 Input layer: 6 neurons (end-effector position, orientation)
 First hidden layer: 8 neurons with ReLU activation
 Second hidden layer: 12 neurons with tanh activation
 Attention mechanism layer: 4-head self-attention
 Output layer: robot joint torques (6 neurons)
Given the following:
 Training sample x = [0.2, 0.3, 0.5, 0.1, -0.2, 0.4], target y = [0.3, -
0.2, 0.5, 0.1, -0.3, 0.2]
 Loss function: Mean Squared Error
 Learning rate: η = 0.01
 Regularization: L2 with λ = 0.001
1. Describe the forward pass through this network architecture,
including the self-attention mechanism
2. Calculate the gradient of the loss with respect to the weights of
the second hidden layer

3. Implement weight updates using Adam optimization for one
iteration
4. Explain how this architecture handles the spatial relationships
between inputs better than a standard feedforward network
Question 27: A robot employs a recurrent neural network (LSTM) for
time-series prediction of joint torques. The architecture is:
 Input dimension: 6 (current joint positions and velocities)
 LSTM cell size: 8
 Output dimension: 6 (predicted joint torques)
Given the following parameters:
 Forget gate weights: Wf = [[...6×8 matrix...]] (assume
appropriate values)
 Input gate weights: Wi = [[...6×8 matrix...]] (assume appropriate
values)
 Cell state weights: Wc = [[...6×8 matrix...]] (assume appropriate
values)
 Output gate weights: Wo = [[...6×8 matrix...]] (assume
appropriate values)
 Output weights: Wy = [[...8×6 matrix...]] (assume appropriate
values)
 All biases are 0 for simplicity
For an input sequence x₁ = [0.1, 0.2, 0.3, -0.1, -0.2, -0.3], x₂ = [0.15,
0.25, 0.35, -0.15, -0.25, -0.35]:
1. Calculate the LSTM cell outputs for the sequence (show
detailed calculations for all gates)
2. Apply backpropagation through time (BPTT) for one step

3. If the predicted torques deviate from actual values by 15%,
discuss potential improvements to the architecture
Unit 3: Fuzzy Set and Fuzzy Logic
Question 28: A robot perception system uses type-2 fuzzy sets to
handle uncertainty in sensor readings. The system processes data
from three sensors:
 LIDAR distance measurement (L)
 Infrared proximity sensor (I)
 Ultrasonic range finder (U)
Each sensor reading has both primary and secondary membership
functions to account for measurement uncertainty.
For LIDAR distance, the type-2 fuzzy set "Close" is defined as:
 Primary membership function: triangular (0, 0, 2)
 Secondary membership functions:
o Lower bound: triangular (0, 0, 1.8)
o Upper bound: triangular (0, 0, 2.2)
For a LIDAR reading of 1.5 meters:
1. Calculate the primary membership value
2. Calculate the footprint of uncertainty (FOU)
3. If similar type-2 fuzzy sets are defined for the other sensors,
compute the type-2 fuzzy intersection of "Close" readings from
all three sensors
4. Perform type reduction to obtain a type-1 fuzzy set
5. Explain how type-2 fuzzy logic handles sensor uncertainty
better than type-1

Question 29: A robot learning system employs fuzzy cognitive maps
(FCM) to model causal relationships between variables in a
navigation task. The FCM has the following concepts:
 C₁: Obstacle density
 C₂: Path complexity
 C₃: Battery level
 C₄: Movement speed
 C₅: Risk level
 C₆: Navigation performance
The weight matrix representing causal relationships is:
W=[
[0.0, 0.8, 0.0, -0.6, 0.7, -0.5],
[0.0, 0.0, 0.0, -0.7, 0.6, -0.6],
[0.0, 0.0, 0.0, 0.5, -0.4, 0.3],
[0.0, 0.0, -0.3, 0.0, -0.2, 0.7],
[0.0, 0.0, 0.0, -0.8, 0.0, -0.9],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
]
Initial concept values: A₀ = [0.7, 0.5, 0.8, 0.6, 0.3, 0.0]
1. Use the FCM inference formula Aᵗ⁺¹ = f(Aᵗ · W) with the sigmoid
function f(x) = 1/(1+e⁻ᵏˣ) where k = 2
2. Calculate the concept values after 3 iterations
3. Analyze the steady-state behavior of the system
4. Explain how this FCM models the complex relationships
between environmental factors and navigation performance
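The three FCM iterations in Question 29 can be reproduced with the sketch below, which follows the inference formula exactly as stated (the next activation depends only on Aᵗ · W).

import numpy as np

# Fuzzy cognitive map inference for Question 29:
# A(t+1) = f(A(t) · W), f(x) = 1 / (1 + exp(-k·x)), k = 2.
W = np.array([
    [0.0, 0.8, 0.0, -0.6, 0.7, -0.5],
    [0.0, 0.0, 0.0, -0.7, 0.6, -0.6],
    [0.0, 0.0, 0.0, 0.5, -0.4, 0.3],
    [0.0, 0.0, -0.3, 0.0, -0.2, 0.7],
    [0.0, 0.0, 0.0, -0.8, 0.0, -0.9],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
])
A = np.array([0.7, 0.5, 0.8, 0.6, 0.3, 0.0])
k = 2.0

for t in range(3):
    A = 1.0 / (1.0 + np.exp(-k * (A @ W)))
    print(f"A after iteration {t + 1}:", np.round(A, 3))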
Unit 4: Fuzzy Logic Controllers
Question 30: Design an adaptive neuro-fuzzy inference system
(ANFIS) for a robot's energy management system. The system has
two inputs:
 Task Complexity (TC): Low, Medium, High
 Available Energy (AE): Critical, Low, Sufficient, High
And one output:
 Power Mode (PM): Ultra-saving, Energy-saving, Balanced,
Performance, High-performance
The initial rule base consists of 12 rules (example):
 IF TC is Low AND AE is Critical THEN PM is Ultra-saving
 IF TC is High AND AE is High THEN PM is High-performance
Given training data of 50 input-output pairs from optimal human
operators:
1. Design the ANFIS architecture with appropriate layers
2. Explain the learning procedure for tuning membership function
parameters
3. If after training, the membership function for "Medium" Task
Complexity changes from triangular (3, 5, 7) to triangular (2.5,
4.5, 6.5), analyze the impact on system behavior
4. Compare the performance of ANFIS with a pure fuzzy system
and a pure neural network for this application
Question 31: A multi-robot coordination system uses a hierarchical
fuzzy controller for task allocation. The system has:
Top-level controller:
 Input 1: Task Priority (TP): Low, Medium, High, Critical

 Input 2: System Load (SL): Light, Medium, Heavy, Extreme
 Output: Resource Allocation (RA): Minimal, Conservative,
Standard, Enhanced, Maximum
Lower-level controller for each robot:
 Input 1: Robot Capability (RC): Basic, Standard, Advanced
 Input 2: Resource Allocation (RA): from top-level
 Input 3: Current Battery (CB): Critical, Low, Medium, High
 Output 1: Task Acceptance (TA): Reject, Conditional, Accept
 Output 2: Execution Mode (EM): Power-saving, Standard,
Performance
For a critical task with system load "Heavy", robot capability
"Advanced", and battery level "Medium":
1. Calculate the cascaded fuzzy inference process through both
controllers
2. Design an appropriate rule base for both levels (at least 8 rules
each)
3. Implement rule reduction techniques to optimize
computational efficiency
4. Compare this hierarchical approach with a single large fuzzy
controller
Unit 5: Applications of Neural Network and Fuzzy Logic in Robotics
Question 32: A robotic system for surgical assistance uses a deep
reinforcement learning approach with fuzzy state representation. The
system includes:
1. State representation:

o End-effector position relative to target (fuzzy sets: Very
Far, Far, Medium, Close, Very Close)
o End-effector orientation error (fuzzy sets: Large Error,
Medium Error, Small Error, Aligned)
o Tissue deformation (fuzzy sets: None, Slight, Moderate,
Significant)
2. Deep Q-Network:
o Input: Fuzzified state (13 membership values)
o Hidden layers: 3 layers with [64, 32, 16] neurons
o Output: Q-values for 9 possible actions (combinations of
movement directions)
3. Reward function:
o Shaped by a fuzzy logic system based on movement
precision and safety
For a given state with membership values:
 Position: [0.0, 0.3, 0.7, 0.0, 0.0]
 Orientation: [0.0, 0.2, 0.8, 0.0]
 Deformation: [0.6, 0.4, 0.0, 0.0]
1. Calculate the Q-values using the neural network (assume
appropriate weights)
2. Determine the optimal action using ε-greedy policy with ε = 0.1
3. If the action results in a new state with increased precision but
slightly more tissue deformation, calculate the fuzzy reward
4. Implement experience replay and target network updates for
stable learning

Question 33: A multi-modal robotic perception system integrates
visual and tactile sensing using a neural-fuzzy architecture. The
system consists of:
1. Vision module:
o CNN for object detection and feature extraction
o Output: Object class probabilities and feature vector
2. Tactile module:
o LSTM network processing temporal tactile sensor data
o Output: Surface texture features and compliance
estimation
3. Fusion module:
o Fuzzy inference system combining visual and tactile
information
o Rules addressing sensor reliability and complementary
information
For an object with:
 Visual classification: "Metal Cup" (0.7), "Plastic Cup" (0.3)
 Visual features: Cylindrical (0.9), Reflective (0.8)
 Tactile features: Smooth (0.9), Hard (0.7), Cold (0.8)
1. Design a fuzzy rule base for multimodal perception fusion
2. Calculate the confidence in object material classification
3. If vision is partially occluded (reliability = 0.5) while tactile
sensing is reliable (reliability = 0.9), recalculate the fusion
results
4. Explain how this approach handles conflicting sensory
information

Question 34: A legged robot uses a neuro-evolutionary fuzzy system
for adaptive gait control on varying terrain. The system includes:
1. Terrain classifier:
o CNN processing depth camera input
o Output: Terrain type probabilities (Flat, Gravel, Sand, Mud,
Rocks)
2. Gait controller:
o Evolutionary neural network with encoded gait parameters
o Fuzzy supervisor for online adaptation
3. Adaptation mechanism:
o Population of gait parameters evolved through genetic
algorithm
o Fitness evaluation through fuzzy rules considering stability,
energy efficiency, and speed
For a terrain classified as:
 Flat (0.1), Gravel (0.7), Sand (0.2), Mud (0.0), Rocks (0.0)
1. Design the fuzzy rule base for gait adaptation
2. Implement the genetic algorithm to evolve gait parameters
(mutation, crossover, selection)
3. Calculate the fitness of a candidate solution using the fuzzy
evaluation system
4. Explain how this approach enables real-time adaptation to
changing terrain conditions
Question 35: A robot swarm uses distributed deep learning with
fuzzy communication protocols for collective mapping of an unknown
environment. The system includes:

1. Individual perception:
o CNN-LSTM architecture for processing sensor data
o Output: Local occupancy grid and uncertainty measures
2. Swarm communication:
o Fuzzy protocol determining when and what to
communicate
o Based on information novelty, confidence, and energy
constraints
3. Map fusion:
o Graph Neural Network aggregating distributed
observations
o Belief update using fuzzy logic to handle conflicting
information
For a scenario with 5 robots exploring an environment:
 Robot A detects a corridor with 0.8 confidence
 Robot B detects a wall in the same area with 0.7 confidence
 Other robots have no information about this area
1. Design the fuzzy communication protocol
2. Calculate when and what the robots should communicate
3. Implement the map fusion process using the GNN and fuzzy belief update
4. Analyze the energy-accuracy tradeoff of this distributed approach
Question 36: A humanoid robot uses a quantum-inspired neural
fuzzy system for learning complex manipulation tasks through
demonstration. The system consists of:

1. Motion capture:
o Vision system tracking human demonstrator
o Output: Joint angles and end-effector trajectories
2. Skill representation:
o Quantum-inspired neural network encoding probabilistic
motion primitives
o Fuzzy temporal segmentation of demonstrations
3. Execution controller:
o Fuzzy inference system adapting learned skills to current
context
o Online adaptation using sensor feedback
For a task demonstration of pouring liquid from a container:
1. Design the fuzzy temporal segmentation system
2. Implement the quantum-inspired neural representation of the
skill
3. Calculate the adaptation required when executing with a
different container
4. Explain how this approach handles uncertainty in both learning
and execution phases
Question 37: A robot exploration system uses reinforcement learning
with intrinsic motivation guided by fuzzy curiosity measures. The
system includes:
1. External reward:
o Sparse rewards for discovering new areas or resources
2. Intrinsic motivation:

o Fuzzy measures of novelty, surprise, and learning progress
o Combined into a curiosity-driven exploration bonus
3. Policy learning:
o Actor-Critic architecture with separate networks for
external and intrinsic value estimation
For a robot exploring an unknown environment:
1. Design membership functions for novelty, surprise, and learning
progress
2. Calculate the intrinsic reward for three scenarios: a. Discovering
a completely new type of terrain b. Encountering a slightly
different version of a known object c. Revisiting a known area
with different lighting conditions
3. Implement the policy update mechanism combining external
and intrinsic rewards
4. Discuss how this approach balances exploration and
exploitation better than standard RL methods

Additional Practice Questions for ANN & FS Exam (Covering Remaining Topics)

Unit 1: Neural Network Fundamentals - Convergence Theorem &
Network Topologies
Question 38: A warehouse management robot uses a competitive
learning network to organize inventory into clusters based on
product attributes. The network architecture follows Kohonen's Self-
Organizing Map (SOM) with a 4×4 grid of neurons.

Each product is represented by a 5-dimensional feature vector: [size,
weight, fragility, demand frequency, storage temperature].
Given:
 Initial neighborhood radius = 2
 Learning rate α(0) = 0.5
 Learning rate decay function: α(t) = α(0) * exp(-t/100)
 Neighborhood function: Gaussian h(d) = exp(-d²/2σ²)
1. Explain how the SOM would organize products with similar
attributes into nearby locations on the grid
2. Derive the weight update rule for a neuron at position (i,j) when
a training vector x is presented
3. Analyze the convergence properties of this SOM as t → ∞
4. Design a visualization technique to help warehouse operators
understand the resulting product organization
Question 39: A bipedal walking robot uses a cascade-correlation
neural network architecture that dynamically adds hidden neurons
during training. The initial network has:
 6 input neurons (joint angles and angular velocities)
 No hidden neurons initially
 2 output neurons (predicted stability metrics)
1. Explain the cascade-correlation learning algorithm and how it
differs from backpropagation
2. Derive the weight update equations for the new candidate
hidden neuron
3. Using the convergence theorem, analyze under what conditions
this network will successfully learn stable walking patterns

4. If the network adds 5 hidden neurons during training, draw the
final network topology and explain the function of each hidden
neuron
Unit 2: Fully Recurrent Networks & Special Activation Functions
Question 40: A fully recurrent neural network (RNN) is implemented
for a robot's dynamic state estimation. The network has:
 4 input neurons (sensor readings)
 A fully connected recurrent layer with 6 neurons
 3 output neurons (estimated state variables)
All neurons use the hyperbolic tangent activation function: f(x) =
tanh(x).
Given:
 Weight matrix for recurrent connections: Wrec (6×6 matrix)
 Weight matrix for input connections: Win (4×6 matrix)
 Weight matrix for output connections: Wout (6×3 matrix)
 Initial state of recurrent neurons: h₀ = [0.1, -0.2, 0.3, -0.1, 0.2, -
0.3]
 Input sequence: x₁ = [0.5, -0.3, 0.2, 0.1], x₂ = [0.6, -0.2, 0.3, 0.0]
1. Write the mathematical equations for forward propagation
through this network
2. Implement the recurrent backpropagation through time (BPTT)
algorithm
3. Analyze the vanishing and exploding gradient problems in this
fully recurrent architecture
4. Compare this fully recurrent architecture with LSTM networks
for robotic state estimation

Question 41: A robotic control system uses a specialized neural
network with various activation functions to handle different aspects
of motor control. The network has:
 Input layer: 8 neurons (current state and desired state)
 First hidden layer: 10 neurons with Leaky ReLU activation
 Second hidden layer: 6 neurons with Swish activation (f(x) = x *
sigmoid(x))
 Output layer: 4 neurons with different activations:
o Neurons 1-2: tanh activation (normalized torque values)
o Neuron 3: sigmoid activation (gripper control)
o Neuron 4: linear activation (speed control)
1. Derive the gradients for each activation function for
backpropagation
2. Explain why different activation functions are suitable for
different control outputs
3. For a control precision requirement of ±0.01, analyze the
impact of activation function choice on control accuracy
4. Design a custom activation function that would optimize the
trade-off between control precision and smoothness
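The activation-function gradients asked for in Question 41 are sketched below; the Leaky ReLU negative-side slope of 0.01 is an assumed, commonly used value.

import numpy as np

# Derivatives of the activation functions used in Question 41.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaky_relu_grad(x, a=0.01):             # slope a on the negative side (assumed 0.01)
    return np.where(x > 0, 1.0, a)

def swish_grad(x):                          # d/dx [x * sigmoid(x)]
    s = sigmoid(x)
    return s + x * s * (1.0 - s)

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, g in [("leaky ReLU", leaky_relu_grad), ("swish", swish_grad),
                ("tanh", tanh_grad), ("sigmoid", sigmoid_grad)]:
    print(f"{name:10s}", np.round(g(x), 4))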
Unit 3: Fuzzy Relations & Linguistic Variables
Question 42: A robot navigation system uses fuzzy linguistic variables
to reason about environmental conditions. The system has the
following linguistic variables:
1. Visibility (V): {Very Poor, Poor, Moderate, Good, Excellent}
2. Terrain Complexity (T): {Simple, Moderate, Complex, Very
Complex}

3. Navigation Risk (R): {Low, Medium, High, Very High}
Each linguistic term is associated with a fuzzy set on the appropriate
universe of discourse.
A fuzzy relation R(V,T) represents the relationship between visibility
and terrain complexity:
R(V,T) = [
[0.9, 0.6, 0.2, 0.0], // Very Poor
[0.7, 0.8, 0.4, 0.1], // Poor
[0.3, 0.9, 0.7, 0.3], // Moderate
[0.1, 0.5, 0.8, 0.6], // Good
[0.0, 0.2, 0.5, 0.9] // Excellent
]
Another fuzzy relation S(T,R) represents the relationship between
terrain complexity and navigation risk:
S(T,R) = [
[0.8, 0.3, 0.0, 0.0], // Simple
[0.5, 0.7, 0.4, 0.1], // Moderate
[0.2, 0.5, 0.8, 0.6], // Complex
[0.0, 0.2, 0.7, 0.9] // Very Complex
]
1. Calculate the composition of these relations R∘S to directly
relate Visibility to Navigation Risk
2. Implement cylindrical extension and projection operations on
these relations

3. If the current visibility is assessed as "μVery Poor(x) = 0.2,
μPoor(x) = 0.7, μModerate(x) = 0.1", determine the resulting
navigation risk using the max-min composition
4. Explain how linguistic hedges like "very," "somewhat," and
"extremely" can be implemented and affect the system's
reasoning
Question 43: A robotic arm uses fuzzy reasoning with linguistic
variables to handle objects of different fragility levels. The system
includes:
1. Linguistic Variable "Object Fragility" (F):
o Terms: {Very Sturdy, Sturdy, Moderate, Fragile, Very
Fragile}
o Universe of discourse: [0, 10]
2. Linguistic Variable "Grip Force" (G):
o Terms: {Very Light, Light, Medium, Firm, Very Firm}
o Universe of discourse: [0, 100] Newtons
3. Linguistic Variable "Grip Speed" (S):
o Terms: {Very Slow, Slow, Medium, Fast, Very Fast}
o Universe of discourse: [0, 50] mm/s
The membership functions are triangular, and linguistic hedges are
defined as:
 "Very X": μvery X(x) = [μX(x)]²
 "Somewhat X": μsomewhat X(x) = √μX(x)
 "Extremely X": μextremely X(x) = [μX(x)]⁴
1. Design appropriate membership functions for all linguistic
terms

2. Create a fuzzy relation between Fragility and Grip Force using
expert knowledge
3. Apply the linguistic hedge "very" to the term "Fragile" and show
how it affects the membership function
4. For a complex rule like "IF Object is Very Fragile THEN Grip
Force is Very Light AND Grip Speed is Extremely Slow",
implement the fuzzy inference process
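The effect of the hedges in Question 43 can be visualised with the sketch below; the (5, 8, 10) triangle for the base term "Fragile" is assumed for illustration only.

import numpy as np

# Linguistic hedges from Question 43 applied to an assumed "Fragile" set:
# very X = mu^2 (concentration), somewhat X = sqrt(mu) (dilation), extremely X = mu^4.
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0, 10, 11)
fragile = tri(x, 5, 8, 10)                  # assumed base membership function

very_fragile = fragile ** 2
somewhat_fragile = np.sqrt(fragile)
extremely_fragile = fragile ** 4

for xi, f, vf, ef in zip(x, fragile, very_fragile, extremely_fragile):
    print(f"x = {xi:4.1f}  Fragile = {f:.2f}  Very Fragile = {vf:.2f}  Extremely Fragile = {ef:.2f}")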
Unit 4: Theory of Approximate Reasoning & Defuzzification
Methods
Question 44: A drone's autonomous landing system uses Takagi-
Sugeno-Kang (TSK) fuzzy inference with approximate reasoning to
adjust descent parameters. The system has three inputs:
1. Height (H): {Very Low, Low, Medium, High, Very High}
2. Wind Speed (W): {Calm, Moderate, Strong}
3. Battery Status (B): {Critical, Low, Normal, Full}
The TSK fuzzy rule base includes:
 R₁: IF H is Very Low AND W is Calm AND B is Normal THEN
Thrust = 0.2H - 0.05W + 0.1B + 2.0
 R₂: IF H is Medium AND W is Strong AND B is Low THEN Thrust =
0.3H + 0.2W - 0.1B + 1.5
 (Additional rules...)
1. Explain the theory of approximate reasoning in the context of
this TSK system
2. Calculate the degree of fulfillment for each rule given sensor
readings: H = 2.5m, W = 8km/h, B = 30%
3. Implement the weighted average defuzzification to calculate
the final thrust output

4. Compare the TSK approach with a Mamdani system for this
application, analyzing approximation errors
Question 45: A smart manufacturing robot uses an advanced
defuzzification system for quality control decisions. The system
evaluates produced parts based on three criteria:
1. Dimensional Accuracy (A): {Poor, Acceptable, Excellent}
2. Surface Finish (S): {Rough, Medium, Smooth, Mirror}
3. Material Consistency (M): {Inconsistent, Relatively Consistent,
Consistent}
These are combined to determine Quality Rating (Q): {Reject,
Rework, Accept, Premium}
After fuzzy inference, the system produces a fuzzy set for Quality
Rating with the following membership values:
 μReject(Q) = 0.2
 μRework(Q) = 0.6
 μAccept(Q) = 0.3
 μPremium(Q) = 0.1
The universe of discourse for Quality Rating is [0, 100], with:
 Reject: [0, 40]
 Rework: [20, 60]
 Accept: [50, 90]
 Premium: [80, 100]
1. Calculate the defuzzified output using each of the following
methods: a. Centroid method b. Mean of maxima method c.
Center of sums method d. Height method

2. Analyze which defuzzification method is most appropriate for
this quality control application
3. Implement alpha-cut decomposition and explain how it relates
to approximate reasoning
4. Design a rationale property evaluation to ensure consistency in
the decision-making process
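A sketch of the discrete defuzzification comparison for Question 45 follows. Since the question gives only the support intervals of the output terms, symmetric triangles over those intervals are assumed here, clipped at the inferred membership values and aggregated by max before defuzzifying.

import numpy as np

# Centroid, mean-of-maxima and height-method defuzzification for Question 45,
# using assumed triangular shapes over the stated support intervals.
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

q = np.linspace(0, 100, 1001)
terms = {                         # (a, b, c), inferred membership value
    "Reject": ((0, 20, 40), 0.2),
    "Rework": ((20, 40, 60), 0.6),
    "Accept": ((50, 70, 90), 0.3),
    "Premium": ((80, 90, 100), 0.1),
}
agg = np.zeros_like(q)
for (a, b, c), mu in terms.values():
    agg = np.maximum(agg, np.minimum(tri(q, a, b, c), mu))

centroid = np.sum(q * agg) / np.sum(agg)
mom = q[np.isclose(agg, agg.max())].mean()                   # mean of maxima
height = (sum(b * mu for (a, b, c), mu in terms.values())    # height method:
          / sum(mu for _, mu in terms.values()))             # peak-weighted average
print(f"centroid ≈ {centroid:.1f}, mean of maxima ≈ {mom:.1f}, height method ≈ {height:.1f}")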
Unit 5: Applications in Robot Vision & Path Planning
Question 46: An autonomous robot uses a hybrid CNN-Fuzzy system
for vision-based navigation through cluttered environments. The
system architecture consists of:
1. Vision Module (CNN-based):
o Input: RGB-D camera images (640×480 pixels)
o Feature extraction: ResNet-18 backbone, modified for
depth information
o Output: Object detection with classes {Human, Static
Obstacle, Dynamic Obstacle, Path, Goal}
o Confidence scores and bounding boxes for all detections
2. Fuzzy Interpretation Module:
o Input: CNN detection results (classes, confidences,
locations)
o Fuzzy sets for spatial relationships: {Very Close, Close,
Moderate, Far, Very Far}
o Fuzzy sets for motion characteristics: {Stationary, Slow,
Medium, Fast}
o Output: Safety level and navigation priority for detected
objects

3. Path Planning Module:
o Integration of fuzzy safety assessments into A* algorithm
o Dynamic cost map generation based on fuzzy reasoning
For a scenario where the robot detects:
 A human 3 meters ahead, moving at 1.2 m/s (confidence: 0.92)
 A static obstacle 1.5 meters to the right (confidence: 0.88)
 A potential path 2 meters to the left (confidence: 0.75)
1. Design the CNN feature extraction process for this application
2. Implement the fuzzy interpretation of spatial relationships and
motion characteristics
3. Calculate the safety level and navigation priority for each
detected entity
4. Explain how the fuzzy-enhanced path planning improves
navigation in uncertain environments compared to classical
methods
Question 47: A multi-robot system performs collaborative mapping
and exploration of an unknown environment using a distributed
neural-fuzzy architecture. The system includes:
1. Individual Robot SLAM:
o Neural network-based feature extraction from LIDAR and
camera data
o Particle filter for localization with fuzzy confidence
assessment
o Local occupancy grid map generation
2. Map Fusion System:

o Fuzzy logic framework for conflict resolution in
overlapping map regions
o Confidence-based weighted integration of individual maps
o Linguistic variables for map quality: {Uncertain, Somewhat
Certain, Certain, Very Certain}
3. Exploration Strategy:
o Information gain assessment using neural networks
o Fuzzy decision making for robot task allocation
o Dynamic role assignment based on robot capabilities and
current map coverage
For a team of 3 robots with partially overlapping maps:
1. Design the neural network architecture for feature extraction
and information gain assessment
2. Implement the fuzzy conflict resolution system for map fusion
3. Calculate the collaborative exploration decisions for optimal
coverage
4. Analyze how this neural-fuzzy approach handles mapping
uncertainty compared to purely probabilistic methods
Question 48: A robotic manipulation system uses a vision-guided
inverse kinematics approach with neural networks and fuzzy control.
The system architecture includes:
1. Vision System:
o Stereo cameras for 3D object localization
o CNN for object recognition and pose estimation
o Output: Target object position and orientation with
uncertainty estimates

2. Inverse Kinematics Solver:
o Neural network trained to map end-effector positions to
joint configurations
o Handling of redundant degrees of freedom
o Multiple solutions representation
3. Fuzzy Trajectory Planning:
o Linguistic variables for motion constraints: {Very
Restricted, Restricted, Moderately Free, Free, Very Free}
o Fuzzy rule base for selecting optimal trajectory from
multiple IK solutions
o Adaptation to environmental constraints
For a 7-DOF robotic arm attempting to grasp a fragile object in a
cluttered environment:
1. Design the CNN architecture for object pose estimation with
uncertainty quantification
2. Implement the neural network inverse kinematics solver with
multiple solution generation
3. Create a fuzzy rule base for selecting the optimal trajectory
considering:
o Joint limits
o Obstacle proximity
o Energy efficiency
o Mechanical advantage at grasp point
4. Calculate the complete manipulation sequence from visual
detection to successful grasp

Question 49: A legged robot uses a neural-fuzzy system for dynamic
gait adaptation on challenging terrain. The system includes:
1. Terrain Classification:
o CNN processing visual and proprioceptive data
o Output: Terrain type probabilities and characteristic
parameters
2. Gait Generation:
o Central Pattern Generator (CPG) neural network
o Phase coordination between limbs
o Amplitude and frequency modulation
3. Fuzzy Gait Adaptation:
o Input: Terrain parameters, robot state, and performance
metrics
o Output: CPG parameter adjustments
o Rule base capturing expert knowledge on animal
locomotion
For a quadruped robot transitioning from firm ground to muddy
terrain:
1. Design the CNN-based terrain classifier architecture
2. Implement the CPG neural network for gait generation
3. Create a fuzzy adaptation system that modifies:
o Step height
o Step length
o Foot placement strategy
o Body posture

o Leg stiffness
4. Calculate the gait transition process showing parameter
adjustments over time
Question 50: A robot hand uses a deep reinforcement learning
system with fuzzy reward shaping for dexterous manipulation tasks.
The system includes:
1. State Representation:
o Joint angles and velocities
o Tactile sensor readings
o Object pose relative to hand
2. Policy Network:
o Actor-critic architecture
o Input: State representation
o Output: Joint torques
3. Fuzzy Reward System:
o Input 1: Task progress (linguistic terms: No Progress, Slight
Progress, Good Progress, Task Completed)
o Input 2: Manipulation stability (linguistic terms: Unstable,
Marginally Stable, Stable, Very Stable)
o Input 3: Energy efficiency (linguistic terms: Inefficient,
Moderately Efficient, Efficient)
o Output: Reward multiplier
For a task of rotating a small object between fingers:
1. Design the state representation and policy network architecture

2. Implement the fuzzy reward system with appropriate
membership functions
3. Calculate the shaped rewards for three different manipulation
scenarios
4. Analyze how the fuzzy reward shaping improves learning
efficiency compared to sparse or hand-crafted rewards
Question 51: A humanoid robot uses a neuro-fuzzy system for
emotion recognition and appropriate social response generation. The
system includes:
1. Perception Module:
o CNN for facial expression analysis
o RNN for speech prosody analysis
o Feature fusion network
2. Emotion Recognition:
o Classification of basic emotions (Happiness, Sadness,
Anger, Fear, Surprise, Disgust)
o Intensity estimation for each emotion
o Contextual interpretation
3. Fuzzy Response Generation:
o Rule base mapping recognized emotions to appropriate
responses
o Linguistic variables for response characteristics (Verbal
Content, Facial Expression, Gesture, Proxemics)
o Adaptation to social context
For an interaction scenario where the robot detects:
 Facial expression: 70% happiness, 30% surprise

 Speech prosody: 60% excitement, 40% neutral
1. Design the neural network architecture for multimodal emotion
recognition
2. Implement the fuzzy inference system for response generation
3. Calculate the appropriate social response parameters
4. Discuss how this hybrid approach enables more natural human-
robot interaction compared to rule-based systems

***
