UNIT-1
1. Define an Artificial Neural Network. (JAN/2021)
An Artificial Neural Network is a computational model composed of interconnected nodes (neurons), organized into layers, designed for pattern recognition and data processing tasks. For instance, it can recognize handwriting in digit recognition tasks or analyze text sentiment in natural language processing.
2. List the basic components of an Artificial Neural Network. (JAN/2021)
• Neurons (Nodes): Processing units.
• Layers: Organized neurons.
• Weights: Connection strengths.
• Activation Functions: Compute neuron outputs.
• Bias: Shift activation functions.
• Input/Output Data: Information flow.
• Loss Function: Measures prediction errors.
• Backpropagation: Adjusts weights.
• Training Data: Labeled dataset.
• Optimization Algorithm: Minimizes loss.
3. Give an example of a commonly used activation function. (DEC/2022)
(JAN/2022)
ReLU (Rectified Linear Unit): ReLU is a popular activation function used in neural networks. It computes the output as ReLU(x) = max(0, x): if the input is positive it passes through unchanged, and if it is negative it becomes zero. This simple yet effective function is widely used in the hidden layers of deep neural networks for tasks like image classification and natural language processing.
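A minimal NumPy sketch of ReLU (the array values are illustrative):

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): negatives become zero, positives pass through.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```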
4. Outline the steps involved in training an Artificial Neural Network (DEC/2022)
• Data Preparation
• Initialize Weights
• Forward Propagation
• Loss Calculation
• Backpropagation
• Gradient Descent
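A minimal NumPy sketch tying these steps together for a toy single-layer regression; the data, learning rate, and epoch count are illustrative assumptions, not prescribed values:

```python
import numpy as np

# Data preparation (toy regression data)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([[1.0], [3.0], [5.0], [7.0]])   # target: y = 2x + 1

# Initialize weights (and bias)
rng = np.random.default_rng(0)
W = rng.normal(size=(1, 1))
b = np.zeros((1,))

lr = 0.05
for epoch in range(500):
    y_hat = X @ W + b                 # forward propagation
    loss = np.mean((y_hat - y) ** 2)  # loss calculation (MSE)
    grad = 2 * (y_hat - y) / len(X)   # backpropagation: dLoss/dy_hat
    dW = X.T @ grad                   # chain rule back to the weights
    db = grad.sum(axis=0)
    W -= lr * dW                      # gradient descent update
    b -= lr * db

print(W, b)  # approaches W ≈ 2, b ≈ 1
```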
5. Define the term activation function. (DEC/2022)
An activation function is a mathematical function used in artificial neural
networks to introduce nonlinearity into the model. It determines whether a neuron
should fire (activate) based on its weighted inputs and biases.
• Tanh
• ReLU
• Sigmoid
• Softmax
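Minimal NumPy sketches of these four functions (softmax is shown over a single vector):

```python
import numpy as np

def tanh(x):    return np.tanh(x)               # range (-1, 1)
def relu(x):    return np.maximum(0, x)         # range [0, inf)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x)) # range (0, 1)

def softmax(x):
    # Subtract the max for numerical stability; outputs sum to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()
```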
6. Describe the role of the bias in an Artificial Neural Network (JAN/2021)
Bias in an artificial neural network is an additional parameter added to each neuron. It helps shift the activation function's output, allowing the network to model more complex relationships by introducing an offset. Example: y = mx + b, where b is the bias.
7. What is backpropagation? (JAN/2022)
Backpropagation is an iterative optimization algorithm used in training artificial neural networks. It involves calculating the gradient of the loss function with respect to the network's weights and biases, allowing for weight adjustments that minimize the error.
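A small worked example of the chain rule for one sigmoid neuron with a squared-error loss; the input, target, and learning rate are illustrative assumptions:

```python
import numpy as np

# Single neuron: y_hat = sigmoid(w*x + b), loss = (y_hat - y)^2
x, y = 2.0, 1.0
w, b = 0.5, 0.1

z = w * x + b
y_hat = 1.0 / (1.0 + np.exp(-z))

# Chain rule: dL/dw = dL/dy_hat * dy_hat/dz * dz/dw
dL_dyhat = 2 * (y_hat - y)
dyhat_dz = y_hat * (1 - y_hat)   # derivative of the sigmoid
dL_dw = dL_dyhat * dyhat_dz * x
dL_db = dL_dyhat * dyhat_dz

w -= 0.1 * dL_dw   # weight adjustment that reduces the error
b -= 0.1 * dL_db
```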
8. Define supervised learning. (DEC/2022)
Supervised learning is a machine learning paradigm where the algorithm learns from labeled training data to make predictions or classifications on unseen data. Examples: classification, prediction.
PART-B
1. Explain the architecture and training process of a Multilayer Perceptron (MLP) in detail, and provide a practical example where an MLP is well-suited for a specific task. (JAN/2022)
2. Elaborate on the architecture and training mechanisms of a Self-Organizing
Map (SOM), and provide an example illustrating how SOMs can be applied
in unsupervised learning tasks.
3. Explain the process of training a convolutional neural network (CNN) for
image classification.
4. Detail the architecture of a CNN, including convolutional layers, pooling layers, and fully connected layers.
UNIT-2
1. What is Learning Vector Quantization (LVQ)?
LVQ is a supervised learning algorithm where prototype vectors are iteratively
adjusted to better represent input data. It classifies input patterns based on the
similarity between prototypes and input vectors.
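A sketch of one LVQ1 update step, assuming `prototypes` is a float array of prototype vectors and `lr` is an assumed learning rate:

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, x_label, lr=0.1):
    # Find the prototype closest to the input vector.
    dists = np.linalg.norm(prototypes - x, axis=1)
    k = np.argmin(dists)
    # Move it toward the input if the class matches, away otherwise.
    # (prototypes must be a float array for the in-place update.)
    if proto_labels[k] == x_label:
        prototypes[k] += lr * (x - prototypes[k])
    else:
        prototypes[k] -= lr * (x - prototypes[k])
    return prototypes
```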
2. Explain the principle behind Bidirectional Associative Memory (BAM).
BAM is a neural network architecture that reinforces connections between neurons activated in pairs. It enables bidirectional pattern retrieval, allowing the network to recognize patterns in both forward and reverse associations.
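A minimal sketch of BAM storage and bidirectional recall using bipolar (±1) patterns; the stored pairs are illustrative:

```python
import numpy as np

# Store pattern pairs (x, y) via a Hebbian outer-product rule.
X = np.array([[1, -1, 1], [-1, 1, -1]])   # patterns in layer X
Y = np.array([[1, 1], [-1, -1]])          # associated patterns in layer Y
W = sum(np.outer(x, y) for x, y in zip(X, Y))

# Forward recall (X -> Y) and reverse recall (Y -> X).
y_recalled = np.sign(X[0] @ W)     # recovers Y[0]
x_recalled = np.sign(Y[0] @ W.T)   # recovers X[0]
```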
3. Define Kohonen Self-Organizing Feature Maps.
Kohonen Maps are unsupervised neural networks that map high-dimensional input
data onto a lower-dimensional grid while preserving the topological relationships
between data points. They are useful for tasks like clustering and dimensionality reduction.
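A sketch of a single SOM training step; the Gaussian neighborhood, learning rate, and radius are assumed hyperparameters:

```python
import numpy as np

def som_step(grid, x, lr=0.1, sigma=1.0):
    # grid: (rows, cols, dim) weight array; x: input vector of length dim.
    dists = np.linalg.norm(grid - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            # Gaussian neighborhood: units near the BMU move more toward x,
            # which is what preserves the topology of the input space.
            d2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            h = np.exp(-d2 / (2 * sigma ** 2))
            grid[i, j] += lr * h * (x - grid[i, j])
    return grid
```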
4. Define Auto associative Memory Network. (JAN/2021)
An auto associative Memory Network is a type of neural network that can
reconstruct complete patterns from partial or noisy inputs, effectively filling in
missing information to retrieve the original pattern.
5. Describe the functioning of Hopfield Networks.
Hopfield Networks are recurrent neural networks where each neuron is connected to every other neuron. They utilize iterative update rules to store and retrieve patterns, allowing for content-addressable memory and associative recall.
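A minimal Hopfield sketch using Hebbian storage and asynchronous updates; the stored pattern and noisy probe are illustrative:

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])
# Hebbian storage: outer product with the self-connections removed.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

state = np.array([1, 1, 1, -1, 1])   # noisy probe (one bit flipped)
for _ in range(3):                    # a few asynchronous update sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1
print(state)                          # recovers the stored pattern
```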
6. Define Heteroassociative Memory Network. (April/2021)
Heteroassociative Memory Network associates one pattern with another, enabling
the retrieval of related patterns based on the similarity of their associations. This is
distinct from autoassociative networks, which retrieve similar patterns from partial inputs.
7. Define Adaptive Resonance Theory Network (ART).
ART networks dynamically adjust their response thresholds and synaptic weights, making them excellent for tasks where patterns may vary or evolve over time, such as adaptive learning systems.
PART-B
1. Evaluate the adaptability and learning capabilities of Adaptive Resonance Theory Network (ART) in dynamic environments. (JAN/2022)
2. Compare Learning Vector Quantization (LVQ) with other unsupervised learning algorithms, discussing scenarios where LVQ is most effective.
3. Assess the effectiveness of Counterpropagation Networks in tasks involving high-dimensional data.
UNIT-3
1. Explain the core principles of Convolutional Neural Networks (CNNs)
and their significance in modern AI applications. (JAN/2022)
CNNs are designed to automatically and adaptively learn spatial hierarchies of
features from input data. They excel in tasks like image recognition due to their
ability to extract and learn meaningful features directly from raw data.
2. Explain the rationale behind the Convolution Operation in CNNs and its role
in feature extraction from images.
The convolution operation scans a filter (also known as a kernel) over the input to extract features. This operation allows the network to recognize local patterns in the input, enabling it to learn hierarchical representations of features.
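A sketch of a 2-D "valid" convolution (implemented as cross-correlation, as most deep learning libraries do); the edge filter is an illustrative kernel:

```python
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the filter over the input and take the dot product.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge_filter = np.array([[1, 0, -1]] * 3)  # responds to vertical edges
```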
3. Compare and contrast Spiking Neural Networks with traditional artificial neural networks. (JAN/2021)
Spiking Neural Networks model neuron communication with discrete spikes, which is more biologically realistic but can be computationally expensive. Traditional ANNs use continuous activation functions. SNNs are efficient for event-based processing, while ANNs are more suitable for traditional continuous data processing.
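A sketch of a leaky integrate-and-fire neuron, the kind of spiking unit SNNs are built from; the time constant, threshold, and input drive are illustrative assumptions:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    # Membrane potential leaks toward zero and integrates input;
    # when it crosses the threshold, the neuron spikes and resets.
    v, spikes = 0.0, []
    for I in input_current:
        v += dt * (-v / tau + I)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron(np.full(50, 0.08)))  # constant drive -> periodic spikes
```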
4. Evaluate the diverse applications of Third-Generation Neural Networks in
Image Compression.
Using networks like Variational Autoencoders (VAEs), Third-Generation NNs
have been applied to lossy image compression tasks, enabling more efficient
storage and transmission of images.
PART-B
1. Describe the core principles of Convolutional Neural Networks (CNNs) and their applications. (JAN/2021)
2. Analyze the neuroscientific principles incorporated into Spiking Neural Networks and their potential benefits in AI research.
UNIT-4
1. List out any four applications of Deep Learning. (Dec/2021)
a. Image Classification and Object Detection
b. Natural Language Processing (NLP)
c. Speech Recognition
d. Autonomous Driving
2. State the use of hidden layers. (JAN/2021)
Hidden layers in a neural network are used to learn and represent complex,
hierarchical features and patterns in data. They enable the network to model
non-linear relationships and capture abstract information from the input.
3. Why are deep learning models called feedforward?
Deep learning models are called feedforward because the data flows through the
network from input to output without any feedback loops or connections that create
cycles.
4. What is gradient descent? (April/2021)
Gradient descent is an optimization algorithm used to minimize the cost function
of a machine learning model by iteratively adjusting the model's parameters in the
direction of steepest descent of the cost function.
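A sketch of gradient descent on a simple quadratic cost; the function J(θ) = (θ − 3)² and the learning rate are illustrative:

```python
# Minimize J(theta) = (theta - 3)^2 using the gradient dJ/dtheta = 2(theta - 3).
theta, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (theta - 3)
    theta -= lr * grad           # step in the direction of steepest descent
print(theta)                     # approaches 3, the minimizer
```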
5. What is a cost function? Give the formula.
The cost function (also known as the loss function) quantifies the error or mismatch between the model's predictions and the actual target values. A common cost function for regression problems is Mean Squared Error (MSE), represented as:
MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)², where yᵢ is the actual value, ŷᵢ the prediction, and n the number of samples.
6. List the components of hidden layers.
Hidden layers consist of nodes (neurons), each with weights, biases, and an
activation function.
7. What is the sigmoid function?
The sigmoid function is an activation function commonly used in neural networks. It transforms the weighted sum of inputs into a range between 0 and 1. Its formula is:
σ(x) = 1 / (1 + e^(−x))
8. Difference between forward propagation and backward propagation.
• Forward Propagation: It involves passing input data through the network to generate predictions.
• Backward Propagation: It is the process of calculating gradients of the cost function with respect to the model's parameters, which is used for optimization during training.
9. What is dataset augmentation?
Dataset augmentation is a technique in deep learning where new training examples
are created by applying various transformations (e.g., rotations, translations, flips)
to the existing data. It helps improve model generalization.
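A NumPy sketch of simple augmentations; real pipelines usually rely on library transforms, but the idea is the same:

```python
import numpy as np

def augment(image):
    # Return a few transformed copies of an image array (H, W) or (H, W, C).
    return [
        np.fliplr(image),           # horizontal flip
        np.flipud(image),           # vertical flip
        np.rot90(image),            # 90-degree rotation
        np.roll(image, 3, axis=1),  # small horizontal shift (with wrap-around)
    ]
```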
10. What is forward propagation?
Forward propagation involves passing input data through the network to generate predictions.
11. What is backward propagation?
Backward propagation is the process of calculating gradients of the cost function with respect to the model's parameters, which is used for optimization during training.
12.Define Regularization. (JAN/2022)
Regularization is a technique to prevent overfitting by adding a penalty term to the
cost function. It discourages the model from fitting the noise in the data.
Types: L1 and L2
13.Difference between L1 and L2 regularization.
• L1 Regularization adds the absolute values of the weights to the cost function.
• L2 Regularization adds the squared values of the weights to the cost function.
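A sketch of the two penalty terms, with `lam` (λ) as an assumed regularization strength:

```python
import numpy as np

def l1_penalty(weights, lam=0.01):
    return lam * np.sum(np.abs(weights))   # encourages sparse weights

def l2_penalty(weights, lam=0.01):
    return lam * np.sum(weights ** 2)      # encourages small weights

# total_cost = data_loss + l1_penalty(W)  (or + l2_penalty(W))
```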
14.What are some advantages of using probabilistic models in deep learning?
• They can provide probabilistic predictions, which include uncertainty
information.
• They are suitable for handling data with inherent uncertainty.
• They allow Bayesian reasoning for model parameters.
15.Explain the potential benefits of using an ensemble of models created
through bagging. (Dec/2021)
Bagging can improve model robustness and accuracy by reducing variance. It combines multiple models trained on different subsets of data. It does this by taking random subsets of the original dataset, drawn with replacement, and fitting either a classifier (for classification) or a regressor (for regression) to each subset.
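A sketch of the bootstrap sampling at the heart of bagging; fitting a model per sample and aggregating the predictions is left as a comment:

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    # Random subset of the original data, drawn with replacement.
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

# For each of k bootstrap samples, fit one base model, then aggregate:
# average the outputs for regression, majority-vote for classification.
```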
PART-B
1. Provide a step-by-step explanation of the backpropagation algorithm,
emphasizing the role of the chain rule in computing gradients.
2. Explain the concept and benefits of batch normalization in deep neural
networks. How does it address issues like internal covariate shift?
3. Provide an overview of generative adversarial networks (GANs) and their applications in generating synthetic data.
4. Explain the principles behind autoencoders and variational autoencoders
(VAEs) in unsupervised learning. How do they differ?
UNIT-5
1. What is the fundamental characteristic that distinguishes RNNs from
feedforward neural networks?
The fundamental characteristic that distinguishes Recurrent Neural
Networks (RNNs) from feedforward neural networks is their ability to
handle sequential data by maintaining hidden states that capture
temporal dependencies. Unlike feedforward networks, where information
flows in one direction, RNNs have recurrent connections that allow
information to cycle back into the network, making them suitable for tasks
involving sequences.
2. Describe the purpose of the hidden state in an RNN. (JAN/2021)
The hidden state in an RNN serves as a memory that encodes information about
the past sequence elements. It allows the network to maintain context and capture
dependencies between elements in the sequence, making it crucial for sequential
data tasks.
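A sketch of one recurrent step, showing how the hidden state carries context forward; the dimensions and random initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.1, size=(4, 3))   # input -> hidden
Whh = rng.normal(scale=0.1, size=(4, 4))   # hidden -> hidden (recurrence)
b = np.zeros(4)

def rnn_step(x, h):
    # The new hidden state mixes the current input with the previous state.
    return np.tanh(Wxh @ x + Whh @ h + b)

h = np.zeros(4)
for x in rng.normal(size=(5, 3)):          # a sequence of 5 input vectors
    h = rnn_step(x, h)                     # h accumulates sequence context
```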
3. Provide an example of a task where Recursive Neural Networks are
particularly useful. (April/2021)
Recursive Neural Networks (RecNNs) are particularly useful for tasks involving syntactic or semantic parsing of sentences, where the hierarchical structure of language needs to be understood. For example, in constituency parsing, RecNNs can recursively build parse trees for sentences.