
Deep Learning Question Bank

UNIT I
1. What is deep learning, and how is it different from traditional machine learning?
2. Briefly discuss the major milestones in the history of deep learning.
3. Discuss probabilistic supervised learning.
4. Explain the steps involved in Principal Component Analysis.
5. Define machine learning. Explain the different types of machine learning algorithms.
6. Illustrate linear regression with a suitable example.
7. Discuss Support Vector Machines in detail.
8. Discuss the different kinds of tasks that can be solved by machine learning.
9. Illustrate logistic regression with a suitable example.
10. Explain K-means clustering with a suitable example.
11. Explain the different quantitative measures used to evaluate the performance of a machine learning algorithm.
12. Differentiate supervised and unsupervised machine learning with suitable examples.
13. Explain the working of a Support Vector Machine with a suitable example.
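
As a starting point for question 10, the K-means alternation (assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster) can be sketched as follows. This is a minimal illustration: the deterministic seeding and toy data are assumptions made for reproducibility, not part of the standard algorithm.

```python
import numpy as np

def kmeans(X, k, n_iters=100):
    """Minimal K-means: alternate nearest-centroid assignment and mean update.
    Centroids are seeded from evenly spaced points for reproducibility;
    real implementations usually use random or k-means++ initialization."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        # (keeping the old centroid if a cluster happens to be empty).
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Two well-separated blobs: 20 points at (0, 0) and 20 at (10, 10).
X = np.vstack([np.zeros((20, 2)), np.full((20, 2), 10.0)])
centroids, labels = kmeans(X, k=2)
```

On well-separated data like this the loop converges after a single update; production implementations typically add k-means++ initialization and multiple restarts to avoid poor local minima.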

UNIT II
1. Describe deep feedforward networks.
2. Explain sigmoid units for Bernoulli output distributions.
3. Explain the hidden units of feedforward networks.
4. Describe the logistic sigmoid and hyperbolic tangent functions.
5. Justify the importance of rectified linear units (ReLU) as hidden units.
6. Explain the output units of feedforward networks.
7. Explain the different layers of a feedforward network.
8. Describe regularization for deep learning.
9. Illustrate semi-supervised learning with a suitable example.
10. Discuss the chain rule of calculus in detail.
11. List and explain the various activation functions used in modeling an artificial neuron.
12. Discuss the working of the backpropagation algorithm.
13. Explain the significance of the input layer, hidden layers, and output layer in a feedforward network.
14. What are the computational challenges of backpropagation in very deep networks?
15. How does the vanishing gradient problem affect the training of deep feedforward networks?
16. Compare backpropagation with automatic differentiation. How are they related?
17. Define L1 and L2 regularization. How do they differ, and when would you use each?
18. What is dropout, and how does it regularize neural networks?
19. Describe how data augmentation can serve as a form of regularization.
20. Explain early stopping and its role in preventing overfitting.
21. Discuss the use of batch normalization as a regularization method.
22. Discuss how noise injection in the input or gradients can help regularize deep learning models.
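
For questions 2, 4, 5, 11, and 15, the activation functions in question can be written out directly. This is a minimal sketch; the derivative helper is an illustrative addition showing why sigmoid saturation contributes to vanishing gradients (its slope never exceeds 0.25).

```python
import math

def sigmoid(x):
    """Logistic sigmoid: squashes x into (0, 1); used for Bernoulli outputs."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Hyperbolic tangent: squashes x into (-1, 1); zero-centered."""
    return math.tanh(x)

def relu(x):
    """Rectified linear unit: identity for x > 0, zero otherwise."""
    return max(0.0, x)

def sigmoid_grad(x):
    """Derivative s(x) * (1 - s(x)); peaks at 0.25, so gradients shrink
    by at least a factor of 4 per sigmoid layer during backpropagation."""
    s = sigmoid(x)
    return s * (1.0 - s)
```

Because ReLU has derivative 1 for all positive inputs, it avoids this per-layer shrinkage, which is one common justification for question 5.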

UNIT III
1. Define empirical risk minimization (ERM). How is it different from expected risk minimization? Write the mathematical formulation of empirical risk minimization and explain what each term represents. What are the limitations of empirical risk minimization in deep learning?
2. How does parameter initialization affect the convergence speed of optimization algorithms?
3. Describe the main difference between RMSProp and AdaGrad.
4. Write the mathematical update rule for RMSProp. What is the role of the decay-rate parameter?
5. Discuss the challenges involved in optimizing deep learning models during training.
6. Write and explain the steps of the stochastic gradient descent (SGD) algorithm.
7. Compare and contrast the AdaGrad and RMSProp algorithms.
8. Explain different parameter initialization strategies.
9. Write and explain the steps of the RMSProp algorithm.
10. Illustrate cliffs and exploding gradients with a suitable example.
11. Assess computational graphs with necessary diagrams.
12. Explain the stochastic gradient descent algorithm.
13. Illustrate the derivative function used in the gradient descent algorithm.
14. Compare SGD, AdaGrad, and RMSProp in terms of convergence and suitability for deep learning.
15. Discuss how problem-specific characteristics (e.g., data sparsity, noise) influence the choice of optimization algorithm.
16. What is the impact of adaptive learning-rate algorithms on training stability and convergence?
17. How do hybrid optimization strategies (e.g., SGD with momentum and adaptive learning rates) improve training efficiency?

UNIT IV
1. Explain learned invariances with a necessary example and diagram.
2. Construct a graphical demonstration of sparse connectivity and explain it in detail.
3. Illustrate equivariant representations.
4. Explain the following with suitable diagrams:
i. Sparse interactions.
ii. Parameter sharing.
5. Describe pooling with a suitable example.
6. Discuss in detail the variants of the basic convolution function.
7. Illustrate the pooling stage in a convolutional network.
8. Explain the variants of the basic convolution function.
9. Explain in detail the components of a CNN model.
10. Differentiate locally connected layers, tiled convolution, and standard convolution with suitable examples and diagrams.
11. Explain the different formats of data that can be used with convolutional networks.
12. Compare max pooling and average pooling. When might each be preferred?
13. How does pooling contribute to translation invariance in CNNs?
14. What are the potential downsides of pooling in terms of information loss?
15. How do convolution and pooling operations help reduce the number of parameters compared to fully connected networks?
16. Describe the architecture of LeNet. How do its components work together?
17. Describe the architecture of AlexNet. How did it improve upon earlier networks like LeNet?
18. Explain the concepts of stride and padding in convolutional layers. How do they affect the output dimensions?
19. What is the significance of weight sharing in convolutional networks?
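
For the questions on stride, padding, and weight sharing, the standard output-size formula and a parameter-count comparison can be sketched as follows. The 32x32x3 input and 16-filter layer are hypothetical numbers chosen only for illustration.

```python
def conv_out_size(n, k, stride=1, pad=0):
    """Spatial output length of a convolution along one dimension:
    floor((n + 2*pad - k) / stride) + 1."""
    return (n + 2 * pad - k) // stride + 1

# A 3x3 kernel on a 32-pixel-wide input:
same = conv_out_size(32, 3, stride=1, pad=1)     # "same" padding keeps width 32
valid = conv_out_size(32, 3, stride=1, pad=0)    # "valid" padding shrinks to 30
strided = conv_out_size(32, 3, stride=2, pad=1)  # stride 2 halves it to 16

# Weight sharing: a conv layer's parameter count is independent of the
# spatial input size, unlike a fully connected layer over the same tensors.
conv_params = 3 * 3 * 3 * 16 + 16               # 3x3 kernels, 3 in-channels, 16 filters, biases
dense_params = (32 * 32 * 3) * (32 * 32 * 16)   # fully connected equivalent
```

The gap (a few hundred parameters versus tens of millions) is the quantitative core of an answer to question 15.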

UNIT V
1. Explain how the unfolding of a computational graph relates to the training process of RNNs.
2. What are bidirectional recurrent neural networks (BRNNs), and how do they differ from standard RNNs?
3. How are deep recurrent networks constructed, and why might deeper architectures be beneficial?
4. Describe the architecture of an LSTM cell, including its gates and memory cell.
5. What are gated recurrent units (GRUs), and how do they differ from LSTMs?
6. Describe unfolding computational graphs.
7. Explain how to compute the gradient in a recurrent neural network.
8. Explain the different steps involved in natural language processing.
9. Discuss recurrent neural networks in detail.
10. Explain the different types of speech recognition systems.
11. Explain the following:
i) Natural language processing
ii) Speech recognition
iii) Computer vision
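
For the questions on unfolding computational graphs, a vanilla RNN forward pass makes the idea concrete: the same weights are reused at every time step, which is exactly what the unrolled graph exposes. The dimensions and random weights below are illustrative assumptions.

```python
import numpy as np

def rnn_forward(x_seq, W_xh, W_hh, b_h):
    """Unrolled vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
    One copy of (W_xh, W_hh, b_h) is shared across all time steps."""
    h = np.zeros(W_hh.shape[0])
    hs = []
    for x_t in x_seq:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 3, 4                          # sequence length, input dim, hidden dim
x_seq = rng.normal(size=(T, d_in))
W_xh = rng.normal(scale=0.1, size=(d_h, d_in))
W_hh = rng.normal(scale=0.1, size=(d_h, d_h))
b_h = np.zeros(d_h)
hs = rnn_forward(x_seq, W_xh, W_hh, b_h)
```

Backpropagation through time differentiates this same unrolled loop, summing each step's gradient contribution into the shared weights; repeated multiplication by W_hh is where vanishing and exploding gradients arise.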
