Question Bank

The document outlines a comprehensive curriculum on Deep Learning, structured into five modules covering fundamentals, training techniques, dimensionality reduction, optimization, and advanced architectures like CNNs and RNNs. Key topics include the history of deep learning, various neural network models, optimization algorithms, autoencoders, and attention mechanisms. Additionally, it includes application-based questions that encourage higher-order thinking and practical implementation of deep learning concepts.

Module 1: Fundamentals of Deep Learning

1. What are the key milestones in the history of Deep Learning?
2. Explain the working of a McCulloch-Pitts neuron model with an example (a minimal sketch follows this module's questions).
3. Differentiate between perceptrons and multilayer perceptrons (MLPs).
4. Describe the perceptron learning algorithm and its limitations.
5. What is the significance of sigmoid neurons in neural networks?
6. Explain the representation power of feedforward neural networks.
7. How does thresholding logic work in perceptrons?
8. Compare single-layer and multi-layer perceptrons in terms of learning capability.
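
For reference on question 2: a minimal Python sketch of a McCulloch-Pitts neuron, assuming binary inputs and a fixed integer threshold (the function name and the AND-gate example are illustrative, not prescribed by the syllabus).

    def mcculloch_pitts_neuron(inputs, threshold):
        """Fire (output 1) if the sum of binary inputs meets the threshold."""
        return 1 if sum(inputs) >= threshold else 0

    # Example: a 2-input AND gate fires only when both inputs are active.
    print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1
    print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0

Note that the weights are implicitly fixed at 1 and the threshold is chosen by hand; learning the weights from data is exactly what the perceptron algorithm of question 4 adds.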

Module 2: Training Deep Neural Networks

9. Explain the concept of gradient descent and its role in neural network training (see the sketch after this module's questions).
10. Compare Stochastic Gradient Descent (SGD), Momentum-based GD, and Nesterov
Accelerated GD.
11. What are AdaGrad, RMSProp, and Adam optimizers? How do they differ?
12. Describe the process of backpropagation in neural networks.
13. What are eigenvalues and eigenvectors? How are they used in neural networks?
14. Explain Principal Component Analysis (PCA) and its significance in deep learning.
15. How does Singular Value Decomposition (SVD) contribute to dimensionality
reduction?
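
For reference on question 9: a minimal Python sketch of vanilla gradient descent on the one-dimensional loss f(w) = (w - 3)^2, assuming a fixed learning rate (the loss function, step size, and iteration count are illustrative).

    # Minimize f(w) = (w - 3)^2; its gradient is f'(w) = 2 * (w - 3).
    def grad(w):
        return 2 * (w - 3)

    w, lr = 0.0, 0.1
    for _ in range(100):
        w -= lr * grad(w)  # step against the gradient
    print(round(w, 4))     # ~3.0, the minimizer

SGD, Momentum, Nesterov, AdaGrad, RMSProp, and Adam (questions 10 and 11) all refine this same update rule, differing in how each step is scaled and accumulated.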

Module 3: Dimensionality Reduction and Autoencoders

16. Explain how autoencoders are related to PCA.
17. What are different types of autoencoders, and how do they function?
18. How does a denoising autoencoder work? Provide an example (see the sketch after this module's questions).
19. What is the role of regularization in autoencoders?
20. Define and differentiate between sparse autoencoders and contractive autoencoders.
21. Explain the concept of Bias-Variance tradeoff and its impact on deep learning models.
22. Describe L2 regularization and early stopping techniques in training deep learning
models.
23. How does dataset augmentation help in deep learning model generalization?
24. Explain parameter sharing and tying in deep learning models.
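
For reference on question 18: a minimal denoising-autoencoder sketch, assuming TensorFlow/Keras is installed; the layer sizes, noise level, and random data are illustrative only.

    import numpy as np
    from tensorflow import keras

    # Toy denoising autoencoder: 8-dim inputs, 3-dim bottleneck.
    model = keras.Sequential([
        keras.layers.Input(shape=(8,)),
        keras.layers.Dense(3, activation="relu"),     # encoder / bottleneck
        keras.layers.Dense(8, activation="sigmoid"),  # decoder / reconstruction
    ])
    model.compile(optimizer="adam", loss="mse")

    x = np.random.rand(256, 8).astype("float32")                   # "clean" data
    x_noisy = x + 0.1 * np.random.randn(256, 8).astype("float32")  # corrupted copy
    model.fit(x_noisy, x, epochs=5, verbose=0)                     # map noisy -> clean

Training the same model with model.fit(x, x, ...) instead gives a plain undercomplete autoencoder, whose linear variant recovers a PCA-like projection (question 16).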

Module 4: Optimization and Convolutional Neural Networks (CNNs)

25. How does injecting noise at input improve neural network performance?
26. What is the role of ensemble methods in deep learning?
27. Explain the concept of dropout and its impact on overfitting.
28. What is greedy layer-wise pretraining? How does it help in deep learning?
29. Compare different activation functions such as ReLU, Leaky ReLU, and Sigmoid.
30. Explain the working and architecture of Convolutional Neural Networks (CNNs) (see the sketch after this module's questions).
31. Compare LeNet, AlexNet, and VGGNet in terms of architecture and performance.
32. What are the main features of ResNet and how does it solve the vanishing gradient
problem?
33. Explain the concepts of Guided Backpropagation and Deep Dream in CNN
visualization.
34. How do adversarial attacks fool CNN models?
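
For reference on questions 27 and 30: a minimal CNN sketch with a dropout layer, assuming TensorFlow/Keras is installed; the input shape, filter count, and dropout rate are illustrative only.

    from tensorflow import keras

    # Minimal CNN: convolution -> pooling -> dense classifier.
    model = keras.Sequential([
        keras.layers.Input(shape=(28, 28, 1)),                      # e.g. grayscale images
        keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # learn local features
        keras.layers.MaxPooling2D(pool_size=2),                     # downsample feature maps
        keras.layers.Flatten(),
        keras.layers.Dropout(0.5),  # randomly zero half the units during training (question 27)
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.summary()

Architectures such as LeNet, AlexNet, VGGNet, and ResNet (questions 31 and 32) stack many more such convolution/pooling blocks, with ResNet adding skip connections.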

Module 5: Recurrent Neural Networks (RNNs) and Attention Mechanisms

35. Explain the working of Recurrent Neural Networks (RNNs) with an example (see the sketch after this module's questions).
36. What is Backpropagation Through Time (BPTT), and how does it differ from
standard backpropagation?
37. Explain the problem of vanishing and exploding gradients in RNNs.
38. Compare GRUs and LSTMs in terms of architecture and advantages.
39. How do Encoder-Decoder models work in deep learning applications?
40. What is the attention mechanism, and how does it improve sequence modeling?
41. Explain the significance of attention over images in deep learning applications.
42. What are self-attention mechanisms, and how do they relate to Transformer models?
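
For reference on question 35: a minimal recurrent sequence-classifier sketch, assuming TensorFlow/Keras is installed; the vocabulary size, embedding width, and binary output task are illustrative only.

    from tensorflow import keras

    # Minimal recurrent classifier: embedding -> RNN -> dense output.
    model = keras.Sequential([
        keras.layers.Input(shape=(None,)),                # variable-length token ids
        keras.layers.Embedding(input_dim=10000, output_dim=32),
        keras.layers.SimpleRNN(64),                       # plain recurrence over time steps
        keras.layers.Dense(1, activation="sigmoid"),      # e.g. a binary sequence label
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

Swapping SimpleRNN for keras.layers.LSTM(64) or keras.layers.GRU(64) gives the gated variants of question 38, which mitigate the vanishing-gradient problem of question 37.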

Additional Questions (Application-Based & Higher-Order Thinking)

43. Given a dataset with high dimensionality, how would you apply PCA and
autoencoders for feature extraction?
44. How would you optimize a deep learning model for image classification using CNNs?
45. Design an RNN-based model for real-time speech recognition and discuss the
challenges.
46. What strategies would you implement to improve the generalization of a deep neural
network?
47. How can transfer learning be applied using pre-trained deep learning models? (See the sketch after this list.)
48. Compare Transformer models with traditional RNN-based architectures in NLP
applications.
49. How does deep learning contribute to real-world applications such as healthcare,
finance, and autonomous driving?
50. Propose a novel deep learning solution for solving a complex real-world problem.
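
For reference on question 47: a minimal transfer-learning sketch, assuming TensorFlow/Keras is installed and the ImageNet weights can be downloaded; the choice of MobileNetV2 and the 5-class head are illustrative only.

    from tensorflow import keras

    # Reuse frozen ImageNet features; train only a new classification head.
    base = keras.applications.MobileNetV2(
        include_top=False, weights="imagenet",
        input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # freeze the pre-trained convolutional features

    model = keras.Sequential([
        base,
        keras.layers.Dense(5, activation="softmax"),  # new task-specific classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

Only the new Dense layer's weights are updated during training, which is why transfer learning works well even on small datasets.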
