Advanced Machine Learning: Module 1 and Module 2
Short Questions (2 Marks)
1. What is backpropagation and how is it used in neural networks?
2. How does regularization help to prevent overfitting in neural networks?
3. What is the role of the activation function in a neural network?
4. How can you prevent vanishing gradients in deep neural networks?
5. What is the difference between a convolutional neural network and a fully connected
neural network?
6. How do you decide on the number of hidden layers to use in a neural network?
7. What is the purpose of an activation function in a neural network?
8. What is the range of values that the sigmoid activation function can output?
9. What is the ReLU activation function and what are its advantages?
10. What is the drawback of using the tanh activation function?
11. How does the Leaky ReLU activation function differ from the regular ReLU activation
function?
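As a quick revision aid for the activation-function questions above (output ranges of sigmoid, the drawback of tanh, ReLU, and Leaky ReLU), a minimal NumPy sketch — the helper names are illustrative, not from any particular library:

```python
import numpy as np

def sigmoid(x):
    # Output range: (0, 1); saturates for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Output range: (-1, 1); zero-centred, but still saturates
    # for large |x|, which is its main drawback
    return np.tanh(x)

def relu(x):
    # Output range: [0, inf); cheap to compute, no saturation for x > 0
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but a small slope alpha for x < 0 keeps a nonzero
    # gradient and avoids "dead" units
    return np.where(x > 0, x, alpha * x)

x = np.linspace(-5, 5, 101)
print(sigmoid(x).min(), sigmoid(x).max())  # stays strictly inside (0, 1)
print(relu(-3.0), relu(2.0))               # 0.0 2.0
```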
12. What is the purpose of regularization in a neural network?
13. What are some common types of regularization used in neural networks?
14. How does L1 regularization differ from L2 regularization?
15. What is the role of the regularization parameter in a neural network?
16. What is dropout and how does it work in a neural network?
17. What is the advantage of using dropout in a neural network?
18. How does the dropout rate affect the performance of a neural network?
19. Can you use dropout during both training and testing phases of a neural network?
20. What is early stopping and how does it relate to regularization?
21. What is the difference between weight decay and dropout regularization?
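For the regularization questions above, a minimal sketch of inverted dropout and an L2 (weight-decay) penalty in NumPy — a toy illustration, with the function names and the choice of inverted scaling as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(h, rate, training=True):
    # Dropout is applied only during training (cf. question 19).
    # "Inverted" scaling by 1/(1 - rate) keeps the expected activation
    # unchanged, so no rescaling is needed at test time.
    if not training or rate == 0.0:
        return h
    mask = (rng.random(h.shape) >= rate).astype(h.dtype)
    return h * mask / (1.0 - rate)

def l2_penalty(weights, lam):
    # Weight decay (L2 regularization): lam * sum of squared weights,
    # added to the training loss; lam is the regularization parameter.
    return lam * sum((w ** 2).sum() for w in weights)

h = np.ones((4, 8))
print(inverted_dropout(h, rate=0.5).mean())                  # ~1.0 in expectation
print(inverted_dropout(h, rate=0.5, training=False).mean())  # exactly 1.0
```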
22. What is the purpose of a convolutional neural network (CNN)?
23. How does a convolutional layer in a CNN differ from a fully connected layer in a
traditional neural network?
24. What is a filter or kernel in a CNN?
25. How is max pooling used in a CNN?
26. What is the difference between stride and padding in a CNN?
27. What is the difference between a 1D, 2D, and 3D CNN?
28. What is a residual network (ResNet) and how does it improve upon traditional CNNs?
29. How is transfer learning used in CNNs?
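The CNN questions above (filters/kernels, stride, padding, max pooling) can be revised with a small single-channel NumPy sketch; the loop-based implementation and the 2x2 example filter are illustrative, not how frameworks implement convolution:

```python
import numpy as np

def conv2d(x, kernel, stride=1, padding=0):
    # Single-channel 2D convolution (technically cross-correlation,
    # as in most deep-learning frameworks). Padding surrounds the
    # input with zeros; stride is the step between filter placements.
    # Output size per dimension: (n + 2*padding - k) // stride + 1.
    if padding:
        x = np.pad(x, padding)
    k = kernel.shape[0]
    out_h = (x.shape[0] - k) // stride + 1
    out_w = (x.shape[1] - k) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i*stride:i*stride+k, j*stride:j*stride+k]
            out[i, j] = (patch * kernel).sum()
    return out

def max_pool2d(x, size=2, stride=2):
    # Max pooling keeps the largest value in each window; with
    # size == stride == 2 it halves the spatial dimensions.
    out_h = (x.shape[0] - size) // stride + 1
    out_w = (x.shape[1] - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i*stride:i*stride+size,
                          j*stride:j*stride+size].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, 0.0], [0.0, -1.0]])  # an example 2x2 filter
print(conv2d(x, edge).shape)   # (3, 3)
print(max_pool2d(x).shape)     # (2, 2)
```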
Long Questions (10 Marks)
1. How does a convolutional layer in a CNN work, and what is the role of the filter/kernel and stride? How are multiple filters used in a single convolutional layer, and what is the output of a convolutional layer?
2. What is the purpose of pooling layers in a CNN, and how is max pooling used? How do pooling layers help reduce the dimensionality of the input data in a CNN, and what are some other types of pooling layers that can be used?
3. What is the role of activation functions in a CNN, and why are rectified linear units (ReLUs) commonly used as activation functions in convolutional layers? How do ReLUs help avoid the vanishing gradient problem that can occur in traditional neural networks?
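A toy numerical illustration of why ReLUs help here (an assumed simplification: the backpropagated gradient magnitude is roughly a product of per-layer activation derivatives):

```python
import numpy as np

def sigmoid_grad(z):
    # Derivative of the sigmoid: s(z) * (1 - s(z)), at most 0.25
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

n_layers = 20
# Even at z = 0, where sigmoid' attains its maximum of 0.25, the
# product over 20 layers shrinks geometrically toward zero.
sig_prod = sigmoid_grad(0.0) ** n_layers   # 0.25**20, about 1e-12
# ReLU'(z) = 1 for z > 0, so the product does not decay for active units.
relu_prod = 1.0 ** n_layers
print(sig_prod, relu_prod)
```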
4. What is a residual network (ResNet), and how does it differ from traditional CNNs? How does the use of skip connections in ResNets help improve performance, especially for deeper networks?
5. What is transfer learning, and how can it be used to improve the performance of a CNN? How are pre-trained models used for transfer learning, and what are some common tasks for which transfer learning is used in computer vision?
6. What is a fully convolutional network (FCN), and how does it differ from a traditional CNN? How are FCNs used for tasks such as image segmentation, and what are some common architectures for FCNs?
7. What is a capsule network, and how does it differ from a traditional CNN? How do capsule networks help address some of the limitations of traditional CNNs, especially in tasks such as object recognition and pose estimation?
Additional Long Questions (10 Marks)
1. Explain the basic architecture of a CNN and how it differs from a
traditional neural network. What are the main components of a CNN?
2. Explain the concept of pooling in a CNN. What are the different
types of pooling? How do they help in reducing the spatial
dimensionality of the feature maps?
3. What is the purpose of activation functions in a CNN? Compare and
contrast the commonly used activation functions (e.g., ReLU,
sigmoid, tanh).
4. Discuss the challenges and limitations of using CNNs. What are
some common issues that may arise during training or testing of a
CNN? How can these be addressed?
5. Compare and contrast different CNN architectures (e.g., VGG,
ResNet, Inception). What are the differences in terms of architecture,
performance, and suitability for different tasks?
6. Compare the MobileNet architecture with GoogLeNet. Discuss the
merits and limitations of the MobileNet architecture.
7. Discuss the vanishing gradient problem in detail.
8. Differentiate between deep and shallow networks.
9. Explain the Artificial Neural Network (ANN) architecture.
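For the ANN-architecture question above, a minimal forward pass of a fully connected network (input layer, one ReLU hidden layer, softmax output) in NumPy — the layer sizes 4 -> 8 -> 3 are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x, params):
    # params holds (weight, bias) pairs for each fully connected layer
    (W1, b1), (W2, b2) = params
    h = np.maximum(0.0, x @ W1 + b1)    # hidden layer with ReLU activation
    logits = h @ W2 + b2                # output layer (pre-softmax scores)
    # Numerically stable softmax over each row of the batch
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

params = [
    (rng.standard_normal((4, 8)) * 0.1, np.zeros(8)),
    (rng.standard_normal((8, 3)) * 0.1, np.zeros(3)),
]
x = rng.standard_normal((5, 4))   # a batch of 5 examples, 4 features each
probs = forward(x, params)
print(probs.shape)                # (5, 3); each row sums to 1
```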