Deep Learning- Assignment
Assignment No. 1
1. Compare and contrast ANN, CNN, and RNN in terms of structure, data
types they work with, and ideal use cases. (Analyze) [Write the
differences between ANN, CNN, and RNN; the comparison parameters must
include structure, data types they work with, and ideal use cases.]
2. Explain the working of a basic Artificial Neural Network (ANN) with the
help of a neat diagram. Describe the role of each component in the
architecture. [Explain ANN in detail] (Understand)
3. Analyze how different types of neural networks (CNN, RNN, SOM, AE)
handle different data types (images, sequences, patterns). Provide
relevant examples. [Define CNN, RNN, SOM, and AE; state when each is
best used and why, with an example.] (Analyze)
4. Create a comparison table highlighting the key features, use cases,
and limitations of ANN, CNN, RNN, SOM, AE, and BM. (Create)
Assignment No. 2
1. Define a perceptron. List the steps involved in developing a simple
perceptron. [Remember]
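As a study aid for question 1, the steps of developing a simple perceptron (initialize weights, compute the weighted sum, apply a threshold activation, update weights by the perceptron learning rule) can be sketched in Python. The AND-gate data, learning rate, and epoch count below are illustrative choices, not part of the assignment:

```python
# Minimal perceptron trained on the AND gate (illustrative example).
def train_perceptron(data, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # step 1: initialize weights
    b = 0.0          #         and bias to zero
    for _ in range(epochs):
        for x, target in data:
            # step 2: weighted sum; step 3: threshold (step) activation
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            # step 4: perceptron learning rule on the error
            err = target - y
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on weights that classify all four inputs correctly.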
2. Compare the learning process of a simple perceptron with that of an
MLP. [Analyze]
3. Justify the need for backpropagation in training Multi-Layer
Perceptrons (MLPs). [Analyze]
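For question 3, the core of the justification is that hidden-layer weights receive no direct error signal; backpropagation uses the chain rule to carry the output error backwards to them. A minimal single-step sketch (one input, one hidden neuron, one output, sigmoid activations, squared-error loss; all values are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny 1-1-1 network: input -> hidden (w1) -> output (w2).
w1, w2 = 0.5, -0.4
x, target = 1.0, 1.0
lr = 0.5

def forward(w1, w2):
    h = sigmoid(w1 * x)        # hidden activation
    y = sigmoid(w2 * h)        # output activation
    return h, y

h, y = forward(w1, w2)
loss_before = 0.5 * (y - target) ** 2

# Backward pass: chain rule from the loss back to each weight.
delta_out = (y - target) * y * (1 - y)      # dL/d(net_out)
grad_w2 = delta_out * h
delta_hid = delta_out * w2 * h * (1 - h)    # error propagated to hidden layer
grad_w1 = delta_hid * x

# Gradient-descent update.
w2 -= lr * grad_w2
w1 -= lr * grad_w1

_, y_new = forward(w1, w2)
loss_after = 0.5 * (y_new - target) ** 2
```

One such step reduces the loss; notice that `grad_w1` could not be computed without `delta_out` flowing backwards through `w2`, which is exactly the capability a single-layer perceptron rule lacks.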
MCQ: Unit 1:
1. Deep learning became practical mainly due to:
a) Development of large datasets and GPUs ✔
b) Availability of small datasets only
c) Elimination of supervised learning
d) Reduction in storage cost only
2. In supervised deep learning, which of the following networks is
commonly used for image recognition?
a) Recurrent Neural Network (RNN)
b) Convolutional Neural Network (CNN) ✔
c) Self-Organizing Map (SOM)
d) Autoencoder (AE)
3. Which activation function outputs values in the range (0, 1)?
a) Tanh
b) Sigmoid ✔
c) ReLU
d) Softmax
4. Which activation function suffers from the vanishing gradient
problem?
a) Sigmoid ✔
b) ReLU
c) Leaky ReLU
d) Softmax
5. Which step is most important before training an ANN on a business
dataset?
a) Random guessing
b) Data preprocessing and normalization ✔
c) Ignoring missing values
d) Removing hidden layers
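Regarding question 5, the two preprocessing steps most often applied before training an ANN are min-max normalization and z-score standardization; a quick sketch (the sample values are made up for illustration):

```python
import statistics

raw = [20.0, 35.0, 50.0, 65.0, 80.0]  # e.g. a numeric business feature (made-up values)

# Min-max normalization: rescale the feature into [0, 1].
lo, hi = min(raw), max(raw)
minmax = [(v - lo) / (hi - lo) for v in raw]

# Z-score standardization: zero mean, unit (population) standard deviation.
mu = statistics.fmean(raw)
sigma = statistics.pstdev(raw)
zscore = [(v - mu) / sigma for v in raw]
```

Either rescaling keeps all features on comparable scales, so gradient descent is not dominated by whichever raw feature happens to have the largest magnitude.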
MCQ: Unit 2:
1. A Feedforward Neural Network (FNN) is called feedforward
because:
a) The network uses feedback loops
b) The data moves only in one direction—from input to output
c) It uses recurrent connections
d) It requires backpropagation always
Answer: b) The data moves only in one direction—from input to
output
2. Which error function is most suitable for binary classification?
a) Mean Squared Error (MSE)
b) Binary Cross-Entropy (BCE)
c) Categorical Cross-Entropy (CCE)
d) Hinge Loss
Answer: b) Binary Cross-Entropy (BCE)
3. Categorical Cross-Entropy (CCE) is typically used when:
a) Output has two classes
b) Output has multiple classes
c) The data has missing values
d) Training data is very small
Answer: b) Output has multiple classes
4. A limitation of Mean Squared Error (MSE) in classification is:
a) It converges too fast
b) It is not differentiable
c) It assumes linearity and struggles with probability distributions
d) It requires dropout to function
Answer: c) It assumes linearity and struggles with probability
distributions
5. Backpropagation works mainly by:
a) Randomly adjusting weights
b) Propagating error backwards to update weights
c) Eliminating hidden layers
d) Normalizing input features
Answer: b) Propagating error backwards to update weights
6. Early Stopping prevents overfitting by:
a) Adding noise to training data
b) Stopping training when validation error starts increasing
c) Freezing some neurons in hidden layers
d) Increasing learning rate
Answer: b) Stopping training when validation error starts increasing
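To make question 6 concrete, early stopping monitors the validation error each epoch and halts once it has failed to improve for a set number of epochs (the "patience"). A minimal sketch over a made-up validation-loss curve:

```python
# Made-up validation losses: improving at first, then overfitting sets in.
val_losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.56, 0.61, 0.70]

def early_stopping_epoch(losses, patience=2):
    """Return (epoch training stops at, epoch with the best validation loss)."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(losses):
        if loss < best:               # validation error still improving
            best, best_epoch, waited = loss, epoch, 0
        else:                         # no improvement this epoch
            waited += 1
            if waited >= patience:    # patience exhausted: stop training
                return epoch, best_epoch
    return len(losses) - 1, best_epoch

stop_at, best_at = early_stopping_epoch(val_losses)
```

In practice the model weights saved at `best_at` (the validation minimum) are the ones kept, which is how early stopping curbs overfitting without touching the training data itself.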