
UNIT – II

INTRODUCTION TO DEEP LEARNING

1. What is Deep Learning?
A subset of machine learning that uses multi-layer neural networks to automatically learn complex patterns from large datasets.
2. Define a Neural Network.
A computational model inspired by the brain, consisting of interconnected nodes (neurons) organized in layers.
3. What makes a network “deep”?
Having multiple hidden layers (typically more than two), enabling hierarchical feature learning.
4. Explain a Perceptron.
A basic neural unit that computes a weighted sum of inputs and applies an activation function.
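A minimal NumPy sketch of a perceptron; the weights and bias below are chosen purely for illustration (here they happen to implement logical AND):

```python
import numpy as np

def perceptron(x, w, b):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    z = np.dot(w, x) + b          # weighted sum
    return 1 if z > 0 else 0      # step activation function

# Illustrative weights/bias implementing logical AND
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
```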
5. Differentiate between single-layer and multi-layer perceptron.
A single-layer perceptron solves only linearly separable problems; an MLP with hidden layers handles non-linear patterns.
6. What is supervised vs. unsupervised learning?
Supervised learning uses labeled data; unsupervised learning finds patterns in unlabeled data.
7. Define Backpropagation.
Algorithm to compute gradients of the loss and update weights layer by layer using gradient descent.
8. What is Gradient Descent?
Iterative optimization to minimize loss by updating parameters opposite to gradient
direction.
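The following sketch ties Q7 and Q8 together: a tiny one-hidden-layer network trained with backpropagation and gradient descent. The layer sizes, learning rate, and toy data are all illustrative assumptions, not from the source:

```python
import numpy as np

# Toy regression data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))             # inputs
y = X[:, :1] * 2.0 - X[:, 1:] * 0.5           # targets

# Small network: 2 inputs -> 8 hidden (ReLU) -> 1 output
W1, b1 = rng.standard_normal((2, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
lr = 0.1                                      # learning rate

for epoch in range(200):
    # Forward pass
    h = np.maximum(0, X @ W1 + b1)            # hidden layer with ReLU
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)           # MSE loss

    # Backward pass: chain rule applied layer by layer
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (h > 0)           # ReLU gradient
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent: step opposite to the gradient direction
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```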
9. What is a Loss Function?
A measure of the difference between predictions and actual labels, guiding training.
10. Why are Activation Functions necessary?
They introduce non-linearity so networks can model complex relationships.
11. Name popular activation functions.
Common ones include sigmoid, tanh, ReLU, and softmax.
12. What is ReLU and why is it preferred?
It outputs zero for negative inputs and is linear for positive inputs; it is simple to compute, speeds up convergence, and reduces vanishing gradients.
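Minimal NumPy definitions of the activation functions named in Q11 and Q12, as an illustrative sketch rather than an optimized implementation:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))      # squashes values into (0, 1)

def tanh(z):
    return np.tanh(z)                # squashes into (-1, 1), zero-centered

def relu(z):
    return np.maximum(0, z)          # zero for negatives, identity for positives

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()               # outputs sum to 1 (probabilities)

z = np.array([-2.0, 0.0, 2.0])
print(relu(z), sigmoid(z), softmax(z))
```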
13. Define an Epoch.
One complete pass through the entire training dataset.
14. Explain Data Normalization.
Scaling features (e.g., zero mean, unit variance) to stabilize and accelerate training.
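A minimal sketch of z-score normalization, assuming per-feature (column-wise) scaling; the sample data is illustrative:

```python
import numpy as np

def normalize(X, eps=1e-8):
    """Z-score normalization: zero mean, unit variance per feature."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps)   # eps guards against division by zero

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
print(normalize(X))   # each column now has mean 0 and std 1
```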
15. What is Overfitting?
When a model learns noise in the training data, performing poorly on new data.
16. Define Regularization.
Techniques like L1/L2 or dropout that prevent overfitting by penalizing complexity.
17. What is Dropout?
Randomly disables neurons during training to reduce co-adaptation and improve generalization.
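A minimal sketch of (inverted) dropout; the drop probability p is a hyperparameter, and the rescaling by 1 - p keeps the expected activation unchanged:

```python
import numpy as np

def dropout(h, p=0.5, training=True):
    """Inverted dropout: zero out activations with probability p during training."""
    if not training:
        return h                              # no-op at inference time
    mask = np.random.rand(*h.shape) > p       # keep each unit with prob 1 - p
    return h * mask / (1 - p)                 # rescale to preserve expected value

h = np.ones((2, 4))
print(dropout(h, p=0.5))
```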
18. What is Batch Normalization?
Normalizes layer inputs over each mini-batch to stabilize their distribution and speed up training.
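A minimal sketch of the batch-normalization forward pass at training time; the learnable scale (gamma) and shift (beta) are shown with illustrative values, and the running statistics used at inference are omitted:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta            # learnable scale and shift

x = np.random.randn(32, 4) * 10 + 5        # batch of 32 samples, 4 features
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))   # ~0 mean, ~1 std per feature
```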
19. Define a Deep Belief Network (DBN).
A generative model built from stacked Restricted Boltzmann Machines for unsupervised learning.
20. What is the Vanishing Gradient Problem?
Gradients become very small in deep networks, slowing learning in the early layers.
21. What is Transfer Learning?
Reusing pre-trained network weights for a new task to reduce data and time needs.
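One common transfer-learning pattern, sketched with Keras; the 10-class head and input size are hypothetical placeholders for a new task:

```python
import tensorflow as tf

# Load a ResNet50 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                     # freeze the pre-trained weights

# Attach a small new head for a hypothetical 10-class task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```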
22. When is Deep Learning preferred over traditional ML?
For complex tasks and unstructured data (images, speech), where it yields better performance with large datasets.
23. What hardware is suited for Deep Learning?
GPUs and TPUs accelerate training through parallel computation.
24. List key applications of Deep Learning.
Computer vision, NLP, speech recognition, self-driving cars, and game playing.
25. Name popular Deep Learning frameworks.
TensorFlow, PyTorch, and Keras.

II. Five-Mark Questions

1. Explain the evolution from machine learning to deep learning.
2. Describe the architecture of an artificial neuron and how perceptrons work.
3. Compare single-layer perceptrons with multi-layer perceptrons (MLPs).
4. Explain the forward and backward pass in a deep neural network.
5. What are activation functions? Describe sigmoid, tanh, and ReLU, including their
advantages/disadvantages.
6. Derive and explain gradient descent and its variants (batch, stochastic, mini-batch).
7. Define overfitting. How do techniques like regularization, dropout, and early stopping
mitigate it?
8. Explain batch normalization and its advantages.
9. Define a Deep Belief Network (DBN) and contrast it with standard feedforward
networks.
10. Describe the vanishing/exploding gradient problem. How do architectures like ResNet
help address it?
11. Explain the concept and benefits of transfer learning in deep networks.
12. Discuss the importance of dataset splitting: training, validation, and test sets, plus cross-validation.
13. Explain the bias-variance trade-off in deep learning models.
14. Describe key components and structure of a convolutional neural network (CNN).
15. Illustrate the role of GPUs for deep learning and how parallel processing boosts training.
16. Discuss common loss functions (cross-entropy, mean squared error). When to use each?
17. Explain hyperparameters in deep learning and describe tuning strategies (grid, random
search).
18. What are deep generative models? Contrast autoencoders, VAEs, and GANs.
19. Explain attention mechanisms and their growing role in vision models (e.g., Vision
Transformers).
20. How did AlexNet change the course of computer vision? Highlight its architecture and key innovations.
21. Compare Inception (GoogLeNet) to basic CNNs: key innovations and challenges.
22. Explain class activation mapping (CAM) and its role in explainable AI.
23. Discuss practical applications of deep learning in computer vision.
24. Explain reinforcement learning basics and how deep networks enhance RL agents.
25. List popular deep learning frameworks and compare their features (e.g., TensorFlow, PyTorch, Keras).
