Ann DL

This document provides an overview of deep learning, including its definition, growth, and various applications such as image classification and natural language processing. It details the types of neural networks, key components, hyperparameters, and training methods like forward and backward propagation. Additionally, it discusses activation functions and optimizers used in neural networks.

Introduction to Deep Learning

Where Does Deep Learning Fit?


What is Deep Learning?
Why is Deep Learning Growing?
Applications of Deep Learning:-
Image Classification
Object Detection and Segmentation
Recommendation Systems
Autonomous Vehicles
Natural Language Processing (NLP)
Speech Recognition
Machine Learning Working Flow Chart:-
Data Import → Feature Extraction → Train Model → Prediction

Deep Learning Working Flow Chart:-
Data Import → Train Model (features are learned automatically) → Prediction / Accuracy


Types of neural network models:-

Feedforward Neural Network (FNN)
Artificial Neural Network (ANN)
Convolutional Neural Network (CNN)
Recurrent Neural Network (RNN)
Long Short-Term Memory (LSTM) Network
Neural network working procedure:-
❑ Key Components of a Neural Network:

1.Input Layer
2.Hidden Layers
3.Output Layer
4.Activation Functions
5.Weights
6.Biases
7.Loss Function
8.Optimizer
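The eight components above can be seen together in a tiny network. The sketch below is illustrative (the layer sizes, activations, and squared-error loss are assumptions, not from the slides):

```python
import numpy as np

# Minimal sketch: input layer (3 features) -> hidden layer (4 neurons) -> output (1 neuron).
rng = np.random.default_rng(0)

X = np.array([[2.0, 4.0, 8.0]])   # input layer: one sample, 3 features
W1 = rng.normal(size=(3, 4))      # weights: input -> hidden
b1 = np.zeros(4)                  # biases: hidden layer
W2 = rng.normal(size=(4, 1))      # weights: hidden -> output
b2 = np.zeros(1)                  # bias: output layer

def relu(z):                      # activation function (hidden layer)
    return np.maximum(0.0, z)

def sigmoid(z):                   # activation function (output layer)
    return 1.0 / (1.0 + np.exp(-z))

hidden = relu(X @ W1 + b1)        # hidden-layer computation
y_hat = sigmoid(hidden @ W2 + b2) # output layer: predicted value in (0, 1)

y = 1.0                           # actual output
loss = (y - y_hat) ** 2           # loss function (squared error here)
# An optimizer would now adjust W1, b1, W2, b2 to reduce this loss.
```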
❑ Hyperparameters of a Neural Network:

1.Learning Rate
2.Number of Hidden Layers
3.Number of Neurons in Hidden Layers
4.Activation Function for Hidden Layers
5.Batch Size
6.Epochs
7.Regularization
8.Dropout Rate
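These hyperparameters are set before training, not learned. A quick sketch of where they plug in (all values below are illustrative choices, not recommendations from the slides):

```python
# Hypothetical hyperparameter settings for a small network.
config = {
    "learning_rate": 0.01,
    "num_hidden_layers": 2,
    "neurons_per_hidden_layer": 16,
    "hidden_activation": "relu",
    "batch_size": 32,
    "epochs": 10,
    "regularization_l2": 1e-4,
    "dropout_rate": 0.5,
}

def num_updates(n_samples, batch_size, epochs):
    """Total weight updates = batches per epoch x epochs."""
    batches_per_epoch = -(-n_samples // batch_size)  # ceiling division
    return batches_per_epoch * epochs

# e.g. 1000 samples, batch size 32, 10 epochs -> 32 batches/epoch -> 320 updates
total = num_updates(1000, config["batch_size"], config["epochs"])
```

Batch size and epochs together determine how many gradient steps the optimizer takes, which is why they interact so strongly with the learning rate.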
How to Train a Neural Network with Forward and Backward Propagation:-

Forward Propagation:-
Example:-
  Play   Study   Sleep   O/P
  2h     4h      8h      1

Y = actual output = 1
Ŷ = predicted output = 0
Loss = (Y - Ŷ) = (1 - 0) = 1
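The worked example above, written out as code (the zero prediction stands for an untrained network's first forward pass):

```python
# Inputs: hours of Play, Study, Sleep; actual output 1 (from the example above).
x = [2.0, 4.0, 8.0]
y = 1.0          # Y  = actual output
y_hat = 0.0      # Y^ = predicted output before training

loss = y - y_hat # Loss = (Y - Y^) = (1 - 0) = 1
```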
Backward Propagation:-
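Backward propagation uses the chain rule to compute how the loss changes with each weight, then moves the weight in the opposite direction. A one-weight sketch (the linear neuron ŷ = w·x, the squared-error loss, and the learning rate are illustrative assumptions):

```python
# One-weight backward propagation sketch.
x, y = 2.0, 1.0   # input and actual output
w = 0.1           # initial weight
lr = 0.05         # learning rate

for step in range(50):
    y_hat = w * x                 # forward pass
    loss = (y_hat - y) ** 2       # squared-error loss
    grad = 2 * (y_hat - y) * x    # dLoss/dw via the chain rule
    w = w - lr * grad             # backward pass: step against the gradient

# w converges toward y / x = 0.5, driving the loss toward 0
```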
❑ Activation Function :-

An activation function is a mathematical function applied to the output of a neuron in a neural network. It helps determine whether the neuron switches ON or OFF.

❖ Types of Activation Function:-

1.Sigmoid Function (Logistic Function)
2.Hyperbolic Tangent (Tanh) Function
3.Rectified Linear Unit (ReLU)
4.Leaky ReLU
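The four functions above have simple closed forms; reference implementations (the 0.01 Leaky-ReLU slope is a common default, assumed here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                     # squashes to (-1, 1), zero-centered

def relu(z):
    return np.maximum(0.0, z)             # 0 for negatives, identity otherwise

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope keeps negatives "alive"

z = np.array([-2.0, 0.0, 2.0])
# sigmoid(z) ≈ [0.119, 0.5, 0.881]; relu(z) = [0, 0, 2]; leaky_relu(z) = [-0.02, 0, 2]
```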
❑ Sigmoid / Logistic Activation Function:-

Advantages of sigmoid activation function:-
1) Its gradient is steepest near zero, so small input changes there produce a clear change in output.
2) It is generally used in the output layer for binary classification.

Disadvantages of sigmoid activation function:-
1) It can cause the vanishing gradient problem, since its derivative is at most 0.25 and saturates for large inputs.
2) Its output is not zero-centered, which can slow down training.
❑ ReLU Activation Function:-

Advantages of ReLU activation function:-
1) ReLU is faster than the sigmoid function, and its derivative is faster to compute.
2) It does not activate all the neurons at the same time (negative inputs output 0).
3) It mitigates the vanishing gradient problem.

Disadvantages of ReLU activation function:-
1) It suffers from the "dying ReLU" problem, where neurons stuck with negative inputs always output 0 and stop updating.
❑ Leaky-ReLU Activation Function:-

Advantages of Leaky-ReLU activation function:-
1) It fixes the "dying ReLU" problem by giving negative inputs a small non-zero slope.
2) It speeds up training.

Disadvantages of Leaky-ReLU activation function:-
1) The slope for negative inputs is a fixed hyperparameter, so its results can be inconsistent for negative values.
Optimizer:-
Optimizers are algorithms or methods used to change the attributes of a neural network, such as its weights and learning rate, in order to reduce the loss.

1.Gradient Descent optimizer
2.Mini-batch Gradient Descent
3.Adam (Adaptive Moment Estimation)
4.RMSprop (Root Mean Square Propagation)
5.Adagrad (Adaptive Gradient Algorithm)
6.Adadelta
7.Nadam
Steps:-
1) Compute the gradient (slope).
2) Take a step in the direction opposite to the gradient.
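The two steps above, iterated on a simple one-variable function f(w) = (w − 3)², whose minimum sits at w = 3 (the function and learning rate are illustrative choices):

```python
# Plain gradient descent: repeat "compute gradient, step against it".
w = 0.0
lr = 0.1

for _ in range(100):
    grad = 2 * (w - 3)  # step 1: compute the gradient (slope) of f at w
    w -= lr * grad      # step 2: move opposite to the gradient

# w is now close to 3, the minimizer of f
```

The same loop underlies every optimizer in the list; Adam, RMSprop, and friends differ only in how they scale or accumulate the gradient before stepping.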
Dropout:-
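Dropout randomly deactivates a fraction of neurons on each training step (the dropout rate listed under hyperparameters), which reduces overfitting. A minimal sketch of the standard "inverted dropout" formulation (the rate and layer size here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 0.5                                    # dropout rate: fraction of neurons zeroed

activations = np.ones((1, 8))                 # outputs of 8 hidden neurons
mask = rng.random(activations.shape) >= rate  # keep each neuron with prob 1 - rate
dropped = activations * mask / (1.0 - rate)   # rescale so the expected value is unchanged

# At test time no dropout is applied; the full activations are used.
```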
