

Sobhasaria Group of Institutions, Sikar


DEPARTMENT OF INFORMATION SCIENCE & ENGINEERING

CERTIFICATE

Certified that the Seminar on topic ………………………………………. has been successfully presented by ………, bearing University Roll No. ………………., in partial fulfillment of the requirements for the degree of Bachelor of Engineering in CSE of Bikaner Technical University, Bikaner, during the academic year 2023-2024. The Seminar report has been approved as it satisfies the academic requirements in respect of Seminar work for the said degree.

Mr. …………………….                                        Mr. Dileep Kumar Agarwal
IT Lab Coordinator                                    HOD, Department of CSE
Assistant Professor (CSE)

DECLARATION

I, ………., a student of III/V/VII Semester B.Tech in Computer Science and Engineering, Sobhasaria Group of Institutions, Sikar, hereby declare that the Seminar entitled “ ” has been carried out by me and submitted in partial fulfillment of the requirements of the IV year of the degree of Bachelor of Technology in Computer Science and Engineering of Bikaner Technical University, Bikaner, during the academic year 2024-2025.

Date: ……                                              Name: ……

Place: Sikar (Rajasthan)                              Roll No.: ……

ACKNOWLEDGEMENT

This is an opportunity to express my heartfelt gratitude to the people who were part of this seminar in numerous ways, and who gave me unending support right from the beginning of the Seminar.

I am grateful to the seminar coordinators, Mr. …………… and Mr. ……………, for giving guidelines to make the seminar successful. Without their guidance and persistent help, this report would not have been possible. I must also acknowledge the faculty and staff of the Department of Computer Science & Engineering, Sobhasaria Group of Institutions, Sikar.

I extend my thanks to Mr. Dileep K Agarwal, Head of the Department, for his cooperation and guidance.

I want to give sincere thanks to the Principal, Dr. L. Solanki, for his valuable support.

Yours sincerely,
(Student Name)
University Roll No. ……

INTRODUCTION TO NEURAL NETWORKS AND DEEP LEARNING

Machine learning is a broad field within AI (artificial intelligence) that focuses on developing algorithms and models that can learn from data, identify patterns, and make predictions or decisions. Building on this, neural networks are a subset of machine learning inspired by biological neurons: algorithmic models consisting of artificial neurons organized in layers. Traditional neural networks (such as multilayer perceptrons) have limited ability to solve complex problems, which led to the development of deep learning. Deep learning is thus a subfield of neural networks that focuses on deep (multi-layer) neural networks. One of the key ideas behind deep learning is the use of multiple layers to extract progressively more abstract features from data. Deep neural networks can automatically learn representations of data at different levels of abstraction, which allows them to effectively solve complex problems in areas such as computer vision, natural language processing, and others.

Fig: The relationship between AI, machine learning, deep learning, and neural networks

 Artificial Intelligence (AI): This is like a broad umbrella term for making machines smart. Think of
AI as teaching machines to do tasks that typically require human intelligence, like understanding
speech or playing chess.

 Machine Learning (ML): This is a specific branch of AI. It's about teaching machines to learn from
data. Imagine showing a machine tons of pictures of cats until it learns to recognize a cat on its
own.

 Deep Learning: A deeper dive into ML, where machines learn from large amounts of data using
structures called neural networks. It's like teaching a machine to think through many layers,
similar to how our brain works.

 Neural Network: This is the backbone of deep learning. It's a system of algorithms that mimics
the human brain’s neurons. Each "neuron" processes a little piece of information, and together,
they solve complex tasks.

Chapter 1
NEURAL NETWORKS

1.1 What is a Neural Network?


A neural network is a computational model designed to simulate the way human brains
process information. Inspired by the structure and function of the brain's neurons, artificial
neural networks (ANNs) consist of interconnected nodes (neurons) that work together to
solve complex problems. These networks are capable of learning from data, making
decisions, and recognizing patterns, which makes them a foundational element of modern
artificial intelligence (AI) and machine learning (ML).

1.2 Components of Neural Networks


1. Neurons:
- Nodes: The basic units of a neural network, each node represents a neuron that
processes and transmits information.
- Weights: Connections between neurons have weights that are adjusted during training to
minimize errors and optimize the network's performance.

Fig 1.2: Neurons

2. Layers:
- Input Layer: Receives the initial data and passes it to the hidden layers.

- Hidden Layers: One or more layers that transform the input data through a
series of computations.
- Output Layer: Produces the final output of the network, typically representing the
predicted values or classifications.

Fig 1.2.2: Layers

3. Activation Functions:
- Functions applied to the output of each neuron to introduce non-linearity, enabling the
network to learn complex patterns. Common activation functions include Sigmoid, Tanh, and
ReLU (Rectified Linear Unit).
4. Bias:
- Added to the weighted sum of inputs to each neuron to shift the activation function,
improving the network's flexibility and accuracy.
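
To make these components concrete, the following minimal NumPy sketch computes the output of a single neuron; the input values, weights, and bias are made-up numbers, and sigmoid and ReLU are the activation functions named above.

import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Keeps positive values, zeroes out negatives
    return np.maximum(0.0, z)

# Hypothetical inputs and parameters for one neuron
x = np.array([0.5, -1.2, 3.0])   # inputs from the previous layer
w = np.array([0.4, 0.7, -0.2])   # connection weights (learned in training)
b = 0.1                          # bias shifts the activation function

z = np.dot(w, x) + b             # weighted sum of inputs plus bias
print(sigmoid(z), relu(z))       # the neuron's output under each activation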

Chapter 2
Types of Neural Networks

1. Feedforward Neural Network (FNN)

Description:
The simplest type of neural network where data flows in one direction—from the input layer
to the output layer.
Example:
 Predicting house prices based on features like size, location, and age.
Advantages:
 Simple to design and implement.
 Effective for tasks with structured data.
Disadvantages:
 Cannot handle sequential data or memory-based tasks.
 Limited capacity for complex tasks.
Applications:
 Regression tasks.
 Basic classification (e.g., email spam detection).
Usage:
 Predicting continuous values or binary/multi-class classification.
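
As a rough illustration of the house-price example above, here is a minimal feedforward network sketch using the Keras API (this assumes TensorFlow is installed; the data is random stand-in data rather than a real housing dataset):

import numpy as np
import tensorflow as tf

# Stand-in data: 100 houses, 3 features (size, location index, age)
X = np.random.rand(100, 3)
y = np.random.rand(100, 1)  # normalized prices

# Data flows one way: input -> hidden layers -> single regression output
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1)  # continuous output: predicted price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)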
2. Convolutional Neural Network (CNN)
Description:
Specialized for processing grid-like data such as images by using convolutional layers.
Example:
 Image classification, such as recognizing cats and dogs in photos.
Advantages:
 Excellent at feature extraction from images.

 Reduces the number of parameters using pooling layers.
Disadvantages:
 Computationally intensive.
 Requires a large amount of labeled data.
Applications:
 Facial recognition.
 Medical image analysis.
Usage:
 Computer vision tasks like object detection and image segmentation.

3. Recurrent Neural Network (RNN)


Description:
Handles sequential data by maintaining memory of past inputs.
Example:
 Predicting the next word in a sentence.
Advantages:
 Effective for time-series and sequential data.
 Can process variable-length inputs.
Disadvantages:
 Struggles with long-term dependencies (vanishing gradient problem).
 Computationally expensive.
Applications:
 Language translation.
 Stock market prediction.
Usage:
 Speech recognition and text generation.
4. Long Short-Term Memory (LSTM)
Description:
A type of RNN designed to handle long-term dependencies by using memory cells.

Example:
 Predicting weather conditions based on historical data.
Advantages:
 Solves the vanishing gradient problem.
 Handles long sequences effectively.

Disadvantages:
 Computationally heavy.
 Slower to train.
Applications:
 Text summarization.
 Video analysis.
Usage:
 Sequential data analysis with long-term dependencies.
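
A minimal Keras sketch of an LSTM on sequential data, again assuming TensorFlow is installed; the sequences are random stand-ins for, say, 30 days of weather readings used to predict the next value:

import numpy as np
import tensorflow as tf

# Stand-in data: 200 sequences, each 30 time steps with 1 feature
X = np.random.rand(200, 30, 1)
y = np.random.rand(200, 1)   # the value to predict after each sequence

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 1)),
    # Memory cells and gates let the LSTM carry information across long sequences
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)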

5. Transformer Neural Network


Description:
Uses attention mechanisms to focus on important parts of input sequences.
Example:
 Language translation using models like BERT or GPT.
Advantages:
 Handles very long sequences.
 Parallelizable for faster training.
Disadvantages:
 Requires large datasets.
 High computational cost.
Applications:
 Chatbots (e.g., OpenAI's ChatGPT).
 Document summarization.

Usage:
 Natural Language Processing (NLP) tasks.
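
The attention mechanism at the heart of transformers can be sketched in a few lines of NumPy. This is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, applied here as self-attention over random stand-in token embeddings:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how strongly each query matches each key
    # Row-wise softmax turns scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                # each output is a weighted mix of the values

# Stand-in embeddings for a 4-token sequence, dimension 8
tokens = np.random.rand(4, 8)
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8): every token now carries context from the others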
6. Autoencoder
Description:
Unsupervised neural network used for dimensionality reduction or feature extraction.
Example:
 Reducing the dimensionality of customer data for clustering.

Advantages:
 Reduces noise in data.
 Learns efficient data representations.
Disadvantages:
 Sensitive to input data quality.
 Limited interpretability.
Applications:
 Image compression.
 Anomaly detection.
Usage:
 Data preprocessing and denoising.
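
A minimal Keras autoencoder sketch, assuming TensorFlow is installed; the 20-feature customer records are random stand-ins. The key point is that the inputs are also the targets, so the network learns to compress and reconstruct its own input:

import numpy as np
import tensorflow as tf

X = np.random.rand(500, 20)  # stand-in data: 500 records, 20 features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(4, activation="relu"),      # encoder: 20 -> 4 bottleneck
    tf.keras.layers.Dense(20, activation="sigmoid")   # decoder: 4 -> 20 reconstruction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, X, epochs=10, verbose=0)  # note: targets are the inputs themselves

The 4-unit bottleneck activations can then serve as the reduced representation for clustering.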

7. Generative Adversarial Network (GAN)


Description:
Consists of two networks (generator and discriminator) competing to improve each other.
Example:
 Generating realistic human faces.
Advantages:
 Produces high-quality synthetic data.
 Can mimic data distributions.
Disadvantages:

 Training can be unstable.
 Requires careful tuning.
Applications:
 Creating synthetic images or videos.
 Enhancing image resolution.
Usage:
 Art creation and data augmentation.

8. Radial Basis Function Network (RBFN)


Description:
Uses radial basis functions as activation functions for classification or regression tasks.
Example:
 Predicting customer churn in businesses.
Advantages:
 Effective for interpolation tasks.
 Fast training.
Disadvantages:
 Limited scalability.
 Sensitive to parameter selection.
Applications:
 Function approximation.
 Signal processing.
Usage:
 Classification with small datasets.

9. Self-Organizing Map (SOM)


Description:
Unsupervised network used for visualizing and clustering high-dimensional data.
Example:

 Clustering customer segments in marketing.
Advantages:
 Reduces dimensionality effectively.
 Easy to visualize clusters.
Disadvantages:
 Limited to clustering tasks.
 Requires careful tuning of parameters.
Applications:
 Market research.
 Feature selection.
Usage:
 Understanding high-dimensional data.

10. Reinforcement Learning Neural Network


Description:
Trains an agent to make sequential decisions using rewards and penalties.
Example:
 Teaching a robot to navigate a maze.
Advantages:
 Learns optimal strategies through exploration.
 Adaptive to dynamic environments.
Disadvantages:
 Requires significant training time.
 May converge to suboptimal solutions.
Applications:
 Game playing (e.g., AlphaGo).
 Robotics and autonomous systems.
Usage:
 Problems requiring decision-making under uncertainty.
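
Deep reinforcement learning replaces a value table with a neural network, but the reward-and-penalty idea is easiest to see in plain tabular Q-learning. The sketch below uses a hypothetical five-state corridor "maze" where reaching the rightmost state earns a reward:

import numpy as np

n_states, n_actions = 5, 2     # toy corridor; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9        # learning rate and discount factor

for episode in range(500):
    s = 0
    while s != 4:              # episode ends at the goal state
        a = np.random.randint(n_actions)   # pure exploration, for simplicity
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0    # reward only at the goal
        # Core update: nudge Q toward reward plus discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # states 0-3 should learn action 1 ("right")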

Fig 2.1: Types of neural networks

Chapter 3
3.1 How Neural Networks Work
 Forward Propagation:
- Input data is passed through the network's layers. Each neuron processes its inputs using
a weighted sum and an activation function, producing an output that becomes the input for
the next layer.

 Backward Propagation (Backpropagation):


- During training, the network's output is compared to the desired output, and an error is
calculated. The network then adjusts its weights using an optimization algorithm (like
gradient descent) to minimize this error.
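
Both passes can be shown end to end in a small NumPy sketch. This hypothetical 2-4-1 network learns XOR with plain gradient descent; the learning rate and iteration count are illustrative, not tuned values:

import numpy as np

np.random.seed(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = np.random.randn(2, 4), np.zeros((1, 4))  # input -> hidden
W2, b2 = np.random.randn(4, 1), np.zeros((1, 1))  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward propagation: weighted sum + activation, layer by layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward propagation: chain rule pushes the error back through the layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent: move each parameter against its error gradient
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # predictions should approach [[0], [1], [1], [0]]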

3.2 Training Neural Networks


 Data Preparation:
- Data is divided into training, validation, and test sets. Training data is used to train the
network, validation data to tune hyperparameters, and test data to evaluate performance.

 Weight Initialization:
- Weights are initialized randomly or using specific methods (like He or Xavier initialization)
to break symmetry and accelerate learning.

 Learning Rate:
- A crucial hyperparameter that determines the step size for weight updates. Too high can
cause instability, while too low can slow convergence.

 Epochs and Batch Size:


- Epochs: One complete pass through the entire training dataset.
- Batch Size: The number of training samples processed before the model's weights are
updated. Smaller batches give more frequent (but noisier) updates, while larger batches
need more memory per step (see the sketch after this list).

 Optimization Algorithms:

- Algorithms like Stochastic Gradient Descent (SGD), Adam, and RMSprop are used to
update the weights and biases to minimize the error.
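
The hyperparameters above come together in a typical training call. This Keras sketch (assuming TensorFlow, with random stand-in data) shows where the optimizer, learning rate, epochs, batch size, and validation split each appear:

import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 10)             # stand-in features
y = np.random.randint(0, 2, (1000, 1))   # stand-in binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy")
model.fit(X, y,
          epochs=20,             # 20 complete passes over the training set
          batch_size=32,         # weights update after every 32 samples
          validation_split=0.2)  # hold out 20% for hyperparameter tuning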

3.3 Applications of Neural Networks

 Image Recognition:
- Used in facial recognition, object detection, and medical imaging to identify patterns and
features within images.

 Natural Language Processing (NLP):


- Powers applications like chatbots, language translation, and sentiment analysis by
understanding and generating human language.

 Speech Recognition:
- Converts spoken language into text, enabling voice-activated systems and transcription
services.

 Autonomous Vehicles:
- Helps in object detection, path planning, and decision-making processes for self-driving
cars.

Chapter 4
4.1 Advantages of Neural Networks

 Learning Capability:
- Can learn and model complex, non-linear relationships within data.
 Adaptability:
- Adjust to changes in the input pattern, making them versatile across various applications.
 Data Versatility:
- Effective with various data types, including images, audio, and text.
 Parallel Processing:
- Capable of handling multiple inputs simultaneously, making them efficient for large-scale
data.
4.2 Challenges and Limitations

 Data Dependency:
- Requires large datasets for effective training, which can be expensive and time-
consuming to gather.

 Computational Requirements:
- High computational cost and energy consumption, necessitating powerful hardware like
GPUs.

 Interpretability:
- Often considered a "black box" due to the difficulty in understanding and interpreting the
decision-making process.

 Overfitting:
- Risk of the network being too tailored to the training data, resulting in poor performance
on unseen data.

 Hyperparameter Tuning:
- Selecting the optimal architecture and hyperparameters (such as learning rate, number of
layers, and activation functions) often requires expertise and extensive experimentation.

 Sensitivity to Input Data:


- Neural networks can be sensitive to the quality and nature of input data. Noise or
irrelevant features in the data can adversely affect performance.

4.4 Example: Image Recognition with Convolutional Neural Networks (CNN)


4.4.1 Input Image:
- An image is fed into the network, typically resized to a standard dimension (e.g., 224x224
pixels).

4.4.2 Convolutional Layers:
- Feature extraction is performed using convolutional layers, which apply filters to detect
edges, textures, and other image features.

4.4.3 Pooling Layers:


- Reduce the spatial dimensions of the feature maps, retaining essential information while
reducing computational complexity.

4.4.4 Fully Connected Layers:


- These layers take the flattened output of the convolutional and pooling layers and make
final predictions based on the extracted features.

4.4.5 Output:
- The network produces a probability distribution over possible classes, indicating the likelihood of
the input image belonging to each class (e.g., dog, cat, car).
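
Assuming TensorFlow/Keras, the pipeline above can be sketched as follows; the layer sizes and the three-class output are illustrative choices, not a prescribed architecture:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),        # 4.4.1: resized input image
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # 4.4.2: filters detect edges/textures
    tf.keras.layers.MaxPooling2D(2),                   # 4.4.3: shrink the feature maps
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # 4.4.4: fully connected layer
    tf.keras.layers.Dense(3, activation="softmax")     # 4.4.5: probabilities over 3 classes
])
model.summary()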

Chapter 5

Conclusion

Neural networks represent a remarkable advancement in artificial intelligence, offering
robust solutions for a wide range of complex problems. Their ability to learn from data,
adapt, and generalize makes them indispensable in many fields, from healthcare to finance.
Despite their challenges and limitations, ongoing research and technological advancements
continue to enhance their capabilities and applications, promising even greater impact in the
future.

Chapter 1

Deep Learning

1.1 What is Deep Learning?

Deep learning is a subset of machine learning, which itself is a subset of artificial intelligence
(AI). It is inspired by the structure and function of the human brain, using artificial neural
networks (ANNs) with multiple layers (hence "deep") to model and solve complex problems.
Deep learning has revolutionized fields like computer vision, natural language processing,
robotics, and healthcare by enabling computers to learn from vast amounts of data.

1.1.1 Definition of Deep Learning

Deep learning involves training neural networks with many layers, allowing the system to
automatically discover intricate patterns and representations in data. The "deep" in deep
learning refers to the depth of these layers, which can number in the hundreds or
thousands.
 Unlike traditional machine learning, where feature engineering (manual extraction of
relevant features) is often necessary, deep learning models learn features
automatically from raw data.

1.2 Components of Deep Learning

a) Neural Networks:
o Deep learning models are built upon artificial neural networks, which consist of layers of
interconnected nodes (neurons).
o Each layer transforms the input data in a non-linear way, allowing the network to learn
complex representations.
b) Layers:
o Input Layer: The initial layer that receives the raw data.

o Hidden Layers: Multiple layers between input and output that perform various
transformations. The depth (number of hidden layers) is what makes the network
"deep."
o Output Layer: The final layer that provides the prediction or classification.
c) Activation Functions:
o Functions that introduce non-linearity to the model, enabling it to learn intricate
patterns. Common activation functions include Sigmoid, Tanh, and ReLU (Rectified Linear
Unit).
d) Weights and Biases:
o Parameters that are adjusted during training to minimize the error in predictions.
Weights determine the strength of connections between neurons, while biases adjust
the output along with the weighted sum of inputs.

e) Backpropagation:
o A key algorithm used to train deep learning models. It involves propagating the error
from the output layer back through the network to update weights and biases,
minimizing the overall error.

1.3 How Deep Learning Works

a) Forward Pass:
o Input data passes through the network layer by layer. Each neuron applies a weighted
sum to its inputs, adds a bias, and passes the result through an activation function.
b) Loss Function:
o Measures the difference between the predicted output and the actual target. Common
loss functions include Mean Squared Error (MSE) for regression tasks and Cross-Entropy
Loss for classification tasks; both are computed in the sketch after this list.
c) Backpropagation:
o After the forward pass, the loss is computed and backpropagated through the network.
Gradients of the loss with respect to each weight are calculated, and weights are
updated using an optimization algorithm such as Stochastic Gradient Descent (SGD) or
Adam.
d) Training:

o The model is trained over multiple iterations (epochs), where each epoch consists of a
forward pass and backpropagation. The process continues until the model converges,
meaning the loss no longer significantly decreases.
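
Both loss functions mentioned above reduce to short formulas. A minimal NumPy sketch with made-up targets and predictions:

import numpy as np

# Mean Squared Error, typical for regression
y_true = np.array([2.0, 3.5, 5.0])
y_pred = np.array([2.1, 3.0, 4.8])
mse = np.mean((y_true - y_pred) ** 2)

# Cross-Entropy Loss, typical for classification:
# one-hot target compared against predicted class probabilities
t = np.array([0.0, 1.0, 0.0])   # the true class is the second one
p = np.array([0.1, 0.8, 0.1])   # the model's predicted probabilities
cross_entropy = -np.sum(t * np.log(p))

print(mse, cross_entropy)       # smaller is better for both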

1.4 Types of Deep Learning Architectures

a) Feedforward Neural Networks (FNN):


o The simplest type of deep learning model, where data flows unidirectionally from input
to output. Used for tasks like image and text classification.
b) Convolutional Neural Networks (CNN):
o Specially designed for processing grid-like data such as images. They use convolutional
layers to detect spatial hierarchies of patterns in images.
o Example: Image classification and object detection, like recognizing faces in photos.
c) Recurrent Neural Networks (RNN):
o Suitable for sequential data, where connections between nodes form a directed graph
along a sequence. Used for tasks like language modeling and time-series forecasting.
o Example: Predicting the next word in a sentence or stock price prediction.
d) Long Short-Term Memory Networks (LSTM):
o A type of RNN that can capture long-term dependencies by avoiding the vanishing
gradient problem through the use of gating mechanisms.
o Example: Language translation and speech recognition.
e) Generative Adversarial Networks (GANs):
o Consist of two networks—a generator that creates data and a discriminator that
evaluates data. They are trained in a competitive setting to improve the quality of
generated data.
o Example: Generating realistic images or videos.
f) Autoencoders:
o Unsupervised learning models used for dimensionality reduction and feature learning.
Consist of an encoder that compresses data and a decoder that reconstructs data.
o Example: Anomaly detection by comparing input data to reconstructed data.

g) Transformer Networks:
o Use self-attention mechanisms to handle dependencies and relationships in data
sequences. Highly effective for natural language processing tasks.
o Example: Machine translation and text summarization.

Chapter 2

2.1 Advantages of Deep Learning

a. High Accuracy
Deep learning models achieve state-of-the-art performance in many tasks, outperforming
traditional machine learning methods.
b. Automatic Feature Extraction
Reduces the need for manual feature engineering, as models learn features directly from
data.
c. Handles Complex Problems
Capable of modeling non-linear relationships and solving problems involving unstructured
data like images, audio, and text.
d. Scalability
Can process large-scale data effectively.

2.2 Disadvantages of Deep Learning

a) Data Requirements:
o Deep learning models require large amounts of labeled data for effective training, which
can be difficult and expensive to obtain.
b) Computational Resources:
o Training deep networks requires significant computational power, often relying on
specialized hardware such as GPUs and TPUs.

c) Interpretability:
o Deep learning models are often considered "black boxes" because their decision-making
processes are not easily interpretable.
d) Overfitting:
o Can overfit to the training data, leading to poor generalization on unseen data.
Techniques like dropout and regularization are used to mitigate this.
e) Hyperparameter Tuning:
o Finding the optimal set of hyperparameters (e.g., learning rate, number of layers)
requires extensive experimentation and expertise.

2.3 Characteristics of Deep Learning

a. Representation Learning
Deep learning excels at hierarchical representation learning, meaning it learns simple
patterns in early layers and increasingly complex patterns in deeper layers.
b. End-to-End Learning
Unlike traditional approaches, where intermediate steps like feature extraction are manually
performed, deep learning models can directly learn from raw inputs to outputs.
c. Scalability
Deep learning models perform better as the size of the dataset increases, making them ideal
for big data applications.

2.4 Examples of Deep Learning


Example 2.4.1: Image Classification
 Task: Classify images of cats and dogs.
 Model: Convolutional Neural Network (CNN).
 Process:
1. Feed raw pixel data into the model.
2. CNN learns to detect edges, textures, and shapes across layers.
3. Output layer classifies the image as "cat" or "dog."
Example 2.4.2: Language Translation

 Task: Translate English sentences into French.
 Model: Transformer network (e.g., Google’s BERT).
 Process:
1. Tokenize the input text.
2. Use attention mechanisms to focus on relevant words.
3. Output translated sentences.

Chapter 3
3.1 Applications of Deep Learning
a. Healthcare
 Example: Diagnosing diseases from medical imaging (e.g., X-rays, MRIs).
 Impact: Improves accuracy and early diagnosis.
b. Autonomous Vehicles
 Example: Detecting objects like pedestrians and traffic signs.
 Impact: Enables safe navigation.
c. Natural Language Processing (NLP)
 Example: Sentiment analysis, chatbots, and virtual assistants.
 Impact: Improves communication between humans and machines.
d. Entertainment
 Example: Recommending movies or songs on platforms like Netflix and Spotify.
 Impact: Enhances user experience through personalization.
e. Robotics
 Example: Teaching robots to perform complex tasks like assembling products.
 Impact: Automates manufacturing processes.

Fig 3.1: Applications of deep learning

3.2 Use Cases

Domain       Task                            Model Used
Healthcare   Disease diagnosis               CNNs, RNNs
Finance      Fraud detection                 LSTMs, Autoencoders
Retail       Customer behavior prediction    RNNs
Media        Image generation                GANs
Education    Adaptive learning systems       RNNs, Transformers

3.3 Future of Deep Learning


Deep learning continues to evolve, with trends like:
 Energy-Efficient Models: Designing smaller, faster models for edge devices.
 Explainable AI (XAI): Making deep learning decisions more interpretable.
 Integration with Quantum Computing: Solving highly complex problems faster.
 Neuromorphic Computing: Mimicking brain architecture to improve efficiency.

Chapter 4

Conclusion

Deep learning represents a paradigm shift in AI, empowering machines to achieve human-level
performance in tasks previously thought impossible. Its ability to handle vast amounts of data and
solve complex problems makes it a cornerstone of modern AI. While challenges like computational
cost and interpretability remain, ongoing research and innovation continue to push the boundaries
of what deep learning can achieve.

Deep learning has become a transformative force in AI, driving advancements across many fields
and enabling machines to perform tasks with remarkable accuracy. Its capacity to learn from vast
amounts of data and uncover hidden patterns makes it indispensable for solving complex problems,
and it promises ever more innovative applications and breakthroughs in the years ahead.

References

Neural Networks:
1. Introduction to Neural Networks - A comprehensive guide by IIT Patna.
2. Neural Network - Wikipedia - An overview of both biological and artificial neural
networks.
3. SpringerLink: Neural Network - Detailed information on artificial neural networks.

Deep Learning:
1. Deep Learning References - Inria - A curated list of references and resources.
2. Deep Learning: A Comprehensive Overview - A structured overview of deep learning
techniques and applications.
3. Deep Learning Book by Goodfellow, Bengio, and Courville - A comprehensive textbook
on deep learning.

