Report: Neural Networks and Deep Learning
CERTIFICATE
Certified that the Seminar on topic ………………………………………. has been successfully presented by ………, bearing University Roll No. ………………..., in partial fulfillment of the requirements for the degree of Bachelor of Engineering in CSE of Bikaner Technical University, Bikaner, during the academic year 2023-2024. The Seminar report has been approved as it satisfies the academic requirements in respect of Seminar work for the said degree.
DECLARATION
I, ………. , student of III/V/VII Semester B.Tech in Computer Science and Engineering, Sobhasaria Group of Institutions, Sikar, hereby declare that the Seminar entitled “ ” has been carried out by me and submitted in partial fulfillment of the requirements for the IV year of the degree of Bachelor of Technology in Computer Science and Engineering of Bikaner Technical University, Bikaner, during the academic year 2024-2025.
Date: …..                                                         Name: ……….
ACKNOWLEDGEMENT
This is an opportunity to express my heartfelt gratitude to the people who were part of this seminar in numerous ways, people who gave me unending support right from the beginning of the Seminar. I thank Mr. …………………... for giving the guidelines that made the seminar successful; without his guidance and persistent help this report would not have been possible. I must also acknowledge the faculty and staff of Computer Science and Engineering at Sobhasaria Group of Institutions, Sikar.
I extend my thanks to Mr. Dileep K Agarwal, Head of the Department, for his cooperation and guidance.
I want to give sincere thanks to the Principal, Dr. L. Solanki, for his valuable support.
INTRODUCTION TO NEURAL NETWORKS AND DEEP LEARNING
Machine learning is a broad field within AI (artificial intelligence) that focuses on developing algorithms and models that can learn from data, identify patterns, and make predictions or decisions. Building on this, neural networks are a subset of machine learning inspired by biological neurons. They are algorithmic models consisting of artificial neurons organized in layers. Traditional neural networks (such as multilayer perceptrons), however, have limited ability to solve complex problems, which has led to the development of deep learning. Deep learning is the subfield of neural networks that focuses on deep (multiple-layer) neural networks. One of the key ideas behind deep learning is the use of multiple layers to extract progressively more abstract features from data. Deep neural networks can automatically learn representations of data at different levels of abstraction, which allows them to effectively solve complex problems in areas such as computer vision, natural language processing, and others.
Fig: Relationship between AI, machine learning, deep learning, and neural networks
Artificial Intelligence (AI): This is like a broad umbrella term for making machines smart. Think of
AI as teaching machines to do tasks that typically require human intelligence, like understanding
speech or playing chess.
Machine Learning (ML): This is a specific branch of AI. It's about teaching machines to learn from
data. Imagine showing a machine tons of pictures of cats until it learns to recognize a cat on its
own.
Deep Learning: A deeper dive into ML, where machines learn from large amounts of data using
structures called neural networks. It's like teaching a machine to think through many layers,
similar to how our brain works.
Neural Network: This is the backbone of deep learning. It's a system of algorithms that mimics
the human brain’s neurons. Each "neuron" processes a little piece of information, and together,
they solve complex tasks.
Chapter 1
NEURAL NETWORKS
1. Neurons (Nodes):
- The basic processing units of the network. Each neuron computes a weighted sum of its inputs plus a bias and passes the result through an activation function.
2. Layers:
- Input Layer: Receives the initial data and passes it to the hidden layers.
- Hidden Layers: One or more layers that transform the input data through a series of computations.
- Output Layer: Produces the final output of the network, typically representing the predicted values or classifications.
3. Activation Functions:
- Functions applied to the output of each neuron to introduce non-linearity, enabling the
network to learn complex patterns. Common activation functions include Sigmoid, Tanh, and
ReLU (Rectified Linear Unit).
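As a rough illustration, these three functions are one-liners in Python with numpy (a sketch, independent of any particular framework):

import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes input into (-1, 1), zero-centred
    return np.tanh(z)

def relu(z):
    # Passes positive values through, zeroes out negatives
    return np.maximum(0.0, z)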
4. Bias:
- Added to the weighted sum of inputs to each neuron to shift the activation function,
improving the network's flexibility and accuracy.
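Putting weights, bias, and activation together, a single neuron's computation can be sketched as follows (the input and weight values here are invented for illustration):

import numpy as np

x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.8,  0.1, -0.4])  # weights, one per input
b = 0.25                         # bias shifts the activation

z = np.dot(w, x) + b             # weighted sum plus bias
a = np.maximum(0.0, z)           # ReLU activation -> neuron output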
Chapter 2
Types of Neural Networks
1. Feedforward Neural Network (FNN)
Description:
The simplest type of neural network where data flows in one direction—from the input layer
to the output layer.
Example:
Predicting house prices based on features like size, location, and age.
Advantages:
Simple to design and implement.
Effective for tasks with structured data.
Disadvantages:
Cannot handle sequential data or memory-based tasks.
Limited capacity for complex tasks.
Applications:
Regression tasks.
Basic classification (e.g., email spam detection).
Usage:
Predicting continuous values or binary/multi-class classification.
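As an illustration of the house-price example above, a small feedforward regressor could be sketched with scikit-learn's MLPRegressor (assuming scikit-learn is installed; the toy features and prices below are invented):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy dataset: [size_sqft, location_score, age_years] -> price
X = np.array([[1400, 8, 10], [2000, 6, 3], [900, 9, 25], [1700, 7, 8]])
y = np.array([250_000, 310_000, 180_000, 280_000])

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[1500, 7, 12]]))  # predicted price for a new house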
2. Convolutional Neural Network (CNN)
Description:
Specialized for processing grid-like data such as images by using convolutional layers.
Example:
Image classification, such as recognizing cats and dogs in photos.
Advantages:
Excellent at feature extraction from images.
Reduces the number of parameters using pooling layers.
Disadvantages:
Computationally intensive.
Requires a large amount of labeled data.
Applications:
Facial recognition.
Medical image analysis.
Usage:
Computer vision tasks like object detection and image segmentation.
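At the heart of a CNN is the convolution operation itself; here is a hand-rolled 2-D convolution with a classic 3x3 edge-detection filter, sketched in plain numpy (no padding or stride, for brevity):

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image, summing elementwise products
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])      # highlights edges
image = np.random.rand(8, 8)                # stand-in for a grayscale image
features = conv2d(image, edge_kernel)       # 6x6 feature map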
3. Long Short-Term Memory (LSTM) Network
Description:
A recurrent network that uses gated memory cells to retain information across long sequences.
Example:
Predicting weather conditions based on historical data.
Advantages:
Solves the vanishing gradient problem.
Handles long sequences effectively.
Disadvantages:
Computationally heavy.
Slower to train.
Applications:
Text summarization.
Video analysis.
Usage:
Sequential data analysis with long-term dependencies.
Usage:
Natural Language Processing (NLP) tasks.
6. Autoencoder
Description:
Unsupervised neural network used for dimensionality reduction or feature extraction.
Example:
Reducing the dimensionality of customer data for clustering.
Advantages:
Reduces noise in data.
Learns efficient data representations.
Disadvantages:
Sensitive to input data quality.
Limited interpretability.
Applications:
Image compression.
Anomaly detection.
Usage:
Data preprocessing and denoising.
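A minimal autoencoder for such preprocessing might be sketched in PyTorch as below (assuming torch is installed; the 64-to-8 layer sizes are arbitrary choices for illustration):

import torch
import torch.nn as nn

# Compress 64-dimensional inputs down to 8 dimensions and back
encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())
decoder = nn.Sequential(nn.Linear(8, 64))
autoencoder = nn.Sequential(encoder, decoder)

x = torch.rand(32, 64)                          # a batch of 32 samples
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

recon = autoencoder(x)                          # encode then decode
loss = loss_fn(recon, x)                        # reconstruction error
loss.backward()
opt.step()                                      # one training step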
7. Generative Adversarial Network (GAN)
Disadvantages:
Training can be unstable.
Requires careful tuning.
Applications:
Creating synthetic images or videos.
Enhancing image resolution.
Usage:
Art creation and data augmentation.
8. Self-Organizing Map (SOM)
Example:
Clustering customer segments in marketing.
Advantages:
Reduces dimensionality effectively.
Easy to visualize clusters.
Disadvantages:
Limited to clustering tasks.
Requires careful tuning of parameters.
Applications:
Market research.
Feature selection.
Usage:
Understanding high-dimensional data.
Fig 2.1: Types of neural networks
Chapter 3
3.1 How Neural Networks Work
Forward Propagation:
- Input data is passed through the network's layers. Each neuron processes its inputs using
a weighted sum and an activation function, producing an output that becomes the input for
the next layer.
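Here is a numpy sketch of one such forward pass through a single hidden layer (all shapes and values are arbitrary):

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.random.rand(4)          # input vector (4 features)
W1 = np.random.randn(5, 4)     # hidden layer: 5 neurons, 4 inputs each
b1 = np.zeros(5)
W2 = np.random.randn(2, 5)     # output layer: 2 neurons
b2 = np.zeros(2)

h = relu(W1 @ x + b1)          # hidden activations feed the next layer
y = W2 @ h + b2                # network output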
Weight Initialization:
- Weights are initialized randomly or using specific methods (like He or Xavier initialization)
to break symmetry and accelerate learning.
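The two schemes differ only in how the random scale is chosen; a quick numpy sketch (the fan sizes are arbitrary):

import numpy as np

fan_in, fan_out = 256, 128

# Xavier/Glorot: variance scaled by fan-in and fan-out (suits tanh/sigmoid)
W_xavier = np.random.randn(fan_out, fan_in) * np.sqrt(2.0 / (fan_in + fan_out))

# He: variance scaled by fan-in only (suits ReLU layers)
W_he = np.random.randn(fan_out, fan_in) * np.sqrt(2.0 / fan_in)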
Learning Rate:
- A crucial hyperparameter that determines the step size for weight updates. Too high can
cause instability, while too low can slow convergence.
Optimization Algorithms:
- Algorithms like Stochastic Gradient Descent (SGD), Adam, and RMSprop are used to
update the weights and biases to minimize the error.
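Whatever the optimizer, the learning rate scales each update; plain SGD reduces to one line (a sketch in which the gradient is a random stand-in for one produced by backpropagation):

import numpy as np

lr = 0.01                      # learning rate: the step size discussed above
w = np.random.randn(10)        # current weights
grad = np.random.randn(10)     # placeholder gradient of the loss w.r.t. w

w = w - lr * grad              # SGD step: move against the gradient

Adam and RMSprop follow the same pattern but adapt the effective step size per parameter using running statistics of past gradients.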
3.2 Applications of Neural Networks
Image Recognition:
- Used in facial recognition, object detection, and medical imaging to identify patterns and
features within images.
Speech Recognition:
- Converts spoken language into text, enabling voice-activated systems and transcription
services.
Autonomous Vehicles:
- Helps in object detection, path planning, and decision-making processes for self-driving
cars.
Chapter 4
4.1 Advantages of Neural Networks
Learning Capability:
- Can learn and model complex, non-linear relationships within data.
Adaptability:
- Adjust to changes in the input pattern, making them versatile across various applications.
Data Versatility:
- Effective with various data types, including images, audio, and text.
Parallel Processing:
- Capable of handling multiple inputs simultaneously, making them efficient for large-scale
data.
4.2 Challenges and Limitations
Data Dependency:
- Requires large datasets for effective training, which can be expensive and time-consuming to gather.
Computational Requirements:
- High computational cost and energy consumption, necessitating powerful hardware like
GPUs.
Interpretability:
- Often considered a "black box" due to the difficulty in understanding and interpreting the
decision-making process.
Overfitting:
- Risk of the network being too tailored to the training data, resulting in poor performance
on unseen data.
Hyperparameter Tuning:
- Selecting the optimal architecture and hyperparameters (such as learning rate, number of
layers, and activation functions) often requires expertise and extensive experimentation.
4.4.2 Convolutional Layers:
- Feature extraction is performed using convolutional layers, which apply filters to detect
edges, textures, and other image features.
4.4.5 Output:
- The network produces a probability distribution over possible classes, indicating the likelihood of
the input image belonging to each class (e.g., dog, cat, car).
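That probability distribution is typically produced by a softmax over the raw class scores; a numpy sketch (the three scores below are invented):

import numpy as np

logits = np.array([2.0, 1.0, 0.1])    # raw scores for (dog, cat, car)

def softmax(z):
    e = np.exp(z - z.max())           # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)               # ~[0.66, 0.24, 0.10], sums to 1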
Chapter 5
Conclusion
Chapter 1
Deep Learning
Deep learning is a subset of machine learning, which itself is a subset of artificial intelligence
(AI). It is inspired by the structure and function of the human brain, using artificial neural
networks (ANNs) with multiple layers (hence "deep") to model and solve complex problems.
Deep learning has revolutionized fields like computer vision, natural language processing,
robotics, and healthcare by enabling computers to learn from vast amounts of data.
Deep learning involves training neural networks with many layers, allowing the system to
automatically discover intricate patterns and representations in data. The "deep" in deep
learning refers to the depth of these layers, which can number in the hundreds or
thousands.
Unlike traditional machine learning, where feature engineering (manual extraction of
relevant features) is often necessary, deep learning models learn features
automatically from raw data.
a) Neural Networks:
o Deep learning models are built upon artificial neural networks, which consist of layers of
interconnected nodes (neurons).
o Each layer transforms the input data in a non-linear way, allowing the network to learn
complex representations.
b) Layers:
o Input Layer: The initial layer that receives the raw data.
o Hidden Layers: Multiple layers between input and output that perform various
transformations. The depth (number of hidden layers) is what makes the network
"deep."
o Output Layer: The final layer that provides the prediction or classification.
c) Activation Functions:
o Functions that introduce non-linearity to the model, enabling it to learn intricate
patterns. Common activation functions include Sigmoid, Tanh, and ReLU (Rectified Linear
Unit).
d) Weights and Biases:
o Parameters that are adjusted during training to minimize the error in predictions.
Weights determine the strength of connections between neurons, while biases adjust
the output along with the weighted sum of inputs.
e) Backpropagation:
o A key algorithm used to train deep learning models. It involves propagating the error
from the output layer back through the network to update weights and biases,
minimizing the overall error.
a) Forward Pass:
o Input data passes through the network layer by layer. Each neuron applies a weighted
sum to its inputs, adds a bias, and passes the result through an activation function.
b) Loss Function:
o Measures the difference between the predicted output and the actual target. Common
loss functions include Mean Squared Error (MSE) for regression tasks and Cross-Entropy
Loss for classification tasks.
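Both losses are short numpy functions (a sketch; the sample values are invented):

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error, for regression
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred):
    # y_true is one-hot, y_pred a probability distribution; for classification
    return -np.sum(y_true * np.log(y_pred + 1e-12))

print(mse(np.array([3.0, 5.0]), np.array([2.5, 5.5])))                 # 0.25
print(cross_entropy(np.array([0, 1, 0]), np.array([0.2, 0.7, 0.1])))   # ~0.357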
c) Backpropagation:
o After the forward pass, the loss is computed and backpropagated through the network.
Gradients of the loss with respect to each weight are calculated, and weights are
updated using an optimization algorithm such as Stochastic Gradient Descent (SGD) or
Adam.
d) Training:
o The model is trained over multiple iterations (epochs), where each epoch consists of a
forward pass and backpropagation. The process continues until the model converges,
meaning the loss no longer significantly decreases.
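Putting the forward pass, loss, backpropagation, and updates together, a complete toy training loop for a one-weight linear model might look like this numpy sketch:

import numpy as np

# Toy data: learn y = 2x + 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([[1.0], [3.0], [5.0], [7.0]])

w = np.zeros((1, 1)); b = 0.0; lr = 0.05

for epoch in range(500):             # each epoch: forward pass + backprop
    pred = X @ w + b                 # forward pass
    err = pred - y
    loss = np.mean(err ** 2)         # MSE loss
    grad_w = 2 * X.T @ err / len(X)  # gradients via backpropagation
    grad_b = 2 * err.mean()
    w -= lr * grad_w                 # SGD updates
    b -= lr * grad_b

print(w, b)                          # converges near w=2, b=1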
g) Transformer Networks:
o Use self-attention mechanisms to handle dependencies and relationships in data
sequences. Highly effective for natural language processing tasks.
o Example: Machine translation and text summarization.
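The core of self-attention, scaled dot-product attention, fits in a few lines; a numpy sketch in which Q, K, and V are random stand-ins for the learned query/key/value projections:

import numpy as np

def attention(Q, K, V):
    # Scores say how much each position attends to every other position
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                   # weighted mix of the values

seq_len, d = 4, 8                        # 4 tokens, 8-dim embeddings
Q = np.random.randn(seq_len, d)
K = np.random.randn(seq_len, d)
V = np.random.randn(seq_len, d)
out = attention(Q, K, V)                 # one self-attention "head"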
Chapter 2
Advantages, Limitations, and Key Characteristics of Deep Learning
Advantages:
a. High Accuracy
Deep learning models achieve state-of-the-art performance in many tasks, outperforming
traditional machine learning methods.
b. Automatic Feature Extraction
Reduces the need for manual feature engineering, as models learn features directly from
data.
c. Handles Complex Problems
Capable of modeling non-linear relationships and solving problems involving unstructured
data like images, audio, and text.
d. Scalability
Can process large-scale data effectively.
Limitations:
a) Data Requirements:
o Deep learning models require large amounts of labeled data for effective training, which
can be difficult and expensive to obtain.
b) Computational Resources:
o Training deep networks requires significant computational power, often relying on
specialized hardware such as GPUs and TPUs.
c) Interpretability:
o Deep learning models are often considered "black boxes" because their decision-making
processes are not easily interpretable.
d) Overfitting:
o Can overfit to the training data, leading to poor generalization on unseen data.
Techniques like dropout and regularization are used to mitigate this.
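As a concrete illustration of dropout, the technique randomly silences a fraction of activations during training; a minimal numpy sketch (the rate and shapes are arbitrary):

import numpy as np

def dropout(activations, rate=0.5):
    # "Inverted" dropout: zero some neurons, rescale the rest so the
    # expected activation is unchanged
    mask = (np.random.rand(*activations.shape) > rate).astype(float)
    return activations * mask / (1.0 - rate)

h = np.random.rand(3, 5)        # activations from some hidden layer
h_train = dropout(h, rate=0.3)  # applied only at training time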
e) Hyperparameter Tuning:
o Finding the optimal set of hyperparameters (e.g., learning rate, number of layers)
requires extensive experimentation and expertise.
Key Characteristics:
a. Representation Learning
Deep learning excels at hierarchical representation learning, meaning it learns simple
patterns in early layers and increasingly complex patterns in deeper layers.
b. End-to-End Learning
Unlike traditional approaches, where intermediate steps like feature extraction are manually
performed, deep learning models can directly learn from raw inputs to outputs.
c. Scalability
Deep learning models perform better as the size of the dataset increases, making them ideal
for big data applications.
Example use case: Machine translation
Task: Translate English sentences into French.
Model: Transformer network (e.g., Google’s BERT).
Process:
1. Tokenize the input text.
2. Use attention mechanisms to focus on relevant words.
3. Output translated sentences.
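In practice, all three steps are handled internally by off-the-shelf libraries; a sketch assuming the Hugging Face transformers package is installed (the default translation model is downloaded on first use):

from transformers import pipeline

# Tokenization, attention, and decoding all happen inside the pipeline
translator = pipeline("translation_en_to_fr")
result = translator("Deep learning is transforming machine translation.")
print(result[0]["translation_text"])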
Chapter 3
3.1 Applications of Deep Learning
a. Healthcare
Example: Diagnosing diseases from medical imaging (e.g., X-rays, MRIs).
Impact: Improves accuracy and early diagnosis.
b. Autonomous Vehicles
Example: Detecting objects like pedestrians and traffic signs.
Impact: Enables safe navigation.
c. Natural Language Processing (NLP)
Example: Sentiment analysis, chatbots, and virtual assistants.
Impact: Improves communication between humans and machines.
d. Entertainment
Example: Recommending movies or songs on platforms like Netflix and Spotify.
Impact: Enhances user experience through personalization.
e. Robotics
Example: Teaching robots to perform complex tasks like assembling products.
Impact: Automates manufacturing processes.
Fig 3.1: Applications of deep learning
3.2 Use Cases
Chapter 4
Conclusion
Deep learning represents a paradigm shift in AI, empowering machines to achieve human-level performance in tasks previously thought impossible. Its ability to learn from vast amounts of data, uncover hidden patterns, and solve complex problems makes it a cornerstone of modern AI and a transformative force across many fields. While challenges like computational cost and interpretability remain, ongoing research and technological progress continue to push the boundaries of what deep learning can achieve, promising even more innovative applications and breakthroughs in the future.