
INDEX

CHAPTER 1: EXECUTIVE SUMMARY


CHAPTER 2: OVERVIEW OF THE ORGANIZATION
CHAPTER 3: INTERNSHIP PART
CHAPTER 4: WEEKLY ACTIVITY LOG
• ACTIVITY LOG FOR THE FIRST WEEK
• ACTIVITY LOG FOR THE SECOND WEEK
• ACTIVITY LOG FOR THE THIRD WEEK
• ACTIVITY LOG FOR THE FOURTH WEEK
• ACTIVITY LOG FOR THE FIFTH WEEK
• ACTIVITY LOG FOR THE SIXTH WEEK
• ACTIVITY LOG FOR THE SEVENTH WEEK
• ACTIVITY LOG FOR THE EIGHTH WEEK

CHAPTER 5: OUTCOMES DESCRIPTION


CHAPTER 6: FUTURE SCOPE

CHAPTER 1: EXECUTIVE SUMMARY
• Internships are generally thought of as reserved for college students
looking to gain experience in a particular field. However, a wide array of
people can benefit from internship training in order to receive real-world
experience and develop their skills.
• Internships are a great way to build your resume and develop skills
that can be emphasized for future jobs.
• During the internship period, the learning objectives are to get well
trained in the topics and apply them practically, covering development,
research, teamwork, leadership qualities and technical exposure.
• The achieved outcomes match the learning objectives: teamwork
qualities, technical skills, application development and research on
the topic have been achieved.
The organization has been into software development, and the same has been
imparted to the students through application development using
technologies like Artificial Intelligence and Machine Learning. Interns
underwent training for one month and built a web application over the
following month.

2
CHAPTER 2: OVERVIEW OF THE ORGANIZATION
Vision
Our partners feel proud when their tech support is used in innovative
applications and in the betterment of people around them.

Mission
Creating value for our partners and clients is our motto. They believe in
transforming lives through easy and affordable solutions.

Why Brainovision?
Brainovision follows the mantra of innovation through passion that is apparent
in our approach towards problem resolution, skill set acquisition, team
building, project deliverables and different client verticals.

Quality of Work - The work they do for their clients is excellent in terms
of quality: their deliverables are scalable, secure, reliable, optimized and
remarkable in performance.

Technical Expertise - They carry deep knowledge of the technical aspects.
This helps them cater to their clients' needs with superior results.

Service and support-In a world run by the internet, they are just a click
away. As for the company, client's satisfaction is the number one priority they
always concentrate on.

CHAPTER 3: INTERNSHIP PART
Description of the activities and responsibilities in the intern organization during the
internship, including details of working conditions, the weekly work schedule, equipment
used and tasks performed. This part ends by reflecting on the kinds of skills the intern
acquired.
The internship program involves training and project phases. Once enrolled into the
internship program, we were trained on AI & ML. Training sessions were held for knowledge
transfer, with a regular training period of 3 weeks. During the training period we were
trained on the topics and technologies required for application development, and we were
given tasks and activities on a daily basis to achieve. The hosting and domain of a project
were given to each batch to build a web application. Interns had to work on the given project
along with the assigned team members. Every team had to build a web application on a
sustainable development goal and submit it to the organization. Teams had deadline targets
for every week. During the internship period we used laptops/desktops provided by the
organization and the college.
This report is a description of my 8-week internship carried out as a compulsory
component of the course. In the following chapters, details of the tools and technologies
used are given along with an overview. Afterwards, I explain my role and tasks as a trainee
and give specific technical details about my main tasks. Finally, a conclusion is drawn
from the experience.

To gain skills and knowledge


This internship provided us with essential skills and knowledge one requires in the field of
web designing. The crucial tools used during the tenure helped us in gaining knowledge
about programming languages.

To get field work experience:


By taking this training we enhanced our knowledge in artificial intelligence, machine
learning, deep learning and natural language processing.

To enhance our communication skills:


By interacting with my trainers and classmates I got to learn a lot. It helped me enhance
my communication skills and present my work with confidence. It boosted my confidence
to design more web pages and create some great designs just for fun.

To link theory with practice:


First we learned the theory and then we put it into practice through hands-on exercises.

CHAPTER 4: WEEKLY ACTIVITY LOG
WEEK 1: ACTIVITY LOG FOR THE FIRST WEEK
Day 1: AI introduction
Day 2: ML introduction
Day 3: Basic concepts of AI & ML
Day 4: Connection between AI & ML
Day 5: Differences between AI & ML
Day 6: Benefits and applications

DETAILED REPORT
AI introduction
The first week is more about learning and understanding the basics
of artificial intelligence and machine learning. I will be listing down
the topics learnt, practiced and worked on.
Artificial intelligence (AI) is a field of study that involves creating
machines that can learn, reason, and act in ways that would normally
require human intelligence. AI is a broad field that draws from many
disciplines, including computer science, statistics, and neuroscience.
AI technologies are based on machine learning and deep learning,
and are used for a variety of tasks, including:
 Data analytics
 Predictions and forecasting
 Object categorization
 Natural language processing
 Recommendations

 Intelligent data retrieval
AI systems learn from data, process it, and improve over time. They
can use a variety of techniques, including rule-based systems, neural
networks, and machine learning models.
Some say that much of the technology used today is actually
advanced machine learning, and that true artificial intelligence, or
"artificial general intelligence" (AGI), is still to come. Others say that
AI systems should be designed to be transparent, so that users can

ML introduction
Machine learning (ML) is a branch of artificial intelligence (AI) that
allows computers to learn from data and perform tasks without being
explicitly programmed:
 How it works
ML algorithms are trained to analyse data, identify patterns, and
make predictions. For example, a computer can be trained to
recognize cats by feeding it thousands of images of cats and allowing
the algorithm to learn the patterns that define a cat.
 How it's used
ML has many applications, including self-driving cars, speech and
image recognition, web search, fraud detection, and understanding
the human genome.
 How it's different from traditional programming
In traditional programming, a computer follows a set of predefined
instructions to perform a task. In ML, the computer is given a task
and examples of how to accomplish it, and it figures out how to do it
on its own.
 Bias

Uncaught biases in ML can perpetuate systemic issues. To combat
bias, it's important to be aware of biases, work towards eliminating
them, and ensure that diverse people are involved in the project.

Basic concepts of AI & ML


When it comes to Artificial Intelligence (AI) and Machine Learning
(ML), there are a lot of different terms and buzzwords floating
around. It can be difficult to keep track of what everything means,
especially when some of the terms seem to be used interchangeably.
However, it’s important to have at least a basic understanding of the
glossary of AI and machine learning terms. This is because these
technologies are becoming increasingly important, both in terms of
business and society at large. For businesses, AI and machine
learning can be used to automate tasks, improve efficiency and
productivity, and make better decisions. For society, they are being
used to develop driverless cars, improve healthcare, and fight crime.
The glossary below is by no means exhaustive and it might be
simplistic at times, so take it as a starting point.
The article delves into:
1. Artificial Intelligence and Machine Learning.
2. Supervised Learning, Unsupervised Learning, and
Reinforcement Learning.
3. Data Science and Artificial Intelligence.
4. Data Scientist and Data Analyst.
5. Big Data and Data Science.
6. Data Engineer and Data Scientist.
7. Deep Learning and Neural Networks.
8. Forecasting, Natural Language Processing, and Computer
Vision.
Connection between AI & ML
The computer industry has brought many new things. It created a lot
of different technologies that changed our world. The current hype in
the industry is on machine learning, artificial intelligence, and data
science. Even though all of these things might be separate branches
on their own, they are deeply connected.
Even though there are differences between AI and ML, they both
have a similar outcome – intelligent software that can handle more
complex tasks. That’s why these technologies have become so
popular in the past couple of years.
They are useful, and individuals and companies are both looking for
computer programs that can take over human tasks while finishing
tasks more quickly. However, they don’t do this in the same way.
Even though both ML and AI programs might be able to do similar
tasks, their design and approach are different.
Each of these branches has created highly sophisticated programs
that can do complex tasks in only a second. However, they use
different structures, coding, and approach to do this, which puts both
AI and ML at a similar place in terms of their power.
We can freely say that Machine Learning is a subfield or type of AI.
All Machine Learning programs can be looked at as part of Artificial
Intelligence, but not all Artificial Intelligence is Machine Learning.
They are related inherently, but this doesn't mean that they are
synonymous. AI is the parent of ML, and both are children of
computer science. Simply put, everything in the field of Machine
Learning fits under the Artificial Intelligence umbrella.
Machine Learning can be used for AI applications, and it can make
Artificial Intelligence better. However, it doesn’t mean that it’s always
used. Simply put, the ability of machine learning to come up with

answers on its own without additional programming is very
convenient.
Even though they have a different way of doing it, both Artificial
Intelligence and Machine Learning programs have the ability to learn.
They aren’t static and change over time, becoming better at what
they are designed to do. In some cases, they might even expand their
capabilities to do other tasks.
AI is usually approached by writing programs in which rules are
implemented to allow the program to handle a certain number of
tasks. These rules are re-written, changed, and upgraded throughout
the development and testing of the program. Programmers learn
from the program’s behaviour and improve it with additional coding.
On the other hand, machine learning learns on its own. The program
is written with a predictive model where it collects data samples
within a database. Machine Learning makes many mistakes over
time, but with each mistake, it learns how to realize the required
patterns.
After some time has passed, the program increases accuracy and
becomes better at the task that it’s designed to do. That’s why we
are seeing new and improved software solutions coming out each
day with new capabilities.
One of the most important things that are often overlooked with AI
and ML is that they both use data. It plays a crucial role in these
computer programs. The best example of this is data science. It’s the
practice of getting valuable conclusions from data and making
important predictions.
These predictions are made with Machine Learning, so it’s a sort of
middleman between Artificial Intelligence and data science. In some
cases, AI is used to extract and understand data, while Machine
Learning helps get conclusions and process this data.

After all, ML requires structured data so that it can be used in the
desired way. A good example is self-driving vehicles. Sensors are used
to collect images, and they are then processed through ML to make
the right decisions quickly. Without accurate data, neither of them
would be able to work.
They need to be fed with data that is rich in context, relevant, and
complete to establish a common language.

Differences between AI & ML


S.No. | ARTIFICIAL INTELLIGENCE | MACHINE LEARNING
1. | The terminology "Artificial Intelligence" was originally used by John McCarthy in 1956, who also hosted the first AI conference. | The terminology "Machine Learning" was first used in 1952 by IBM computer scientist Arthur Samuel, a pioneer in artificial intelligence and computer games.
2. | AI stands for Artificial Intelligence, where intelligence is defined as the ability to acquire and apply knowledge. | ML stands for Machine Learning, which is defined as the acquisition of knowledge or skill.
3. | AI is the broader family consisting of ML and DL as its components. | Machine Learning is a subset of Artificial Intelligence.
4. | The aim is to increase the chance of success, not accuracy. | The aim is to increase accuracy; it does not care about the chance of success.
5. | AI aims to develop an intelligent system capable of performing a variety of complex jobs and decision-making. | Machine learning attempts to construct machines that can accomplish only the jobs for which they have been trained.
6. | It works as a computer program that does smart work. | Here, the system takes data and learns from the data.
7. | The goal is to simulate natural intelligence to solve complex problems. | The goal is to learn from data on certain tasks to maximize the performance on that task.
8. | AI has a very broad variety of applications. | The scope of machine learning is constrained.
9. | AI is about decision-making. | ML allows systems to learn new things from data.
Benefits
Improved customer experience:
There might not be anyone who benefits more from AI technology
than customers. Eliminating the lag between customer needs and
business responses has become possible with automated chatbots,
triggered emails, and other personalized messaging systems. Using
deep learning and NLP, it's never been easier to provide timely,
tailored experiences for customers. Additionally, it takes the strain off
your customer support teams—increasing efficiencies while
eliminating manual workflows.
Reducing errors:
Once the foundation of your AI and automation models is
established, you'll notice manual errors starting to disappear.
Remedial tasks like data processing or onboarding become
background processes, not because they're no longer important, but
because there's no longer a need for thorough oversight. Small errors
simply disappear because the machine only understands accuracy.
Automation:
You can’t talk about the speed that comes with AI and machine
learning without mentioning automation. As the most common output
of AI, there's hardly a business process that automation can't positively
impact. From communications and marketing to internal onboarding
and support, the technology feature can remove inefficiencies from
every corner of your business. For example, automation in
sales boosts productivity within the department by 14.5%—while
bringing down marketing costs by 12.2%.[iv]
Additionally, taking manual workflows out of your organization frees
up resources for ideas and projects that were seemingly unavailable.
With automation, businesses replace the minutiae of small tasks with
the freedom to think strategically about the bigger picture.
Decision making:
The goal of AI has always been to generate smarter decision making.
It’s not that we’re not able to think critically as humans, we’re just
limited in how quickly we can process and coordinate mountains of
data. AI takes the job of delivering data, analyzing trends, and
forecasting results—while taking the human emotion out of it. It’s
able to take raw data and translate it into an objective decision.
Tackling complex problems:
Introducing deep learning and machine learning into business
strategy allows you to take on more complex problems. These
technologies make it possible to not only find solutions, but at scale.
From problems with customer support operations to cybersecurity
threats—implementing AI into your solution gives you a foundational
approach that saves time, money, and resources.
Increase operational efficiencies:
Between automating your most repetitive tasks and expanding your
operations with AI, your business will see an immediate increase in
efficiencies.

Applications
Here are some applications of artificial intelligence:
1. Speech recognition: AI algorithms are used in many speech
recognition devices like Alexa to understand human language,
convert them into commands, do the research and give
appropriate results.
2. Navigation: AI algorithms are used in navigation applications
like Google maps. They (AI algorithms) are used to reach a
destination in minimum time, look at real-time traffic data, and
more.

3. Chatbots: Nowadays, companies use chatbots as a lead
generation tool. The idea of a chatbot is to respond to user
queries with minimal human intervention. It helps improve the
user experience by providing visitors guidance and support
needed to navigate a website. With new techniques like Natural
Language Processing (NLP) and AI, machines can also react to
advanced queries, which involves looking at a user’s history and
profile before attending the query.
4. Autonomous cars: Autonomous cars are the future. These
automated cars use AI and ML algorithms to sense the
environment and operate without human drivers.
Meanwhile, here’s the application of artificial intelligence across
different lines of business.
 AI in e-commerce can provide a personalised shopping
experience, assistance, and fraud prevention.
 AI in education can help automate administrative tasks, digitise
content like video lectures, conferences, and textbooks, and
provide personalised learning.
 AI in human resources helps scan the job profile and resumes of
candidates.
 AI in healthcare can help build modern and sophisticated
machines to detect diseases. It can also be used to maintain the
medical data of patients.
Moving on, here are the applications of machine learning
1. Image recognition: Image recognition is the most common
application of machine learning. ML algorithms are used to read
and understand images and identify objects, persons, places, and
digital images. The most popular use of image recognition is the
automatic friend tagging suggestion on social media.
2. Automatic language translation: Most people would have used
automatic language translating apps to convert texts into their
known language. This process, called automatic translation, is
possible due to machine learning systems like neural machine
translation, which translate texts into known languages.
3. Product recommendation: Machine learning is widely used
across e-commerce companies to provide useful product
recommendations.
4. Email spam and malware filtering: Machine learning helps in
automatically filtering emails into different folders like
significant, usual or spam.

WEEK 2: ACTIVITY LOG FOR THE SECOND WEEK
Day 1: Introduction to Deep Learning
Day 2: Python programming
Day 3: Python programming
Day 4: Mathematics: calculus and vector algebra
Day 5: Machine Learning: Linear Regression
Day 6: Machine Learning: Logistic Regression

DETAILED REPORT
Introduction to Deep Learning
The definition of Deep learning is that it is the branch of machine
learning that is based on artificial neural network architecture. An
artificial neural network or ANN uses layers of interconnected nodes
called neurons that work together to process and learn from the
input data.
In a fully connected Deep neural network, there is an input layer and
one or more hidden layers connected one after the other. Each
neuron receives input from the previous layer neurons or the input
layer. The output of one neuron becomes the input to other neurons
in the next layer of the network, and this process continues until the
final layer produces the output of the network. The layers of the
neural network transform the input data through a series of
nonlinear transformations, allowing the network to learn complex
representations of the input data.
Deep learning can be used for supervised, unsupervised, as well as
reinforcement machine learning, and it uses a variety of ways to
process these.

 Supervised Machine Learning: supervised machine learning is
the machine learning technique in which the neural network
learns to make predictions or classify data based on labeled
datasets. Here we provide the input features along with the
target variables. The neural network learns by minimizing the
cost or error that comes from the difference between the
predicted and the actual target; the process of propagating this
error back through the network is known as backpropagation.
Deep learning algorithms like convolutional neural networks and
recurrent neural networks are used for many supervised tasks
like image classification, sentiment analysis, language
translation, etc.
 Unsupervised Machine Learning: unsupervised machine
learning is the machine learning technique in which the neural
network learns to discover patterns or to cluster the dataset
based on unlabeled datasets. Here there are no target variables;
the machine has to determine the hidden patterns or
relationships within the dataset on its own. Deep learning
algorithms like autoencoders and generative models are used for
unsupervised tasks like clustering, dimensionality reduction,
and anomaly detection.
 Reinforcement Machine Learning: Reinforcement machine
learning is the machine learning technique in which an agent
learns to make decisions in an environment to maximize a
reward signal. The agent interacts with the environment by
taking action and observing the resulting rewards. Deep
learning can be used to learn policies, or a set of actions, that
maximizes the cumulative reward over time. Deep
reinforcement learning algorithms like Deep Q networks and
Deep Deterministic Policy Gradient (DDPG) are used to
reinforce tasks like robotics and game playing etc.
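The layered forward pass described earlier, where each neuron's output feeds the next layer through a nonlinear transformation, can be sketched in a few lines of NumPy. This is a minimal illustration, not the organization's training code; the layer sizes and random weights are arbitrary assumptions for demonstration.

```python
import numpy as np

def relu(x):
    # Nonlinear activation applied between layers.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Each layer is a (weights, bias) pair; the output of one layer
    # becomes the input to the next, as described above.
    for w, b in layers[:-1]:
        x = relu(x @ w + b)
    w, b = layers[-1]
    return x @ w + b  # the final layer produces the network output

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),   # input -> hidden
    (rng.normal(size=(8, 8)), np.zeros(8)),   # hidden -> hidden
    (rng.normal(size=(8, 2)), np.zeros(2)),   # hidden -> output
]
out = forward(rng.normal(size=(1, 4)), layers)
print(out.shape)  # one sample in, two output values out
```

Training such a network means adjusting the weight matrices via backpropagation; only the forward pass is shown here.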

Python programming
Python is a popular programming language for artificial intelligence
(AI) and machine learning (ML) because it's easy to learn and use, has
a large library ecosystem, and is platform independent:
 Simplicity
Python is designed to be easy to understand and write, allowing
developers to focus on problem-solving.
 Libraries
Python has many standard libraries that cover a variety of tasks,
including data visualization and numerical aspects.
 Interoperability
Python can communicate with other languages like C and C++, which
allows it to use optimized code for computationally intensive tasks.
 Platform independence
Python can run on many platforms, including Windows, MacOS,
Linux, and Unix. This allows developers to collaborate across
platforms and move code between them easily.
 Readability
Python is easy to read, which makes it easier for developers to
understand each other's code and share it.
Python is among the simplest of all the programming languages, and
in practice Python code is often around one-fifth the length of
equivalent code in other OOP languages. This is why it is currently
among the most well-known languages in the marketplace.
o Python comes with prebuilt libraries such as NumPy to perform
scientific calculations, SciPy for advanced computing, and
PyBrain for machine learning, making it among the top
languages for AI.
o Python developers all over the globe offer extensive support
and assistance through tutorials and forums, making things
much easier for the programmer than with many other popular
languages.
o Python is platform-independent and therefore is among the
most adaptable and well-known options for various platforms
and technologies, with minimal modifications to the basics of
coding.
o Python has the greatest flexibility among other programs, with
the option of choosing among OOPs method and scripting.
Additionally, you can use the IDE to search for all codes and be
a blessing to developers struggling with different algorithms.
Let's look at the reason Python is used in Machine Learning and the
various libraries it provides for this reason.
o PyBrain - an easy yet flexible library for Machine Learning
tasks. It functions as an extensible Machine Learning library for
Python that provides a range of predefined environments for
testing and evaluating algorithms.
o PyML - a framework developed in Python that concentrates on
SVMs and other kernel-based approaches. It's available on Linux
and Mac OS X.
o Scikit-learn - can be described as an effective instrument for
data analysis making use of Python. This is a completely free
and open-source library. It is the most widely used general-
purpose machine learning library.
o MDP Toolkit - a Python data processing framework that is easily
extended. It contains a variety of unsupervised and supervised
learning algorithms and other computing units for data analysis
that can be combined to create sequences of data processing
and more intricate feed-forward networks. Implementation of
new algorithms is straightforward, and the number of available
algorithms is constantly growing.
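As a small illustration of the scikit-learn workflow mentioned above (assuming scikit-learn is installed), the following sketch trains a classifier on the library's built-in iris dataset and reports its accuracy; the choice of model and split ratio is arbitrary:

```python
# Minimal scikit-learn sketch: load data, split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # raise max_iter so the solver converges
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```

The same fit/predict/score pattern applies across nearly all scikit-learn estimators, which is a large part of why it is the most widely used general-purpose ML library.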

Mathematics: calculus and vector algebra


Calculus and linear algebra are mathematical disciplines that are
essential for machine learning (ML) and artificial intelligence (AI):
 Linear algebra
The study of vectors and matrices, which are used to represent data
in ML. Linear algebra is fundamental for ML because it allows for
operations like matrix multiplications and decompositions. Some
important concepts in linear algebra for ML include vectors, matrix
transpose and inverse, determinants, dot product, eigenvalues and
eigenvectors, and matrix factorization.
 Calculus
The study of change, which is used to analyse data over
time. Calculus is important for understanding how ML algorithms
work, such as the gradient descent algorithm. It's also used to
formulate the functions that train algorithms. Multivariable calculus
is important for building suitable models because ML models are
trained with datasets that have multiple feature variables.
Other mathematical disciplines that are important for AI and ML
include probability theory and optimization.
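To make the role of calculus concrete, here is a sketch of the gradient descent algorithm mentioned above, applied to a simple one-variable function whose derivative is known analytically. The function and learning rate are arbitrary choices for demonstration, not from the training material.

```python
# Gradient descent on f(x) = (x - 3)^2, whose derivative is 2*(x - 3).
# Calculus supplies the gradient; the update rule repeatedly steps
# against it, approaching the minimum at x = 3.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move opposite the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # close to 3.0
```

In real ML, x becomes a vector of model parameters and the gradient comes from multivariable calculus, but the update rule is the same.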

Machine Learning: Linear Regression


Linear regression is a type of supervised machine learning algorithm
that computes the linear relationship between the dependent
variable and one or more independent features by fitting a linear
equation to observed data.

When there is only one independent feature, it is known as simple
linear regression, and when there are more than one feature, it is
known as multiple linear regression.
Simple Linear Regression
This is the simplest form of linear regression, and it involves only one
independent variable and one dependent variable. The equation for
simple linear regression is:
y = β0 + β1X
where:
 Y is the dependent variable
 X is the independent variable
 β0 is the intercept
 β1 is the slope
Multiple Linear Regression
This involves more than one independent variable and one
dependent variable. The equation for multiple linear regression is:
y = β0 + β1X1 + β2X2 + … + βnXn
where:
 Y is the dependent variable
 X1, X2, …, Xn are the independent variables
 β0 is the intercept
 β1, β2, …, βn are the slopes
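The simple linear regression equation above can be illustrated with a short NumPy sketch. The synthetic data and its true coefficients (intercept 2, slope 5) are assumptions made purely for demonstration; least squares should recover values close to them.

```python
import numpy as np

# Simple linear regression: estimate the intercept (beta0) and slope
# (beta1) by least squares on data following y = 2 + 5x plus noise.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=50)
y = 2 + 5 * X + rng.normal(scale=0.5, size=50)

# np.polyfit with degree 1 returns [slope, intercept].
beta1, beta0 = np.polyfit(X, y, deg=1)
print(round(beta0, 2), round(beta1, 2))  # approximately 2 and 5
```

Multiple linear regression works the same way with a matrix of features instead of a single column.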

Machine Learning: Logistic Regression


Logistic regression is used for binary classification where we
use sigmoid function, that takes input as independent variables and
produces a probability value between 0 and 1.

 The sigmoid function is a mathematical function used to map
the predicted values to probabilities.
 It maps any real value into another value within a range of 0
and 1. The value of the logistic regression must be between 0
and 1, which cannot go beyond this limit, so it forms a curve
like the “S” form.
 The S-form curve is called the Sigmoid function or the logistic
function.
 In logistic regression, we use the concept of a threshold value,
which decides between the classes 0 and 1: values above the
threshold tend to class 1, and values below the threshold tend
to class 0.
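A minimal sketch of the sigmoid function and thresholding described above; the weight, bias, and input values here are arbitrary assumptions for illustration, not fitted parameters.

```python
import numpy as np

def sigmoid(z):
    # Maps any real value into the range (0, 1), forming the S-shaped curve.
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b, threshold=0.5):
    # Probability above the threshold -> class 1, below -> class 0.
    prob = sigmoid(w * x + b)
    return prob, int(prob >= threshold)

prob, label = predict(x=2.0, w=1.5, b=-1.0)
print(prob, label)  # probability ~0.88, so class 1
```

Fitting a real logistic regression model means learning w and b from labeled data, typically by gradient descent on the log-loss.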

WEEK 3: ACTIVITY LOG FOR THE THIRD WEEK
Day 1: Machine Learning (segmentation)
Day 2: Machine Learning (segmentation)
Day 3: Machine Learning (Bagging)
Day 4: Machine learning (Boosting)
Day 5: Intro to neural networks
Day 6: Neural networks

DETAILED REPORT
Machine Learning (segmentation)
Segmentation, the technique of splitting customers into separate
groups depending on their attributes or behaviour, makes this
possible.
 Customer segmentation in machine learning can help you save
money on marketing initiatives by reducing waste. You’ll be
better able to target your campaigns to the proper people if
you know which consumers are similar to each other.
Machine learning algorithms come in a variety of flavors, each
tailored to a particular task. K-means clustering is one of the
techniques that are useful for customer segmentation.
As an unsupervised algorithm, k-means lacks labeled data against
which to evaluate its performance.
 The basic concept underlying k-means is to group data into
clusters whose members are more similar to each other than to
members of other clusters
The difference between the consumers’ age, income, and spending
scores are used to calculate the similarity between clusters in this
scenario.

You indicate the number of clusters you want to divide your data into
while training a k-means model. The model begins with centroids
that are randomly placed variables that determine the centre of each
cluster.
The model examines the training data and assigns each point to the
cluster with the closest centroid. After all of the training examples
have been categorized, the centroids' parameters are re-adjusted to
put them in the middle of their clusters.
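The training loop just described (random centroids, assignment to the closest centroid, re-centering) can be sketched in NumPy. The toy two-feature "customer" data below is an assumption for illustration only.

```python
import numpy as np

# Minimal k-means sketch: assign points to the nearest centroid, then
# move each centroid to the middle of its cluster, and repeat.
def kmeans(data, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every centroid.
        dists = np.linalg.norm(data[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Re-center each centroid on the mean of its assigned points.
        centroids = np.array([data[labels == i].mean(axis=0)
                              for i in range(k)])
    return labels, centroids

# Two obvious customer groups (e.g. scaled age and spending score).
data = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                 [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])
labels, centroids = kmeans(data, k=2)
print(labels)  # the first three points share one label, the last three the other
```

Production code would also guard against empty clusters and stop early on convergence; libraries such as scikit-learn's KMeans handle those details.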
In addition to k-means clustering, the elbow method is an effective
technique for determining the ideal number of machine learning
segmentation clusters.
By evaluating their distance from each of the cluster centroids, your
machine learning model can decide which segment new clients
belong to once it has been trained. There are numerous applications
for this.
 Your machine learning model will assist you in determining
the segment of your client and the most prevalent products
linked with that segment
Machine learning algorithms for customer segmentation will assist
you in fine-tuning your product marketing strategies. For example,
you may begin an ad campaign with a random sample of clients from
various segments. After a period, you may look at which groups are
more active and tailor your strategy to only show advertising to those
who belong to those segments.
You could also run many versions of your campaign and use machine
learning to segment your clients depending on their answers to the
various efforts. You’ll have a lot more tools to test and fine-tune your
ad campaigns in general.
• K-means clustering is a machine learning segmentation
method that is quick and efficient.
However, it isn’t a magic wand that will change your data into logical
client categories in an instant. You must first choose the target
audience for your marketing efforts and the elements that will be
important to them. If your advertisements will be focused on specific
locations, for example, geographic location will be irrelevant, and
you’ll be better off filtering your data for that region.
Similarly, if you’re marketing a men’s health product, you should
filter your customer data to include only men and avoid using gender
as one of your machine learning model’s attributes.
In other circumstances, you’ll want to include additional information,
such as products customers have already purchased. In this situation,
you’ll need to build a customer-product matrix: a table with
customers as rows and products as columns, with the number of
items purchased at each customer-item intersection.
If there are too many products, consider establishing an embedding,
in which the products are represented as values in a
multidimensional vector space.
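For example, a raw purchase log can be pivoted into such a customer-product matrix with pandas (the customers and products here are invented for illustration):

```python
import pandas as pd

# Hypothetical purchase log: one row per (customer, product) purchase.
purchases = pd.DataFrame({
    "customer": ["ann", "ann", "bob", "bob", "bob", "cara"],
    "product":  ["tea", "mug", "tea", "tea", "jam", "mug"],
})

# Customer-product matrix: customers as rows, products as columns,
# purchase counts at each intersection (0 where nothing was bought).
matrix = pd.crosstab(purchases["customer"], purchases["product"])
```

Each row of `matrix` can then be fed to a clustering algorithm as that customer's feature vector.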

Machine Learning (Bagging)
Bagging, or bootstrap aggregation, is a machine learning technique
that improves the accuracy and reliability of predictive models:
• How it works
Bagging randomly selects data points from a training set with
replacement to create multiple subsets. These subsets are then used
to train multiple models, such as decision trees or neural
networks. During prediction, the outputs of these models are
aggregated to produce a final prediction.
• Benefits
Bagging can help reduce overfitting, improve stability, and increase
accuracy. It's particularly effective when the individual models have
high variance.
• When to use
Bagging is commonly used with decision tree methods, but it can be
used with any type of method. It's useful for both regression and
statistical classification.
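A minimal sketch of bagging with scikit-learn (the dataset is synthetic; by default `BaggingClassifier` uses decision trees as its base learners, matching the common case described above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for real training data.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 50 trees is trained on a bootstrap sample (drawn with
# replacement) from the training set; at prediction time the trees'
# outputs are aggregated by majority vote.
bagged = BaggingClassifier(n_estimators=50, bootstrap=True,
                           random_state=0).fit(X_train, y_train)
accuracy = bagged.score(X_test, y_test)
```

For regression, `BaggingRegressor` works the same way but averages the individual predictions instead of voting.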

Machine Learning (Boosting)
Boosting is a method used in machine learning to reduce errors in
predictive data analysis. Data scientists train machine learning
software, called machine learning models, on labeled data to make
predictions about unlabeled data. A single machine learning model
might make prediction errors depending on the accuracy of the
training dataset. For example, if a cat-identifying model has been
trained only on images of white cats, it may occasionally misidentify
a black cat. Boosting tries to overcome this issue by training multiple
models sequentially to improve the accuracy of the overall system.
To understand how boosting works, let's describe how machine
learning models make decisions. Although there are many variations
in implementation, data scientists often use boosting with decision-
tree algorithms:
Decision trees
Decision trees are data structures in machine learning that work by
dividing the dataset into smaller and smaller subsets based on their
features. The idea is that decision trees split up the data repeatedly
until there is only one class left. For example, the tree may ask a
series of yes or no questions and divide the data into categories at
every step.
Boosting ensemble method
Boosting creates an ensemble model by combining several weak
decision trees sequentially. It assigns weights to the output of
individual trees. Then it gives incorrect classifications from the first
decision tree a higher weight before passing them as input to the
next tree. After numerous cycles, the boosting method combines
these weak rules into a single powerful prediction rule.
Boosting compared to bagging
Boosting and bagging are the two common ensemble methods that
improve prediction accuracy. The main difference between these
learning methods is the method of training. In bagging, data
scientists improve the accuracy of weak learners by training several
of them at once on multiple datasets. In contrast, boosting trains
weak learners one after another.
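The sequential training described above can be sketched with scikit-learn's AdaBoost implementation (the dataset is synthetic and the settings are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for real training data.
X, y = make_classification(n_samples=400, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# AdaBoost fits shallow trees one after another: examples the previous
# tree misclassified receive a higher weight in the next round, and the
# weak trees are combined into a single weighted vote.
boosted = AdaBoostClassifier(n_estimators=50, random_state=1).fit(X_train, y_train)
accuracy = boosted.score(X_test, y_test)
```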

Neural networks
A neural network is a machine learning program, or model, that
makes decisions in a manner similar to the human brain, by using
processes that mimic the way biological neurons work together to
identify phenomena, weigh options and arrive at conclusions.
Every neural network consists of layers of nodes, or artificial
neurons—an input layer, one or more hidden layers, and an output
layer. Each node connects to others, and has its own associated
weight and threshold. If the output of any individual node is above
the specified threshold value, that node is activated, sending data to
the next layer of the network. Otherwise, no data is passed along to
the next layer of the network.
Neural networks rely on training data to learn and improve their
accuracy over time. Once they are fine-tuned for accuracy, they are
powerful tools in computer science and artificial intelligence,
allowing us to classify and cluster data at a high velocity. Tasks in
speech recognition or image recognition can take minutes versus
hours when compared to the manual identification by human
experts. One of the best-known examples of a neural network is
Google’s search algorithm.
Neural networks are sometimes called artificial neural
networks (ANNs) or simulated neural networks (SNNs). They are a
subset of machine learning, and at the heart of deep
learning models.
WEEK 4: ACTIVITY LOG OF THE FOURTH WEEK
Day 1: Neural network architecture
Day 2: Neural network architecture
Day 3: Backward propagation
Day 4: Backward propagation
Day 5: Parameters and hyperparameters
Day 6: Optimizers

DETAILED REPORT
Neural network architecture
The architecture of neural networks is made up of an input, output,
and hidden layer. Neural networks themselves, or artificial neural
networks (ANNs), are a subset of machine learning designed to mimic
the processing power of a human brain. Neural networks function by
passing data through layers of artificial neurons.
There are many components to a neural network architecture. Each
neural network has a few components in common:
Input - Input is data that is put into the model for learning and
training purposes.
Weight - Weight helps organize the variables by importance and
impact of contribution.
Transfer function - The transfer function sums all the inputs and
combines them into one output variable.
Activation function - The role of the activation function is to decide
whether or not a specific neuron should be activated. This decision is
based on whether or not the neuron’s input will be important to the
prediction process.
Bias - Bias shifts the value given by the activation function.
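The components above can be traced through a single artificial neuron (all numbers here are made up for illustration):

```python
import numpy as np

# One artificial neuron: weighted sum (transfer function) plus a bias,
# passed through an activation function, then a threshold decision.
inputs = np.array([0.5, -1.2, 3.0])   # input
weights = np.array([0.8, 0.1, -0.4])  # weight per input
bias = 0.2                            # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.dot(weights, inputs) + bias    # transfer function + bias
activation = sigmoid(z)               # activation function
fires = activation > 0.5              # should this neuron activate?
```

With these made-up values the weighted sum is negative, so the activation falls below the 0.5 threshold and the neuron does not fire.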
Types of Neural Network Architectures
Neural networks are an efficient way to solve machine learning
problems and can be used in various situations. Neural networks
offer precision and accuracy. Finding the correct neural network for
each project can increase efficiency.
Standard neural networks
• Perceptron - A neural network that applies a mathematical
operation to an input value, providing an output variable.
• Feed-Forward Networks - A multi-layered neural network where
the information moves from left to right, or in other words, in a
forward direction. The input values pass through a series of
hidden layers on their way to the output layer.
• Residual Networks (ResNet) - A deep feed-forward network
with hundreds of layers.
Recurrent neural networks
Recurrent neural networks (RNNs) remember previously learned
predictions to help make future predictions with accuracy.
• Long short-term memory network (LSTM) - LSTM adds extra
structures, or gates, to an RNN to improve memory capabilities.
• Echo state network (ESN) - A type of RNN whose hidden layers
are sparsely connected.
Convolutional neural networks
Convolutional neural networks (CNNs) are a type of feed-forward
network that are used for image analysis and language processing.
There are hidden convolutional layers that form ConvNets and
detect patterns. CNNs use features such as edges, shapes, and
textures to detect patterns. Examples of CNNs include:
• AlexNet - Contains multiple convolutional layers designed for
image recognition.
• Visual geometry group (VGG) - VGG is similar to AlexNet, but
has more layers of narrow convolutions.
• Capsule networks - Contain nested capsules (groups of neurons)
to create a more powerful CNN.
Generative adversarial networks
Generative adversarial networks (GAN) are a type of unsupervised
learning where data is generated from patterns that were discovered
from the input data. GANs have two main parts that compete against
one another:
• Generator - Creates synthetic data from the learning phase of
the model. It will take random datasets and generate a
transformed image.
• Discriminator - Decides whether the images produced are fake
or genuine.
GANs are used to predict the next frame in a video, generate images
from text, or translate one image into another.
Transformer neural networks
Unlike RNNs, transformer neural networks do not have a concept of
timestamps. This enables them to pass through multiple inputs at
once, making them a more efficient way to process data.

Backward propagation
Backpropagation, short for backward propagation of errors, is a
method used to train artificial neural networks. Its goal is to reduce
the difference between the model’s predicted output and the actual
output by adjusting the weights and biases in the network.
Backpropagation is a powerful algorithm in deep learning, primarily
used to train artificial neural networks, particularly feed-forward
networks. It works iteratively, minimizing the cost function by
adjusting weights and biases.
In each epoch, the model adapts these parameters, reducing loss by
following the error gradient. Backpropagation often utilizes
optimization algorithms like gradient descent or stochastic gradient
descent. The algorithm computes the gradient using the chain rule
from calculus, allowing it to effectively navigate complex layers in the
neural network to minimize the cost function.
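A minimal sketch of this loop for a single linear neuron (the target function, learning rate, and epoch count are illustrative assumptions): the chain rule gives each parameter's gradient of the cost, and gradient descent follows the negative gradient each epoch.

```python
import numpy as np

# Fit y = 2x with one linear neuron, trained by backpropagation.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x

w, b = 0.0, 0.0
lr = 0.1  # learning rate (a hyperparameter set before training)

for epoch in range(200):
    y_hat = w * x + b              # forward pass
    error = y_hat - y
    loss = np.mean(error ** 2)     # cost function (mean squared error)
    # Backward pass: the chain rule gives the gradient of the loss
    # with respect to each parameter.
    grad_w = np.mean(2 * error * x)
    grad_b = np.mean(2 * error)
    w -= lr * grad_w               # follow the error gradient downhill
    b -= lr * grad_b
```

After training, `w` has converged close to the true slope of 2 and `b` close to 0.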

Parameters and hyperparameters
A parameter is a variable that is learned from the data during the
training process. It is used to represent the underlying relationships
in the data and is used to make predictions on new data. A
hyperparameter, on the other hand, is a variable that is set before
the training process begins. It controls the behaviour of the learning
algorithm, such as the learning rate, the regularization strength, and
the number of hidden layers in a neural network. Hyperparameters
are not learned from the data but are instead set by the user or
determined through a process known as hyperparameter
optimization.
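The distinction can be seen directly in scikit-learn (the dataset below is synthetic): `C` is a hyperparameter set before training, while the coefficients are parameters learned from the data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Hyperparameter: C (inverse regularization strength) is chosen
# by the user before training begins.
model = LogisticRegression(C=1.0).fit(X, y)

# Parameters: the weights and intercept are learned from the data
# during training and are used to make predictions on new data.
learned_weights = model.coef_
learned_intercept = model.intercept_
```

Hyperparameter optimization would try several values of `C` (for example with a grid search) and keep the one that validates best.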

Optimizers
Optimizers in artificial intelligence (AI) and machine learning (ML)
are mathematical functions that help reduce error and increase
efficiency. They are used to find the best model parameters by
adjusting the weights and learning rate of a neural network.
Here are some examples of optimizers:
• Stochastic gradient descent - An algorithm that uses randomly
selected training samples to find the best model parameters.
• Adagrad - An algorithm that adjusts the learning rate of each
parameter to handle sparse data.
• Mini-batch gradient descent - An improvement on standard
gradient descent that updates model parameters after each batch.
• RMSprop - An optimizer that uses a different method for
parameter updates than gradient descent with momentum and
Adagrad.
• Adamax - An extension of Adam that updates weights inversely
proportional to the scaled L2 norm of past gradients.
• AdamW - An optimizer that decouples weight decay from the
gradient update to prevent overfitting.
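As an illustration of what an optimizer does, here is plain gradient descent next to gradient descent with momentum on the toy function f(w) = (w - 3)^2 (the learning rate and momentum values are arbitrary choices):

```python
# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
def grad(w):
    return 2 * (w - 3)

# Vanilla gradient descent: step directly down the gradient.
w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)

# Gradient descent with momentum: a velocity term accumulates
# past gradients, smoothing and accelerating the updates.
w_m, velocity = 0.0, 0.0
for _ in range(100):
    velocity = 0.9 * velocity - 0.1 * grad(w_m)
    w_m += velocity
```

Both variants converge toward the minimum at w = 3; optimizers such as RMSprop and Adam build on these update rules by also adapting the learning rate per parameter.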
WEEK 5: ACTIVITY LOG FOR THE FIFTH WEEK
Day 1: Computer vision
Day 2: Computer vision
Day 3: CNN architecture
Day 4: CNN architecture
Day 5: Transfer learning in computer vision
Day 6: Advanced CNN

DETAILED REPORT
Computer vision
Computer vision is a field of artificial intelligence (AI) that allows
computers to analyse and interpret visual data, such as images and
videos, to identify and understand objects and people:
• How it works
Computer vision uses machine learning and neural networks to teach
computers to analyse visual data and make sense of it. It can then
use this information to perform tasks like object identification, facial
recognition, classification, recommendation, monitoring, and
detection.
• Applications
Computer vision is used in many industries, including energy, utilities,
manufacturing, and automotive. It can also be used in fashion
eCommerce, inventory management, patent search, furniture, and
the beauty industry.
• Advantages
Computer vision can analyse thousands of products or processes per
minute, which can help it notice issues or defects that humans might
miss.
Computer vision is a central component of many modern innovations
and solutions.

CNN architecture
Convolutional Neural Network consists of multiple layers like the
input layer, Convolutional layer, Pooling layer, and fully connected
layers.

Simple CNN architecture
The Convolutional layer applies filters to the input image to extract
features, the Pooling layer downsamples the image to reduce
computation, and the fully connected layer makes the final
prediction. The network learns the optimal filters through
backpropagation and gradient descent.
How Do Convolutional Layers Work?
Convolutional Neural Networks are neural networks that share their
parameters. Imagine you have an image. It can be represented as a
cuboid having a length and width (the dimensions of the image) and
a depth (the channels, as images generally have red, green, and blue
channels).
Now imagine taking a small patch of this image and running a small
neural network, called a filter or kernel, on it, with, say, K outputs,
and representing them vertically. Now slide that neural network
across the whole image; as a result, we will get another image with a
different width, height, and depth. Instead of just R, G, and B
channels we now have more channels but a smaller width and
height. This operation is called Convolution. If the patch size were
the same as that of the image, it would be a regular neural network.
Because of this small patch, we have fewer weights.
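The sliding-filter operation described above can be written out directly in numpy for a single-channel image (the image and kernel values are made up for illustration):

```python
import numpy as np

# A 5x5 single-channel "image" and a 3x3 vertical-edge-detecting kernel.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

# Slide the kernel over every 3x3 patch (stride 1, no padding);
# each output value is the elementwise product of patch and kernel, summed.
h = image.shape[0] - kernel.shape[0] + 1  # output height
w = image.shape[1] - kernel.shape[1] + 1  # output width
feature_map = np.empty((h, w))
for i in range(h):
    for j in range(w):
        patch = image[i:i + 3, j:j + 3]
        feature_map[i, j] = np.sum(patch * kernel)
```

Because this toy image increases by 1 from left to right, the left-minus-right kernel produces the same response at every position; a real image would yield a map highlighting vertical edges.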

Transfer learning in computer vision
Transfer learning in computer vision is a machine learning technique
that uses a pre-trained model to solve a new, related task. This
technique is faster and easier than training a new model from
scratch, which can be time-consuming and require a lot of data and
computing power.
Here's how transfer learning works in computer vision:
• Use pre-trained models
Start with a model that's already been trained on a similar task, such
as identifying dogs.
• Freeze some layers
Leave the early and middle layers of the model alone, which contain
knowledge about basic features like shapes and colours.
• Retrain the later layers
Fine-tune the model's later layers to adapt to the new task. This
process is called fine-tuning.
For example, if a model was originally trained to identify dogs, it can
be fine-tuned to identify cats using a smaller image set that highlights
the differences between the two.
Transfer learning can be used in many computer vision tasks, such as
object detection and image recognition.
Transfer learning is a critical technique in machine learning, offering
solutions to key challenges:
1. Limited Data: Acquiring extensive labeled data is often challenging
and costly. Transfer learning enables us to use pre-trained
models, reducing the dependency on large datasets.
2. Enhanced Performance: Starting with a pre-trained model,
which has already learned from substantial data, allows for
faster and more accurate results on new tasks—ideal for
applications needing high accuracy and efficiency.
3. Time and Cost Efficiency: Transfer learning shortens training
time and conserves resources by utilizing existing models,
eliminating the need for training from scratch.
4. Adaptability: Models trained on one task can be fine-tuned for
related tasks, making transfer learning versatile for various
applications, from image recognition to natural language
processing.
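A toy numeric sketch of the freeze-and-fine-tune idea (nothing here is a real pretrained network; random fixed weights merely stand in for a frozen feature extractor, and only the new output layer is trained):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # made-up binary task

# "Pretrained" layer: these weights stand in for a feature extractor
# learned on an earlier task; they are frozen and never updated.
W_frozen = rng.normal(size=(4, 8))
features = np.maximum(X @ W_frozen, 0.0)    # fixed ReLU features

# Fine-tuning: only the new output layer (a logistic unit) is trained.
w_out = np.zeros(8)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(features @ w_out)))
    grad = features.T @ (p - y) / len(y)    # gradient of the logistic loss
    w_out -= 0.1 * grad                     # only w_out changes

p = 1.0 / (1.0 + np.exp(-(features @ w_out)))
train_accuracy = np.mean((p > 0.5) == y)
```

In a real pipeline the frozen layer would come from a network pretrained on a large dataset, and frameworks expose the same idea by marking layers as non-trainable.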

Advanced CNN
CNNs have evolved significantly over the years, leading to the
development of several advanced architectures. Some prominent
ones include:
• ResNet (Residual Network): Explore the concept of
residual learning, where the network learns residual functions
to optimize the learning process.
• InceptionNet (GoogLeNet): Unravel the inception module,
which incorporates multiple convolutional filters of different
sizes within a single layer.
• Xception: Dive into this architecture, which combines
depthwise separable convolutions with the inception module for
better performance.
• MobileNet: Understand the lightweight design of MobileNet,
optimized for mobile and embedded devices.
To further improve the performance and generalization of CNNs,
advanced training techniques are employed:
• Transfer Learning: Learn how to leverage pre-trained CNN
models, fine-tuning them for specific tasks, saving training time
and resources.
• Data Augmentation: Discover how data augmentation can
increase the diversity of the training data, reducing overfitting
and improving the model's robustness.
As you advance in your deep learning journey with CNNs, consider
these next steps:
• Object Detection with CNNs: Explore how CNNs can be used
for object detection, localizing and classifying multiple objects
within an image.
• Semantic Segmentation: Understand the concept of semantic
segmentation, where each pixel in an image is classified into a
particular class or category.
• Instance Segmentation: Delve into the more advanced
technique of instance segmentation, which involves detecting
and delineating individual instances of objects in an image.
WEEK 6: ACTIVITY LOG OF THE SIXTH WEEK
Day 1: Basic NLP concepts and models
Day 2: Deep NLP (DNLP)
Day 3: Deep NLP (DNLP)
Day 4: Forecasting deep learning
Day 5: Seq2Seq models

DETAILED REPORT
Basic NLP models and concepts
In the data science domain, Natural Language Processing (NLP) is a
very important component for its vast applications in various
industries/sectors. For a human it is relatively easy to understand
language, but machines are not able to interpret it as easily. NLP is
the set of techniques that enables machines to interpret and
understand the way humans communicate.
At present social media is a golden data mine for natural languages,
be it any type of reviews from any online sites (Amazon, Google,
etc.), or simply posts from Twitter, Facebook, LinkedIn, or emails. The
business use cases (Classifications, Text Summarization, Triaging,
Interactive voice responses (IVR), Language translation, Chatbots)
might be different in each sector but NLP defines the core underlying
solution of these use cases.
Natural languages are a free form of text which means it is very much
unstructured in nature. So, cleaning and preparing the data to extract
the features are very important for the NLP journey while developing
any model. This section covers the basic but important steps below
and shows how to implement them in Python using different
packages to develop an NLP-based classification model.
A) Data Cleaning
B) Tokenization
C) Vectorization/Word Embedding
D) Model Development
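Steps B-D can be sketched in a few lines with scikit-learn (the tiny review dataset is invented for illustration, and step A, cleaning, is assumed already done):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical cleaned review texts with sentiment labels.
texts = ["great product, loved it", "terrible, waste of money",
         "loved the quality", "terrible experience, broken on arrival",
         "great value", "waste of time"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

# B) CountVectorizer tokenizes the text; C) it vectorizes each review
# into word counts; D) Naive Bayes is the classification model trained
# on those vectors.
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

prediction = model.predict(["loved this great product"])[0]
```

Swapping `CountVectorizer` for `TfidfVectorizer` is a common refinement of the vectorization step.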

Deep NLP (DNLP)
Deep Natural Language Processing (Deep NLP) is an emerging
subfield of Artificial Intelligence (AI) and Natural Language Processing
(NLP) that focuses on utilizing deep learning techniques to analyze
and interpret human language at a more sophisticated level
compared to traditional NLP approaches. While conventional NLP
methods rely on handcrafted rules and shallow statistical models,
Deep NLP leverages the power of neural networks and other deep
learning architectures to capture the intricate nuances and
complexities of natural language. This advanced approach enables
computers to better understand and process human language,
opening up new possibilities for various applications such as
intelligent tutoring systems, machine translation, information
extraction, and question answering.
Techniques and Components of Deep NLP
At the core of Deep NLP lie deep learning architectures specifically
designed for processing natural language. Neural networks,
particularly recurrent neural networks (RNNs) and transformers, have
proven to be effective in capturing the sequential nature of language
and learning meaningful representations of words, phrases, and
sentences. These architectures form the foundation upon which
various components of Deep NLP are built.
Syntactic parsing is one crucial component of Deep NLP, which
involves analysing the grammatical structure of sentences and
identifying the relationships between words. Deep learning-based
parsers have shown significant improvements over traditional rule-based
or statistical parsers, enabling more accurate and robust
syntactic analysis.
Semantic analysis is another essential aspect of Deep NLP, aiming to
understand the meaning of words and sentences beyond their
surface-level representations. Word sense disambiguation
techniques, powered by deep learning, help in identifying the correct
meaning of a word based on its context. Semantic role labeling, another
important task, involves identifying the semantic relationships
between predicates and their arguments in a sentence. Deep
learning models have greatly enhanced the accuracy and efficiency of
these semantic analysis tasks.

Forecasting deep learning
In a world inundated with data, the ability to predict future trends
and patterns has become invaluable. Time series forecasting stands
at the forefront of this endeavour, offering insights that drive
decision-making across industries, from finance to healthcare. As
technology evolves, the methodologies used in forecasting have also
undergone a significant transformation. Deep Learning, a subset of
machine learning characterized by its use of neural networks, has
emerged as a game-changer in the realm of time series
forecasting. This section delves into the intersection of deep
learning and time series analysis, exploring how this synergy is
revolutionizing our approach to predicting the future.
Time series data, with its sequential nature and temporal
dependencies, poses unique challenges and opportunities.
Traditional statistical methods, while effective, often fall short in
capturing the complexities inherent in this data. This is where deep
learning steps in, bringing its robust computational power and
flexibility to bear on the nuanced patterns hidden within time series
data.
Time series forecasting is a statistical technique that analyzes
sequential data to predict future events or trends. This type of data is
characterized by its chronological order, with each data point
associated with a specific time interval. The essence of time series
forecasting lies in understanding and leveraging the patterns found in
historical data to foresee future occurrences.
In diverse fields like meteorology, economics, and healthcare, time
series data plays a pivotal role. For instance, forecasting weather
patterns relies heavily on analysing past meteorological data.
Similarly, in finance, stock market trends are predicted based on
historical stock performance data. In the healthcare sector, the
progression of a disease or patient health trends is often predicted
using time series analysis.
Time series forecasting is more than just a linear extrapolation of
past data. It involves identifying various underlying factors such as
seasonality, trends, and cycles.

Seq2Seq models
The Seq2Seq model, or Sequence-to-Sequence model, is a machine
learning architecture designed for tasks involving sequential data. It
takes an input sequence, processes it, and generates an output
sequence. The architecture consists of two fundamental
components: an encoder and a decoder. Seq2Seq models have
significantly improved the quality of machine translation
systems making them an important technology. The article aims to
explore the fundamentals of the seq2seq model and its applications
along with its advantages and disadvantages.
The Seq2Seq model is a kind of machine learning model that takes
sequential data as input and also generates sequential data as
output. Before the arrival of Seq2Seq models, machine
translation systems relied on statistical methods and phrase-based
approaches. The most popular approach was the use of phrase-based
statistical machine translation (SMT) systems, which were not
able to handle long-distance dependencies or capture global
context.
Seq2Seq models addressed the issues by leveraging the power of
neural networks, especially recurrent neural networks. The concept
of the seq2seq model was introduced in the paper titled “Sequence to
Sequence Learning with Neural Networks” by Google. The
architecture discussed in this research paper is the fundamental
framework for natural language processing tasks. The seq2seq
models are encoder-decoder models. The encoder processes the
input sequence and transforms it into a fixed-size hidden
representation. The decoder uses the hidden representation to
generate the output sequence. The encoder-decoder structure allows
them to handle input and output sequences of different lengths,
making them well suited to sequential data. Seq2Seq models are
trained using a dataset of input-output pairs, where the input is a
sequence of tokens, and the output is also a sequence of tokens. The
model is trained to maximize the likelihood of the correct output
sequence given the input sequence.
Advances in neural network architectures led to the development of
a more capable seq2seq model named the transformer. “Attention Is
All You Need” is the research paper that first introduced the
transformer model in the era of deep learning, after which
language-related models took a huge leap. The main idea behind the
transformer model was that of attention layers and different encoder
and decoder stacks, which were highly efficient at performing
language-related tasks.
Seq2Seq models have been widely used in NLP tasks due to their
ability to handle variable-length input and output sequences.
Additionally, the attention mechanism is often used in Seq2Seq
models to improve performance and it allows the decoder to focus
on specific parts of the input sequence when generating the output.
WEEK 7: ACTIVITY LOG FOR THE SEVENTH WEEK
Day 1: Advanced NLP models
Day 2: Generative AI
Day 3: Generative AI
Day 4: GAN
Day 5: LLM
Day 6: Transfer learning in NLP

DETAILED REPORT
Advanced NLP models
1. GPT-4 (Generative Pre-trained Transformer 4)
GPT-4 is a multimodal large language model created by OpenAI and
the fourth in its GPT series. It was released on March 14, 2023, and
has been made publicly available via ChatGPT Plus, with access to its
commercial API being provided via a waitlist. It was trained to predict
the next token.
2. GPT-3: Generative Pre-trained Transformer 3
GPT-3 is a massive NLP model that has revolutionized the field of NLP.
It has 175 billion parameters, one of the highest parameter counts of
any NLP model at the time of its release. GPT-3 can generate
human-like responses to prompts and complete sentences,
paragraphs, and even whole articles.
3. BERT: Bidirectional Encoder Representations from Transformers
BERT is a pre-trained NLP model widely used in various NLP tasks,
such as sentiment analysis, question answering, and text
classification. It generates contextualized word embeddings, meaning
it can generate embeddings for words based on their context within a
sentence.
4. ELMo: Embeddings from Language Models
ELMo is a pre-trained NLP model that generates contextualized word
embeddings. ELMo uses a bidirectional language model that
captures the dependencies between words in both directions. It uses
these dependencies to generate embeddings for each word based on
its context within a sentence.
5. RoBERTa: Robustly Optimized BERT Approach
RoBERTa is a variant of BERT trained on a larger text corpus with more
advanced training techniques. RoBERTa has achieved state-of-the-art
performance on many NLP benchmarks, including sentiment analysis,
text classification, and question answering.

Generative AI
Generative AI, sometimes called gen AI, is artificial intelligence (AI)
that can create original content—such as text, images, video, audio
or software code—in response to a user’s prompt or request.
Generative AI relies on sophisticated machine learning models
called deep learning models—algorithms that simulate the learning
and decision-making processes of the human brain. These models
work by identifying and encoding the patterns and relationships in
huge amounts of data, and then using that information to
understand users' natural language requests or questions and
respond with relevant new content.
AI has been a hot technology topic for the past decade, but
generative AI, and specifically the arrival of ChatGPT in 2022, has
thrust AI into worldwide headlines and launched an unprecedented
surge of AI innovation and adoption. Generative AI offers enormous
productivity benefits for individuals and organizations, and while it
also presents very real challenges and risks, businesses are forging
ahead, exploring how the technology can improve their internal
workflows and enrich their products and services. According to
research by the management consulting firm McKinsey, one third of
organizations are already using generative AI regularly in at least one
business function. Industry analyst Gartner projects more than 80%
of organizations will have deployed generative AI applications or used
generative AI application programming interfaces (APIs) by 2026.

GAN
GAN stands for Generative Adversarial Network, a machine learning
model that creates new data that resembles training data:
• How it works
GANs train two neural networks to compete against each other to
generate new data. The generator creates outputs that could be
mistaken for real data, while the discriminator tries to identify which
outputs are fake.
• What it can do
GANs can create new images from an existing image database, or
new music from a database of songs.
• How it was developed
Ian Goodfellow and his colleagues developed the concept of GANs in
June 2014.
GANs are a prominent framework for generative artificial
intelligence. They use adversarial training to generate new samples
with the same statistics as the training samples.
Types of GANs
1. Vanilla GAN: This is the simplest type of GAN. Here, the
Generator and the Discriminator are simple multilayer
perceptrons. In a vanilla GAN, the algorithm is straightforward:
it tries to optimize the mathematical equation using stochastic
gradient descent.
2. Conditional GAN (CGAN): CGAN can be described as a deep
learning method in which some conditional parameters are put
into place.
• In CGAN, an additional parameter ‘y’ is added to the
Generator for generating the corresponding data.
• Labels are also added to the input of the Discriminator to
help it distinguish the real data from the fake generated
data.
3. Deep Convolutional GAN (DCGAN): DCGAN is one of the most
popular and also the most successful implementations of GAN.
It is composed of ConvNets in place of multilayer perceptrons.
• The ConvNets are implemented without max pooling,
which is replaced by convolutional stride.
• Also, the layers are not fully connected.
4. Laplacian Pyramid GAN (LAPGAN): The Laplacian pyramid is a
linear invertible image representation consisting of a set of
band-pass images, spaced an octave apart, plus a low-
frequency residual.
• This approach uses multiple Generator and Discriminator
networks at different levels of the Laplacian pyramid.
LLM
LLM could refer to Master of Laws or large language model:
 Master of Laws
LLM is an abbreviation for Master of Laws, which is a postgraduate
degree in legal education. It's a globally recognized degree that's
usually earned after one year of full-time legal studies. Law students
and professionals often pursue an LLM to improve their legal
expertise and career prospects. The abbreviation comes from the
Latin phrase Legum Magister, where Legum is the plural of lex, which
means law.
 Large language model
LLM is also an abbreviation for large language model, which is a type
of artificial intelligence (AI) program that can recognize and generate
text. LLMs are trained on large sets of data and are built on machine
learning, specifically a type of neural network called a transformer
model.
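A full transformer is far beyond a short example, but the core idea of a language model, predicting the next token from what came before, can be sketched with a toy bigram model in pure Python. The corpus here is invented for illustration, and a real LLM learns transformer weights over billions of tokens rather than simple bigram counts.

```python
import random
from collections import defaultdict

# Toy training corpus (an LLM would use billions of tokens).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling the next word in proportion
    to how often it followed the current word in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = bigrams[out[-1]]
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Real LLMs condition on a long context window rather than a single previous word, which is what the transformer architecture makes practical.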
Transfer learning in NLP
Transfer learning is an important tool in natural language processing
(NLP) that helps build powerful models without needing massive
amounts of data. This section explains what transfer learning is, why
it's important in NLP, and how it works.
Transfer learning is crucial in NLP due to its ability to leverage
knowledge learned from one task or domain and apply it to another,
typically related, task or domain. This approach is especially valuable
in NLP because:
1. Data Efficiency: NLP models often require large amounts of
labeled data to perform well. Transfer Learning allows models to
be pretrained on a large corpus of text, such as Wikipedia, and
then fine-tuned on a smaller, task-specific dataset. This reduces
the need for a massive amount of labeled data for every specific
task.
2. Resource Savings: Training large-scale language models from
scratch can be computationally expensive and time-consuming.
By starting with a pretrained model, the fine-tuning process
requires fewer resources, making it more accessible for
researchers and practitioners.
3. Performance Improvement: Pretrained models have already
learned useful linguistic features and patterns from vast
amounts of text. Fine-tuning these models on a specific task
often leads to improved performance compared to training a
model from scratch, especially when the task has a limited
amount of labeled data.
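The pretrain-then-fine-tune workflow can be sketched in pure Python. This is a drastic simplification: real transfer learning reuses pretrained network weights (e.g. a transformer's), whereas here "pretraining" only builds a vocabulary-based featurizer from an unlabeled corpus, which a tiny perceptron then reuses on a small labeled set. All corpora and labels below are invented.

```python
from collections import Counter

# Stage 1 ("pretraining"): learn word statistics from a larger unlabeled
# corpus. Here that just means a frequency-ordered vocabulary.
pretrain_corpus = ("good great excellent movie film plot bad awful "
                   "terrible good movie great film bad plot").split()
vocab = [w for w, _ in Counter(pretrain_corpus).most_common()]
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    """Bag-of-words vector over the pretrained vocabulary."""
    vec = [0.0] * len(vocab)
    for w in text.split():
        if w in index:
            vec[index[w]] += 1.0
    return vec

# Stage 2 ("fine-tuning"): train a tiny perceptron on a small labeled set,
# reusing the pretrained featurizer instead of learning it from scratch.
train = [("good great movie", 1), ("awful terrible film", 0),
         ("excellent plot", 1), ("bad movie", 0)]
w = [0.0] * len(vocab)
for _ in range(10):                      # a few perceptron epochs
    for text, label in train:
        x = featurize(text)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        if pred != label:                # perceptron update rule
            for i in range(len(w)):
                w[i] += (label - pred) * x[i]

def predict(text):
    x = featurize(text)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

print(predict("great film"))    # 1 = positive
print(predict("terrible film")) # 0 = negative
```

The point of the two stages is data efficiency: the representation comes from the larger unlabeled corpus, so only the small classifier head needs labeled examples.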
WEEK 8: ACTIVITY LOG FOR THE EIGHTH WEEK
Day 1: Autoencoders
Day 2: Autoencoders
Day 3: AI applications
Day 4: Speech Analytics
Day 5: Reinforcement learning
DETAILED REPORT
Autoencoders
An autoencoder is a type of artificial neural network that learns to
compress and reconstruct data:
 How it works
An autoencoder learns two functions: an encoding function that
transforms the input data, and a decoding function that recreates the
input data from the encoded representation.
 How it's trained
Autoencoders are trained using unsupervised machine learning to
discover latent variables of the input data.
 What it's used for
Autoencoders are used for many problems such as dimensionality
reduction, information retrieval tasks, anomaly detection, and image
segmentation.
 How it's evaluated
The most commonly used loss function for autoencoders is the
reconstruction loss, which measures the difference between the
model input and output.
 Common objective functions
Common objective functions used in autoencoders include Mean
Squared Error (MSE) and Binary Cross-Entropy (BCE).
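Both objective functions compare the input with the decoder's reconstruction. The sketch below (pure Python, with invented input and reconstruction vectors) shows how each is computed:

```python
import math

def mse(x, x_hat):
    """Mean Squared Error reconstruction loss."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def bce(x, x_hat, eps=1e-12):
    """Binary Cross-Entropy loss; assumes values are scaled to [0, 1]."""
    return -sum(a * math.log(b + eps) + (1 - a) * math.log(1 - b + eps)
                for a, b in zip(x, x_hat)) / len(x)

x     = [0.0, 1.0, 1.0, 0.0]   # hypothetical input (e.g. pixel values)
x_hat = [0.1, 0.9, 0.8, 0.2]   # hypothetical decoder reconstruction

print(f"MSE: {mse(x, x_hat):.4f}")   # 0.0250 — small, x_hat is close to x
print(f"BCE: {bce(x, x_hat):.4f}")
```

MSE suits continuous-valued inputs, while BCE is the usual choice when inputs are binary or normalized to [0, 1], as with black-and-white images.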
AI applications
1. AI in Astronomy
2. AI in healthcare
3. AI in gaming
4. AI in finance
5. AI in data security
6. AI in social media
7. AI in travel and transport
8. AI in the automotive industry
9. AI in robotics
Speech analytics
Speech analytics is a technology that analyses audio recordings to
extract business intelligence and customer information. It's used to
improve customer communication and interactions, and can be
applied in a variety of ways, including:
 Call centres
Speech analytics can be used to analyse call recordings and
transcripts to improve customer service. It can help identify common
customer concerns or preferences, and can be used to train agents
and improve their performance.
 Product development
Speech analytics can be used to analyse customer feedback and
suggestions to help develop new products and features.
 Predictive analytics
Speech analytics can be used to anticipate changing customer needs
and predict possible slumps. This can help businesses prepare in
advance and avoid losses.
Speech analytics uses artificial intelligence (AI), machine learning,
natural language processing (NLP), and automatic speech recognition
(ASR) to analyse speech and convert it into structured data. It can
analyse speech in multiple languages and dialects.
Some things that speech analytics can detect include:
 Tone, pitch, energy, and speaker dominance
 Silence, cross talk, speech rate, hesitation, and pauses
 Sentiment of words, such as positive, negative, or neutral
 Anger, sadness, happiness, neutrality, anxiety, vulnerability,
misunderstanding, and complaint indicators
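Some of these low-level signals can be computed directly from ASR output. The sketch below (pure Python, with invented timestamped transcript segments) derives two common call-centre metrics, speech rate and silence ratio:

```python
# Hypothetical ASR output: (start_sec, end_sec, text) segments for one call.
segments = [
    (0.0, 2.5, "hello thanks for calling"),
    (4.0, 6.0, "how can I help you"),
    (9.5, 12.0, "I understand your complaint"),
]

call_length = 12.0  # total recording length in seconds (assumed)

spoken = sum(end - start for start, end, _ in segments)    # 7.0 s of speech
words = sum(len(text.split()) for _, _, text in segments)  # 13 words

speech_rate = words / (spoken / 60)        # words per minute of speech
silence_ratio = 1 - spoken / call_length   # fraction of the call with no speech

print(f"speech rate: {speech_rate:.0f} wpm")    # 111 wpm
print(f"silence ratio: {silence_ratio:.0%}")    # 42%
```

Higher-level signals such as sentiment or complaint indicators would be layered on top of this structured data, typically using NLP models over the transcript text.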
Reinforcement learning
Reinforcement Learning (RL) is a branch of machine learning focused
on making decisions to maximize cumulative rewards in a given
situation. Unlike supervised learning, which relies on a training
dataset with predefined answers, RL involves learning through
experience. In RL, an agent learns to achieve a goal in an uncertain,
potentially complex environment by performing actions and receiving
feedback through rewards or penalties.
Key Concepts of Reinforcement Learning
 Agent: The learner or decision-maker.
 Environment: Everything the agent interacts with.
 State: A specific situation in which the agent finds itself.
 Action: All possible moves the agent can make.
 Reward: Feedback from the environment based on the action
taken.
How Reinforcement Learning Works
RL operates on the principle of learning optimal behaviour through
trial and error. The agent takes actions within the environment,
receives rewards or penalties, and adjusts its behaviour to maximize
the cumulative reward. This learning process is characterized by the
following elements:
 Policy: A strategy used by the agent to determine the next
action based on the current state.
 Reward Function: A function that provides a scalar feedback
signal based on the state and action.
 Value Function: A function that estimates the expected
cumulative reward from a given state.
 Model of the Environment: A representation of the
environment that helps in planning by predicting future states
and rewards.
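These elements can be illustrated with a minimal Q-learning sketch in pure Python. The environment is a hypothetical 1-D corridor of five states with a reward at the right end; the agent learns, by trial and error, a policy of always moving right.

```python
import random

# A toy 1-D corridor: states 0..4, start at 0, reward +1 at state 4.
# Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

rng = random.Random(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

def step(state, action):
    """Environment dynamics: move left/right, reward 1 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for _ in range(200):                         # episodes of trial and error
    state, done = 0, False
    while not done:
        # Epsilon-greedy policy: mostly exploit, occasionally explore.
        if rng.random() < EPSILON:
            action = rng.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: move Q toward reward + discounted best future value.
        target = reward + GAMMA * max(Q[nxt])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

# After training, the greedy policy is "go right" in every non-goal state.
print([("left", "right")[q[1] > q[0]] for q in Q[:GOAL]])
```

Note how each concept appears in the code: the Q-table is the value function, the epsilon-greedy rule is the policy, `step` is the model of the environment, and the update line implements learning from the reward signal.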
CHAPTER 5: OUTCOMES DESCRIPTION
In conclusion, I can say that this internship was a great experience.
Thanks to this project, I acquired deeper knowledge concerning my
technical skills, but I also benefited personally, since AI & ML are
now a common part of the modern world. Indeed, I grew more
independent, both at work and in everyday life, and I realized that I
could do more things than I thought, such as learning new things by
myself.
There are huge opportunities available for students who want to
work in this field. Many private and public organizations hire people
who are proficient in AI & ML for their online work and development.
With the rapid growth of the online industry, the demand for artificial
intelligence and machine learning professionals is increasing, and this
has created huge job opportunities for aspirants in the coming days.
CHAPTER 6: FUTURE SCOPE
For someone with no experience in this field, finding work can be a
real challenge. A successful internship can help an individual turn it
into a career opportunity, and it opens up future scope in several
areas:
 Career opportunities
The demand for AI and ML professionals is increasing, with job
postings for roles like Machine Learning Engineers, Data Scientists,
and AI Researchers on the rise.
 Industry applications
AI and ML can be used in a variety of industries, including healthcare,
transportation, manufacturing, and customer experience.
 Data analysis
AI algorithms can improve their accuracy and precision through
iterations, which can benefit data analysts working with large
datasets.
 Smart home assistants
Smart home appliances like Amazon Echo and Google Home use AI to
perform tasks when spoken to.
 Education
AI can be used to improve the education experience.
 Research and development
There are many opportunities for professionals to lead research and
development in AI.
 Smart home sector
The scope of AI in the smart home sector is growing.
 Data collection
Humans generate a large amount of data every day, which can be fed
to machine learning algorithms to help companies improve sales.
To succeed in the field, professionals should stay up to date with the
latest trends and advancements, and develop a thirst for continuous
learning.