
WEEK-2 MACHINE LEARNING & CLOUD COMPUTING

Fundamentals of Machine learning


What is Machine Learning?
• Machine Learning is the science (and art) of programming computers so they can learn from
data.

• Machine Learning is a field of study that gives computers the ability to learn without being
explicitly programmed.

• Machine Learning is the use and development of computer systems that can learn and adapt
without following explicit instructions, by using algorithms and statistical models to analyse and
draw inferences from patterns in data.
Definitions of Machine Learning
▶ Machine learning (ML) is a type of artificial intelligence (AI) that allows software
applications to become more accurate at predicting outcomes without being explicitly
programmed to do so. Machine learning algorithms use historical data as input to predict new
output values.
▶ Machine Learning is a branch of Artificial Intelligence that allows machines to learn and
improve from experience automatically. It is defined as the field of study that gives
computers the capability to learn without being explicitly programmed. It is quite different
from traditional programming.

Machine learning types


Machine learning is classified into the following types:
1. Supervised Learning
2. Unsupervised Learning
3. Semi- supervised Learning
4. Reinforcement Learning

Supervised Learning:
1. Supervised learning is a type of machine learning in which the algorithm learns from a labelled
dataset. This machine learning method needs supervision, similar to a student-teacher
relationship.
2. In supervised learning, the training data you feed to the algorithm includes the desired solutions,
called labels.
3. In supervised learning, a machine is trained with well-labelled data, which means some data is
already tagged with correct outputs.

Dept. of CSE | SPT 1


4. So, whenever new data is introduced into the system, supervised learning algorithms analyze this
sample data and predict correct outputs with the help of the labelled data.

From the above figure we can see that the machine is trained with a well-labelled dataset, which
includes the features of the apple. Hence, when an apple is given as input, the model is able to
predict that it is an apple.


It is classified into two different categories of algorithms. These are as follows:


• Classification: used when the output is a category, such as yellow or blue, right or wrong, etc.
• Regression: used when the output variable is a real value, such as age or height.
Supervised learning works the same way humans learn, using the labelled data points of a training
set. It helps optimize the performance of models using experience and can solve various complex
computational problems.
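As a minimal sketch of supervised classification, the snippet below trains a 1-nearest-neighbour classifier on a small labelled dataset and predicts the label of an unseen sample. The fruit features (weight, diameter) and labels are invented for illustration; real systems use far larger datasets and library implementations.

```python
# A minimal supervised-learning sketch: 1-nearest-neighbour classification
# on a labelled dataset. All data values here are made up for illustration.

def predict(training_data, sample):
    """Return the label of the training example closest to `sample`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda pair: distance(pair[0], sample))
    return nearest[1]

# Labelled training set: (weight in grams, diameter in cm) -> fruit name
training_data = [
    ((150, 7.0), "apple"),
    ((160, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((110, 5.8), "orange"),
]

print(predict(training_data, (155, 7.2)))  # a new, unseen sample
```

Because the training data is labelled, the algorithm can map a new sample directly to a known output, exactly as described above.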

Unsupervised Learning:
1. Unlike supervised learning, unsupervised learning does not require classified or well-labelled
data to train a machine.

2. It aims to group unsorted information based on patterns and differences, even without any
labelled training data. In unsupervised learning no supervision is provided, so no sample
outputs are given to the machine.
3. Hence, machines must find hidden structures in unlabeled data on their own.

It is classified into two different categories of algorithms. These are as follows:


• Clustering: used when there is a requirement for inherent grouping in the training data, e.g.,
grouping students by their area of interest.
• Association: used to discover rules that describe relationships within the data, e.g., students
who are interested in ML also tend to be interested in AI.
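Clustering can be sketched with a few iterations of 1-D k-means: no labels are given, and the algorithm discovers the grouping on its own. The exam scores below are invented, and this toy version assumes two well-separated groups.

```python
# A minimal unsupervised-learning sketch: splitting 1-D points into two
# clusters with a few k-means iterations. No labels are supplied.

def kmeans_1d(points, iterations=10):
    """Group `points` around two moving centroids (assumes 2 clear groups)."""
    c1, c2 = min(points), max(points)          # initial centroids
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)                 # move centroid to group mean
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# Exam scores with two natural groups; no "group" label is given anywhere.
scores = [21, 23, 25, 78, 80, 84]
low, high = kmeans_1d(scores)
print(low, high)   # the two discovered clusters
```

The machine finds the hidden structure (two score bands) entirely from the unlabeled data.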


Semi- supervised Learning:


1. Semi-supervised learning algorithms can deal with partially labelled training data, usually a
lot of unlabeled data and a little labelled data.
2. Some photo-hosting services, such as Google Photos, are good examples of this. Once you
upload all your family photos to the service, it automatically recognizes that the same person
A shows up in photos 1, 5, and 11, while another person B shows up in photos 2, 5, and 7.
3. This is the unsupervised part of the algorithm (clustering). Now all the system needs is for
you to tell it who these people are. Just one label per person, and it can name everyone in
every photo, which is useful for searching photos.

Reinforcement Learning
1. Reinforcement learning is a very different type of learning. The learning system, called an
agent in this context, can observe the environment, select and perform actions, and get
rewards in return (or penalties in the form of negative rewards).
2. It must then learn by itself what the best strategy, called a policy, is to get the most reward
over time. A policy defines what action the agent should choose when it is in a given
situation.
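The agent/action/reward loop above can be sketched with a two-armed bandit and an epsilon-greedy policy (mostly exploit the best-looking action, occasionally explore). The reward probabilities, step count, and epsilon value are invented for illustration.

```python
import random

# A minimal reinforcement-learning sketch: an epsilon-greedy agent learns
# which of two slot-machine "arms" yields more reward. Numbers are invented.

def run_bandit(arm_probs, steps=2000, epsilon=0.1, seed=0):
    """Learn a reward estimate per arm from observed rewards."""
    rng = random.Random(seed)
    values = [0.0] * len(arm_probs)   # estimated reward of each action
    counts = [0] * len(arm_probs)
    for _ in range(steps):
        if rng.random() < epsilon:                    # explore: random action
            arm = rng.randrange(len(arm_probs))
        else:                                         # exploit: best-known action
            arm = values.index(max(values))
        reward = 1 if rng.random() < arm_probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return values

estimates = run_bandit([0.2, 0.8])   # arm 1 pays off far more often
print(estimates)
```

The learned `values` array is effectively the policy input: in any state, the agent picks the action with the highest estimated long-run reward.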


Differentiate between supervised machine learning and unsupervised machine learning

• Supervised learning algorithms are trained using labelled data; unsupervised learning
algorithms are trained using unlabeled data.
• A supervised learning model takes direct feedback to check whether it is predicting the
correct output; an unsupervised learning model does not take any feedback.
• A supervised learning model predicts an output; an unsupervised learning model finds hidden
patterns in the data.
• In supervised learning, input data is provided to the model along with the output; in
unsupervised learning, only input data is provided.
• The goal of supervised learning is to train the model so that it can predict the output for new
data; the goal of unsupervised learning is to find hidden patterns and useful insights in an
unknown dataset.
• Supervised learning needs supervision to train the model; unsupervised learning does not.
• Supervised learning is categorized into classification and regression problems; unsupervised
learning is categorized into clustering and association problems.
• Supervised learning is used when we know the inputs as well as the corresponding outputs;
unsupervised learning is used when we have only input data and no corresponding output data.
• Supervised learning includes algorithms such as linear regression, logistic regression, support
vector machines, multi-class classification, and decision trees; unsupervised learning includes
algorithms such as clustering, KNN, and the Apriori algorithm.
• A supervised learning model generally produces more accurate results; an unsupervised
learning model may give less accurate results by comparison.


Machine learning workflow


Machine learning workflows define which phases are implemented during a machine learning
project. The typical phases include:
1. data collection,
2. data pre-processing,
3. building datasets,
4. model training and refinement,
5. evaluation, and
6. deployment to production.



Machine learning applications
1. Image Recognition:

Image recognition is used to identify objects, persons, places, digital images, etc. A popular use
case of image recognition and face detection is automatic friend tagging.

2. Speech Recognition

Speech recognition is the process of converting voice instructions into text; it is also known as
"speech to text" or "computer speech recognition." At present, machine learning algorithms are
widely used in speech-recognition applications. Google Assistant, Siri, Cortana, and Alexa use
speech recognition technology to follow voice instructions.


3. Traffic prediction:

If we want to visit a new place, we take the help of Google Maps, which shows us the correct path
with the shortest route and predicts the traffic conditions. It predicts whether traffic is clear,
slow-moving, or heavily congested with the help of two inputs:

o Real-time location of the vehicle from the Google Maps app and sensors
o Average time taken on past days at the same time of day

4. Product recommendations:

Machine learning is widely used by various e-commerce and entertainment companies such
as Amazon, Netflix, etc., for product recommendations to users. Whenever we search for a
product on Amazon, we start getting advertisements for the same product while browsing the
internet in the same browser, and this is because of machine learning.

5. Self-driving cars:

One of the most exciting applications of machine learning is self-driving cars, where machine
learning plays a significant role. Tesla, a popular car manufacturer, is working on self-driving
cars, using machine learning methods to train car models to detect people and objects while
driving.

6. Email Spam and Malware Filtering:

Whenever we receive a new email, it is automatically filtered as important, normal, or spam. We
receive important mail in our inbox with the important symbol and spam emails in our spam box,
and the technology behind this is machine learning. Below are some spam filters used by
Gmail:

o Content Filter
o Header filter
o General blacklists filter
o Rules-based filters
o Permission filters
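A content filter of the kind listed above can be sketched as a simple keyword score: count known spam words in the message and flag it past a threshold. The keyword set and threshold below are invented for illustration; real filters are far more sophisticated and are typically learned from data.

```python
# A minimal sketch of a content filter: score a message by counting known
# spam keywords. Keyword list and threshold are invented for illustration.

SPAM_KEYWORDS = {"lottery", "winner", "free", "prize", "urgent"}

def is_spam(message, threshold=2):
    """Flag the message if it contains `threshold` or more spam keywords."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & SPAM_KEYWORDS) >= threshold

print(is_spam("Urgent! You are a lottery winner, claim your free prize"))
print(is_spam("Meeting moved to 3pm, see agenda attached"))
```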

7. Online Fraud Detection:

Machine learning is making our online transactions safe and secure by detecting fraudulent
transactions. Whenever we perform an online transaction, there are various ways a fraudulent
transaction can take place, such as fake accounts, fake IDs, or money being stolen in the middle of
a transaction. To detect this, a feed-forward neural network helps us by checking whether a
transaction is genuine or fraudulent.

For each genuine transaction, the output is converted into hash values.
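The feed-forward network mentioned above can be sketched as a tiny fixed-weight scorer for a transaction. The features, weights, and bias below are all invented for illustration; a real fraud model would learn its weights from historical transaction data.

```python
import math

# A minimal sketch of a feed-forward pass: one hidden layer scoring a
# transaction. Features, weights, and bias are invented for illustration.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def fraud_score(features, w_hidden, w_out, b_out):
    """score = sigmoid(w_out . sigmoid(w_hidden @ features) + b_out)"""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Hypothetical binary features: (unusually large amount?, new device?,
# foreign location?)
w_hidden = [[2.0, 1.0, 0.5], [0.5, 2.0, 2.0]]
w_out = [3.0, 3.0]
b_out = -4.0

risky = fraud_score([1, 1, 1], w_hidden, w_out, b_out)   # all red flags set
normal = fraud_score([0, 0, 0], w_hidden, w_out, b_out)  # no red flags
print(round(risky, 2), round(normal, 2))
```

A score above 0.5 would be treated as suspicious; below it, the transaction passes as genuine.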

8. Medical Diagnosis:

In medical science, machine learning is used for disease diagnosis. With it, medical technology is
growing very fast and is able to build 3D models that can predict the exact position of lesions in
the brain. This helps in finding brain tumours and other brain-related diseases easily.

Challenges in ML
1. Inadequate Training Data
A major issue in applying machine learning algorithms is the lack of both quality and quantity of
data. Once the data is collected, it has to be validated to check whether it is sufficient for the use
case.

2. Poor quality of data:

Data scientists report that inadequate, inaccurate, noisy, and unclean data severely degrade
machine learning algorithms, leading to lower classification accuracy and low-quality results.
Hence, data quality is a major common problem when applying machine learning algorithms.

3. Non-representative training data: The training data should be representative of the new cases
the model must generalize to, i.e., the data used for training should cover the cases that have
occurred and that are going to occur. With a non-representative training set, the trained model is
unlikely to make accurate predictions.

4. Irrelevant/unwanted features: If the training data contains a large number of irrelevant features
and not enough relevant features, the machine learning system will not give the expected results.

5. Overfitting the training data: When a model is too complex relative to the amount and noisiness
of its training data, it starts capturing noise in the training set. This negatively affects the model's
performance on new data.

6. Underfitting the training data: Underfitting is the opposite of overfitting. It occurs when the
model is too simple, or is trained on too little data, to learn the underlying structure of the data,
which destroys the accuracy of the machine learning model.

7. Model Selection:
There are many different ML algorithms to choose from, and selecting the right one for a particular
problem is challenging.

8. Feature Engineering: Extracting useful features from raw data is a critical step in ML, but it can
be difficult to identify the most relevant features.


9. Monitoring and maintenance:

Generalized output is mandatory for any machine learning model, so regular monitoring and
maintenance are compulsory. As the data and the required behaviour change over time, the code,
as well as the resources for monitoring it, must be updated.

10. Lack of skilled resources:

Although machine learning and artificial intelligence are continuously growing in the market,
these fields are still young compared to others, and there is a shortage of skilled manpower. We
need people with in-depth knowledge of mathematics, statistics, and technology to develop and
manage machine learning systems.

11. Deployment:
Deploying ML models in production environments can be difficult, as it requires expertise in both
ML and software engineering.
Building a model
1. Collecting data: machines initially learn from data. The quality and quantity of the data
directly determine how good the predictive model can be.
2. Preparing the data: we load our data into a suitable place and prepare it for use in machine
learning training.
3. Choosing a model: there are many models that researchers and data scientists have created
over the years; choosing the right one is important.
4. Training the model: the model is trained on the dataset, finding features and patterns.
5. Evaluating the model: evaluation allows us to test our model against data that has never been
used for training.
6. Parameter tuning: it may be possible to further improve the trained model by tuning its
parameters.
7. Making predictions: prediction, or inference, is the step where we get to answer questions.
This is the point of all this work, where the value of machine learning is realized.
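Steps 4-7 above can be sketched on a toy problem: fit a straight line y = a*x + b to collected data by least squares, evaluate it on held-out points, and make a prediction. All data values are invented for illustration.

```python
# A sketch of model training, evaluation, and prediction on a toy
# regression problem. Data values are invented for illustration.

def train(points):
    """Least-squares fit of a line y = a*x + b to (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def evaluate(model, points):
    """Mean absolute error on data never used for training."""
    a, b = model
    return sum(abs((a * x + b) - y) for x, y in points) / len(points)

train_set = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]   # training data
test_set = [(5, 10.1), (6, 11.8)]                      # held-out data

model = train(train_set)                 # step 4: training
print("test error:", evaluate(model, test_set))        # step 5: evaluation
a, b = model
print("prediction for x=10:", a * 10 + b)              # step 7: inference
```

Step 6 (parameter tuning) has no free parameters in this closed-form fit, but for most models it would mean repeating steps 4 and 5 with different settings and keeping the best.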

What is Data Science?


• Data science is the study of data to extract meaningful insights for business. It is a
multidisciplinary approach that combines principles and practices from several fields,
including mathematics, statistics, and computer science.

• Data science is an interdisciplinary field that uses scientific methods, processes, algorithms, and
systems to extract knowledge and insights from noisy, structured, and unstructured data, and to
apply that knowledge across a broad range of application domains.


How Data Science works?


Data science uses techniques such as machine learning and artificial intelligence to extract
meaningful information and to predict future patterns and behaviors. Advances in technology, the
internet, and social media have all increased access to big data.

What do data scientist do?


• "More generally, a data scientist is someone who knows how to extract meaning from and
interpret data, which requires both tools and methods from statistics and machine learning, as
well as being human."
• She spends a lot of time in the process of collecting, cleaning, and munging data, because data
is never clean.
• This process requires persistence, statistics, and software engineering skills, skills that are also
necessary for understanding biases in the data and for debugging logging output from code.
Data Science uses
Healthcare Sector
• Data science helps in the interpretation of medical images such as X-rays, MRIs, CT
scans, etc. If the medical history of a patient is known, data science can predict future
medical health.
• Diagnoses of various diseases like cancer, schizophrenia, and Alzheimer's can be made
with the help of pattern matching and spectrum analysis. It can even provide a better
understanding of genetic tissues and the reaction to specific drugs or diseases.
Tourism Industry
• Data science can give an enhanced experience in tourism as well. It can predict flight delays
in advance and send messages to passengers beforehand. It can track deals or offers on the
dynamic pricing of hotels and flights and notify customers about those deals in a timely
manner.
Finance and Insurance Sector
• Data science algorithms can help prevent fraud in the finance sector. They can check a
person's whole financial history and present situation to predict whether they will be able to
pay a debt, thus minimizing the risk of financial losses.


Introduction to Cloud Computing


Essentials of Cloud Computing
What is Cloud
A cloud refers to a distinct IT environment that is designed for the purpose of remotely provisioning scalable
and measured IT resources.
The symbol used to denote the boundary of a cloud environment is as shown in fig. 1.

Figure 1. The symbol used to denote the boundary of a cloud environment.

Challenges with corporate data center environment

1. Capital Expenditure
In traditional data center, applications are hosted on on-premise data center. Huge capital
investment is required to build the data center. Funds used by a company to acquire, upgrade
and maintain physical assets such as property, plants, buildings, technology or equipment is
high.

2. Time to setup the infrastructure


The time taken to setup the infrastructure on premise is high.

3. Maintenance overhead
The enterprise needs to maintain the infrastructure to provide high performance to clients,
which is costly.

4. Resource under/over utilization


Sometimes resources are under-utilized and sometimes over-utilized in a traditional
on-premise data center.

5. Reduced ROI
Maintenance overhead and resource over/under-utilization reduce return on investment.

6. Security Risks
On-prem servers are physical assets and face the same dangers the rest of the building
might be exposed to, including fires, floods, or break-ins.


Benefits of Cloud Computing


1. Pay as you go pricing model
On-demand access to pay-as-you-go computing resources on a short-term basis (such as
processors by the hour), and the ability to release these computing resources when they are no
longer needed.
2. Lower Capital Expenditure
Cloud computing requires lower initial capital investment and users pay only for capacity
used.
3. Secure Storage Management
Various Encryption methods are used to provide security to the data stored in cloud.
4. Device and Location Independent
Cloud resources can be accessed through the internet from anywhere, at any time.
5. Scalability.
By providing pools of IT resources, clouds can instantly and dynamically allocate IT
resources to cloud consumers on demand. This empowers cloud consumers to scale their
cloud-based IT resources to accommodate processing fluctuations and peaks automatically
or manually.
6. Highly automated
Automation is used extensively in the cloud to reduce human error.

Features of Cloud Computing


The following six specific characteristics are common to the majority of cloud environments:
1. On-demand usage
Resources are provisioned on demand from the cloud.

2. Ubiquitous access
Cloud resources can be accessed from anywhere, at any time.

3. Multitenancy (and resource pooling)


The cloud provider pools computing resources to serve multiple customers using a multi-tenant
model, with different physical and virtual resources dynamically assigned and reassigned
according to customer demand.

The customer generally has no control over or knowledge of the exact location of the provided
resources but may be able to specify a location at a higher level of abstraction.

4. Elasticity
Elasticity is the automated ability of a cloud to transparently scale IT resources, as required in
response to runtime conditions or as pre-determined by the cloud consumer or cloud provider.
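A simple threshold rule of the kind an elasticity mechanism might apply can be sketched as follows. The CPU thresholds and step size are invented for illustration; real autoscalers use richer signals, cooldowns, and limits.

```python
# A minimal sketch of an elasticity rule: scale the instance count so
# that average CPU load stays inside a target band. Numbers are invented.

def desired_instances(current, avg_cpu, scale_up_at=70, scale_down_at=30):
    """Return the new instance count for the observed average CPU %."""
    if avg_cpu > scale_up_at:
        return current + 1            # add capacity under load
    if avg_cpu < scale_down_at and current > 1:
        return current - 1            # release idle capacity
    return current                    # within band: no change

print(desired_instances(2, 85))   # busy -> scale out
print(desired_instances(5, 20))   # idle -> scale in
print(desired_instances(3, 50))   # within band -> unchanged
```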


5. Measured usage
The measured usage characteristic represents the ability of a cloud platform to keep track of the
usage of its IT resources, primarily by cloud consumers. Based on what is measured, the cloud
provider can charge a cloud consumer only for the IT resources actually used.
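Measured usage can be sketched as metering each resource and multiplying by a unit rate, so the consumer pays only for what was actually used. The resource names and rates below are invented for illustration.

```python
# A minimal sketch of measured usage: meter consumption and bill only
# for what was used. Resource names and rates are invented.

RATES = {"vm_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.01}

def monthly_bill(usage):
    """Sum metered usage times unit rate over all metered resources."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

usage = {"vm_hours": 300, "gb_stored": 50, "gb_transferred": 120}
print(f"${monthly_bill(usage):.2f}")
```

If the consumer releases resources, the metered amounts shrink and so does the bill, which is the pay-as-you-go model described earlier.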
6. Resiliency
Resiliency can refer to redundant IT resources within the same cloud (but in different physical
locations) or across multiple clouds.

Cloud Deployment Models

A cloud deployment model represents a specific type of cloud environment, primarily
distinguished by ownership, size, and access.
There are four common cloud deployment models:

• Public cloud
• Community cloud
• Private cloud
• Hybrid cloud

1. Public cloud
• The public cloud makes it possible for anybody to access systems and services. It may be
less secure precisely because it is open to everyone.
• The public cloud is one in which cloud infrastructure services are provided over the
internet to the general public or to major industry groups.
• The infrastructure in this model is owned by the entity that delivers the cloud services,
not by the consumer.
• It is a type of cloud hosting that allows customers and users to easily access systems and
services.


• This form of cloud computing is an excellent example of cloud hosting, in which service
providers supply services to a variety of customers.
• In this arrangement, storage, backup, and retrieval services are given for free, as a
subscription, or on a per-user basis. Example: Google App Engine.

Advantages of the Public Cloud Model:
• Minimal investment: because it is a pay-per-use service, there is no substantial upfront
cost, making it excellent for enterprises that require immediate access to resources.
• No setup cost: the entire infrastructure is provided by the cloud service provider, so there
is no need to set up any hardware.
• No infrastructure management required: using the public cloud does not necessitate
infrastructure management.
• No maintenance: the maintenance work is done by the service provider, not the users.
• Dynamic scalability: on-demand resources are available to meet the company's needs.

Disadvantages of the Public Cloud Model:
• Less secure: the public cloud is less secure, as resources are shared publicly and there is
no guarantee of high-level security.
• Low customization: since it is accessed by many users, it cannot be customized to
personal requirements.

2. Private Cloud:
• The private cloud deployment model is the exact opposite of the public cloud deployment
model. It is a one-to-one environment for a single user (customer); there is no need to share
hardware with anyone else.
• The distinction between private and public clouds is in how all of the hardware is handled. It
is also called the "internal cloud", and it refers to the ability to access systems and services
within a given boundary or organization.
• The cloud platform is implemented in a secure environment protected by powerful firewalls,
under the supervision of an organization's IT department. The private cloud gives greater
flexibility and control over cloud resources.


Advantages of the Private Cloud Model:
• Better control: you are the sole owner of the property. You gain complete command over
service integration, IT operations, policies, and user behaviour.
• Data security and privacy: it is suitable for storing corporate information to which only
authorized staff have access. By segmenting resources within the same infrastructure,
improved access control and security can be achieved.
• Supports legacy systems: this approach is designed to work with legacy systems that are
unable to access the public cloud.
• Customization: unlike a public cloud deployment, a private cloud allows a company to
tailor its solution to meet its specific needs.

Disadvantages of the Private Cloud Model:
• Less scalable: private clouds scale only within a certain range, as there are fewer clients.
• Costly: private clouds are more expensive, as they provide personalized facilities.

3. Hybrid Cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host an app in a
safe environment while taking advantage of the public cloud's cost savings. Organizations
can move data and applications between different clouds using a combination of two or more
cloud deployment models, depending on their needs.

Advantages of the Hybrid Cloud Model:
• Flexibility and control: businesses with more flexibility can design personalized solutions
that meet their particular needs.
• Cost: because public clouds provide scalability, you'll only be responsible for paying for
extra capacity if you require it.
• Security: because data is properly separated, the chances of data theft by attackers are
considerably reduced.

Disadvantages of the Hybrid Cloud Model:
• Difficult to manage: hybrid clouds are complex to manage, as they combine both public
and private clouds.
• Slow data transmission: data transmission in the hybrid cloud takes place through the
public cloud, so latency occurs.


Cloud Service Models


A cloud service model represents a specific, pre-packaged combination of IT resources offered by
a cloud provider.
Three common cloud service models have become widely established and formalized:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)

Infrastructure as a Service (IaaS)


1. It provides access to virtualized cloud infrastructure comprising compute, storage, and
network resources.
2. The general purpose of an IaaS environment is to provide cloud consumers with a high level
of control over and responsibility for its configuration and utilization.
3. The IT resources provided by IaaS are generally not pre-configured, placing the
administrative responsibility directly upon the cloud consumer.
4. This model is therefore used by cloud consumers that require a high level of control over the
cloud-based environment they intend to create.
Examples of IaaS services are:
1. Elastic Compute Cloud (EC2) from Amazon Web Services (AWS)
2. Virtual machines from Microsoft Azure
3. Google Compute Engine (GCE) from Google Cloud Platform

Platform as a Service (PaaS)


1. The PaaS delivery model represents a pre-defined "ready-to-use" environment, typically
comprised of already deployed and configured IT resources.
2. It allows developers to build applications and services over the internet.
3. A PaaS provider hosts the hardware and software on its own infrastructure. As a result,
PaaS frees users from having to install in-house hardware and software to develop or run a
new application.
4. Thus, the development and deployment of the application take place independently of the
hardware.
5. Runtime, platform, and tools are provided by the cloud service provider.
6. No management effort is required from the consumer.
7. There is less control over the infrastructure.
8. PaaS provides control over the application code.
Examples of PaaS services are:
1. App Cloud from Salesforce.com
2. Google App Engine (GAE) from Google Cloud Platform
3. OpenShift from Red Hat Inc.
4. Oracle cloud platform from Oracle

Software as a Service(SaaS)

1. The SaaS delivery model is typically used to make a reusable cloud service widely available
(often commercially) to a range of cloud consumers.
2. Software as a Service is a way of delivering services and applications over the internet.
Instead of installing and maintaining software, we simply access it via the internet, freeing
ourselves from complex software and hardware management. It removes the need to install
and run applications on our own computers or in data centers, eliminating the expense of
hardware and software maintenance.
3. Readily available software applications.
4. Accessed through a web browser.
5. No visibility into the backend.
6. Billed based on subscription.
Examples of Software as a Service:
1. Microsoft Office 365
2. Google Apps
3. Microsoft Teams


Comparison of cloud service models.

Serverless services
1. It is most suitable for handling asynchronous events.
2. A serverless service remains inactive until a user invokes it.
3. It allows you to focus on productive business requirements rather than IT infrastructure setup
and management.
4. Capacity provisioning and scalability are managed by the cloud service provider, which
reduces Total Cost of Ownership.
5. It gives the architecture high availability and fault tolerance. It is also known as Function as a
Service.

Serverless service in web applications


1. User clicks for weather update
2. Serverless function gets invoked
3. Retrieves data
4. Sends the response to user
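The four steps above can be sketched as a single function-as-a-service style handler: the platform invokes the function only when the event arrives, the function retrieves data, and it returns the response. The event shape, the `FAKE_WEATHER` lookup table, and the response format are all hypothetical; a real handler would call a weather API or database.

```python
# A minimal sketch of a serverless handler. The event shape and data
# source are invented; real handlers query an API or database.

FAKE_WEATHER = {"London": "12 C, cloudy", "Delhi": "31 C, sunny"}

def handler(event):
    """Invoked by the platform only when a user clicks for an update."""
    city = event.get("city")
    report = FAKE_WEATHER.get(city)            # step 3: retrieve data
    if report is None:
        return {"status": 404, "body": f"no data for {city}"}
    return {"status": 200, "body": report}     # step 4: respond to user

print(handler({"city": "Delhi"}))
```

Between invocations no compute is running, which is why the consumer is billed only for the time each invocation takes.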

Major Cloud service providers.


Virtualization
1. Virtualization is the process of converting a physical IT resource into a virtual IT resource.
2. The first step in creating a new virtual server through virtualization software is the allocation
of physical IT resources, followed by the installation of an operating system.
3. Virtual servers use their own guest operating systems, which are independent of the operating
system in which they were created.
4. Both the guest operating system and the application software running on the virtual server are
unaware of the virtualization process, meaning these virtualized IT resources are installed and
executed as if they were running on a separate physical server.
5. A hypervisor manages resources for the virtual machines.
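Point 5 can be sketched as a toy "hypervisor" object that hands out slices of the host's physical memory to virtual machines and refuses requests it cannot satisfy. The capacity figures are invented for illustration; real hypervisors manage CPU, memory, storage, and I/O, and support overcommitment.

```python
# A minimal sketch of hypervisor-style resource allocation. Capacity
# figures are invented; real hypervisors manage far more than memory.

class Hypervisor:
    def __init__(self, total_memory_gb):
        self.free = total_memory_gb   # physical memory still unallocated
        self.vms = {}

    def create_vm(self, name, memory_gb):
        """Allocate physical memory to a new VM, or refuse if none is left."""
        if memory_gb > self.free:
            return False              # host cannot satisfy the request
        self.free -= memory_gb
        self.vms[name] = memory_gb
        return True

hv = Hypervisor(total_memory_gb=16)
print(hv.create_vm("vm1", 8))    # fits
print(hv.create_vm("vm2", 4))    # fits
print(hv.create_vm("vm3", 8))    # only 4 GB left -> refused
```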

Fig 2: Virtualization
Types of Virtualizations

Benefits of Virtualized Infrastructure


• Cost savings
• Agility
• Reduced downtime


Explore the cloud service providers and services offered by them

1. Amazon Web Services


AWS is a subsidiary of Amazon that provides on-demand cloud computing services to
its users. Given below is the list of services offered by AWS:

Storage: - Amazon Simple Storage Service (S3) provides scalable data storage with backup and
replication.

Compute: - Amazon Elastic Compute Cloud (EC2) provides virtual servers, or instances, for
computing. It is auto-scalable as per the requirement.

Database: - Amazon Relational Database Service provides a fully managed database service that
includes Oracle, SQL, MySQL, etc

Developer Tools: - AWS CodeCommit provides fully managed private Git repositories to store
code and manage versions. Apart from these services, AWS provides AWS CodePipeline, AWS
CodeBuild, AWS CodeDeploy, and AWS Cloud9 to support development and deployment.

Machine Learning: - Amazon SageMaker provides services to quickly build, train, and deploy
models at scale. AWS offers ML services for speech recognition, language translation, chatbots,
and many other scenarios, with high speed and scalability.

2. Azure
Some of the popular services of Azure cloud provider are:
Azure Active Directory: - Azure Active Directory (AD) is one of the most popular cloud computing
services from Microsoft Azure. Belonging to the Identity section, it is a universal identity platform
to ensure the management and security of identities.

Azure CDN: - Its server is designed in a way that it can integrate a lot of storage space, web apps,
and Azure cloud services. This is why Azure CDN is used to deliver content securely all across the
world.

Azure Data Factory: - Azure Data Factory ingests data from several sources to automate data
transmission and movement. Azure Data Factory creates and monitors workflows.

Azure SQL: - Azure SQL database adds a readily available data storage facility with enhanced
performance for enterprises.

Azure Backup: - Azure Backup provides simple data-protection tools for Azure services, to keep
your data protected from ransomware or loss of any kind.


Containers
1. Containers provide a lightweight runtime environment for application deployment.
2. A container bundles an application with all its required dependencies, along with its
configuration, in a single image.
3. Containers use the kernel of the host operating system to provide the deployment
environment in which applications run.

Benefits of Containers
1. Rapid scalability: Containers boot up in a fraction of a second, so they provide rapid
scalability.
2. Uses fewer resources: A container does not carry its own guest operating system. As a result,
it consumes far fewer resources such as storage and memory.
3. Greater efficiency: Greater efficiency is achieved because a container requires fewer resources
to spin up.
4. Increased portability: Containers are highly portable and can run on any infrastructure.
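Benefit 2 above ("uses fewer resources") can be illustrated with back-of-envelope arithmetic. The per-VM and per-container overhead figures below are illustrative assumptions, not measurements.

```python
# Back-of-envelope comparison: memory needed to run N copies of a 0.5 GB app
# on VMs versus containers. Overhead figures are illustrative assumptions.
n_apps = 10
app_gb = 0.5
guest_os_gb = 2.0            # assumed guest OS footprint per VM
container_overhead_gb = 0.05 # assumed per-container overhead (kernel is shared)

vm_total = n_apps * (app_gb + guest_os_gb)
container_total = n_apps * (app_gb + container_overhead_gb)
print(f"VMs: {vm_total} GB, containers: {container_total} GB")
```

Because every VM repeats the guest OS footprint while containers share the host kernel, the gap grows with the number of deployed applications.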

Containers and Virtual Machines

Containerization using Docker


• Open-source Docker Engine.
• Provide both Linux and Windows based containers
• Containerized applications run anywhere consistently
• Docker CLI, API
• Docker Image
• Docker File


Container Orchestration
Container orchestration is a process for managing the deployment, integration, scaling, and lifecycles of
containerized software and applications in complex, dynamic environments.
• Automates the container management.
• Integrates well with CI/CD workflows
• Manages the container availability
• Manages Load balancing and routing
• Enables secure interaction among containers
• Examples: Docker Swarm, Kubernetes, Apache Mesos, Amazon EKS, GKE
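One core orchestration task, deciding which node each container replica runs on, can be sketched as a least-loaded placement loop. Real orchestrators such as Kubernetes apply far richer scheduling constraints; the function and node names below are invented for illustration.

```python
# Minimal sketch of container scheduling: place each replica on the node
# with the most free memory. Names are illustrative, not a real API.

def schedule(replicas, nodes):
    """Assign each (replica, memory-needed-MB) pair to the least-loaded node."""
    placement = {}
    for replica, need_mb in replicas:
        node = max(nodes, key=nodes.get)   # node with the most free memory
        if nodes[node] < need_mb:
            raise RuntimeError(f"no node can fit {replica}")
        nodes[node] -= need_mb             # reserve the memory on that node
        placement[replica] = node
    return placement

nodes = {"node-a": 4096, "node-b": 2048}   # free memory in MB
placement = schedule([("web-1", 1024), ("web-2", 1024), ("web-3", 1024)], nodes)
print(placement)
```

An orchestrator runs loops like this continuously, rescheduling replicas when nodes fail, which is how it "manages the container availability" listed above.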

Cloud Native Application Development

What is Cloud Native?

Cloud native is the software approach of building, deploying, and managing modern applications in cloud
computing environments.

What are cloud-native applications?

Cloud-native applications are software programs that consist of multiple small, interdependent services
called microservices.

By using the cloud-native approach, software developers break the functionalities into smaller
microservices. This makes cloud-native applications more agile as these microservices work independently
and take minimal computing resources to run.

What is cloud-native application development?

Cloud-native application development describes how and where developers build and deploy cloud-native
applications.

Developers adopt specific software practices to decrease the software delivery timeline and deliver accurate
features that meet changing user expectations.

Some common cloud-native development practices are:

Continuous integration

Continuous integration (CI) is a software practice in which developers integrate changes into a shared code
base frequently and without errors. Small, frequent changes make development more efficient because you
can identify and troubleshoot issues faster.

Continuous delivery

Continuous delivery (CD) is a software practice that supports cloud-native development. With CD,
development teams ensure that the microservices are always ready to be deployed to the cloud.


DevOps

DevOps is a software culture that improves the collaboration of development and operations teams. DevOps
practices allow organizations to speed up the software development lifecycle. Developers and operation
engineers use DevOps tools to automate cloud-native development.

Serverless

Serverless computing is a cloud-native model where the cloud provider fully manages the underlying server
infrastructure. Developers use serverless computing because the cloud infrastructure automatically scales
and configures to meet application requirements. Developers only pay for the resources the application uses.
The serverless architecture automatically removes compute resources when the app stops running.
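The pay-only-for-what-runs model described above comes down to simple arithmetic. The invocation count, duration, memory size, and per-GB-second price below are illustrative example figures, not any particular provider's pricing.

```python
# Illustrative pay-per-use arithmetic for a serverless function.
# All figures are made-up examples, not real provider pricing.
invocations = 2_000_000       # function calls in the billing period
avg_duration_s = 0.12         # average execution time per call
memory_gb = 0.5               # memory configured for the function
price_per_gb_second = 0.0000166667  # example rate

gb_seconds = invocations * avg_duration_s * memory_gb
cost = gb_seconds * price_per_gb_second
print(f"{gb_seconds:,.0f} GB-s -> ${cost:.2f}")
```

The key point is that an idle function costs nothing: with zero invocations, `gb_seconds` and the bill are both zero, unlike a server that bills while it sits idle.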

Microservices

Microservices are small, independent software components that collectively perform as complete cloud-
native software. Each microservice focuses on a small, specific problem. Microservices are loosely coupled,
which means that they are independent software components that communicate with each other. Developers
make changes to the application by working on individual microservices. That way, the application
continues to function even if one microservice fails.
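The failure-isolation property above can be sketched in a few lines: each microservice is an independent callable, and the caller degrades gracefully when one fails. The service names are hypothetical.

```python
# Sketch of loose coupling: a page is assembled from independent services,
# and a failure in one does not take down the others. Names are invented.

def profile(user):
    return {"name": user}

def cart(user):
    return {"items": 3}

def recommendations(user):
    raise TimeoutError("recommendation service is down")

page = {}
for name, service in [("profile", profile), ("cart", cart),
                      ("recommendations", recommendations)]:
    try:
        page[name] = service("alice")
    except Exception:
        page[name] = None   # degrade gracefully instead of failing the page

print(page)
```

In a real system the calls would go over the network (typically HTTP or a message queue), but the design choice is the same: isolate each call so one failing service only blanks out its own section of the result.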

Containers

Containers are the smallest compute unit in a cloud-native application. They are software components that
pack the microservice code and other required files in cloud-native systems. By containerizing the
microservices, cloud-native applications run independently of the underlying operating system and
hardware. This means that software developers can deploy cloud-native applications on premises, on cloud
infrastructure, or on hybrid clouds.

Administrative console and Cloud SDK


The Admin console is where administrators manage Google services for people in an organization.
The Cloud Administration Console supports two administrative roles:
• Super Admin
• Help Desk Admin

Super Admin

Super Admins have unrestricted privileges in the Cloud Administration Console, including the ability to add
or edit other administrators. Super Admins are responsible for setting up SecurID for the first time, then
maintaining, updating, and troubleshooting the deployment as necessary.

Help Desk Admin

Help Desk Admins assist users who authenticate with the Cloud Authentication Service.

Cloud SDK provides language-specific Cloud Client Libraries supporting each language’s natural
conventions and styles. This makes it easier for you to interact with Google Cloud APIs in your language
of choice.


Google Cloud SDK is a set of tools used to manage applications and resources hosted on the
Google Cloud Platform.

Cloud Billing
A Cloud Billing account defines who pays for a given set of Google Cloud resources. To use
Google Cloud services, you must have a valid Cloud Billing account, and must link it to your
Google Cloud projects. Your project's Google Cloud usage is charged to the linked Cloud Billing
account.

You must have a valid Cloud Billing account even if you are in your free trial period or if you only
use Google Cloud resources that are covered by the Google Cloud Free Tier.

SLA

A Service Level Agreement (SLA) is a performance agreement negotiated between the cloud
services provider and the client. Earlier in cloud computing, all Service Level Agreements were
negotiated between a client and the service provider. Nowadays, with the rise of large utility-
like cloud computing providers, most Service Level Agreements are standardized until a client
becomes a large consumer of cloud services. Service Level Agreements are also defined at different
levels, which are mentioned below:
• Customer-based SLA
• Service-based SLA
• Multilevel SLA

Some Service Level Agreements are enforceable as contracts, but most are agreements that are
more in line with an operating level agreement (OLA) and may not be constrained by law. It
is advisable to have a lawyer review the documents before making any major settlement with a cloud
service provider. Service Level Agreements usually specify certain parameters, which are mentioned below:

o Availability of the service (uptime)
o Latency or the response time
o Reliability of service components
o Accountability of each party
o Warranties

If a cloud service provider fails to meet the specified minimum targets, the provider has
to pay a penalty to the cloud service consumer as per the agreement. In this way, Service Level
Agreements are like insurance policies, under which the corporation has to pay as per the agreement
if an accident occurs.
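The uptime targets and penalties described above can be made concrete with a short sketch. The 99.9% target and credit tiers below are illustrative assumptions, loosely modelled on common cloud SLAs rather than taken from any specific agreement.

```python
# Sketch of how an uptime SLA target translates into allowed downtime and a
# penalty (service credit). Targets and credit tiers are illustrative.

def allowed_downtime_minutes(uptime_target, days=30):
    """Minutes of downtime permitted in a billing period of `days` days."""
    return days * 24 * 60 * (1 - uptime_target)

def service_credit(actual_uptime):
    """Fraction of the bill refunded to the consumer. Example tiers only."""
    if actual_uptime >= 0.999:
        return 0.0    # target met, no penalty
    if actual_uptime >= 0.99:
        return 0.10   # minor breach: 10% credit
    return 0.25       # major breach: 25% credit

print(round(allowed_downtime_minutes(0.999), 1))  # minutes/month at 99.9%
print(service_credit(0.995))                      # credit for 99.5% uptime
```

This shows why the number of nines matters: a 99.9% monthly target allows only about 43 minutes of downtime, and each further nine shrinks that budget by a factor of ten.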


BIG DATA
What is Big Data
Big data is a collection of data [both structured and unstructured] that is huge in volume, yet growing
exponentially with time.
Examples:-
1. Statistics show that 500+ terabytes of new data are ingested into the databases of the social media
site Facebook every day. This data is mainly generated from photo and video uploads,
message exchanges, comments, etc.
2. A single jet engine can generate 10+ terabytes of data in 30 minutes of flight time. With many
thousand flights per day, data generation reaches many petabytes.
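The rough arithmetic behind example 2 can be written out directly. The number of daily flights below is an illustrative assumption, not a sourced figure.

```python
# Rough arithmetic for the jet-engine example: 10 TB of sensor data per
# engine per flight. The flight count is an illustrative assumption.
tb_per_engine_per_flight = 10   # 10+ TB in 30 minutes of flight time
flights_per_day = 5_000         # assumed number of daily flights

daily_tb = tb_per_engine_per_flight * flights_per_day
daily_pb = daily_tb / 1000
print(f"{daily_tb:,} TB = {daily_pb:.0f} PB per day")
```

Even with these conservative assumptions the daily volume lands in the tens of petabytes, which is why such data cannot be handled by traditional single-machine systems.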

Four Vs of Big Data/ Characteristics of Big Data

Big data can be described by the following characteristics:

• Volume
• Variety
• Velocity
• Variability

(i) Volume – The name Big Data itself refers to an enormous size. The size of data plays a very
crucial role in determining its value.

(ii) Variety – Variety refers to heterogeneous sources and the nature of data, both structured and
unstructured.

(iii) Velocity – The term ‘velocity’ refers to the speed of generation of data. How fast the data is generated
and processed to meet the demands, determines real potential in the data.

(iv) Variability – This refers to the inconsistency which can be shown by the data at times, thus hampering
the process of being able to handle and manage the data effectively.

Sources of Data
• Data collected from social media sites like Facebook, WhatsApp, Twitter, YouTube,
Instagram, etc.
• Sensors placed in various parts of a city that gather data on temperature, humidity, etc.
Cameras placed in sensitive areas like airports, railway stations, and shopping malls also create
a lot of data.
• IoT appliances: Electronic devices that are connected to the internet create data for their smart
functionality; examples are a smart TV, smart washing machine, smart coffee machine, smart
AC, etc. This is machine-generated data created by sensors kept in various devices.


• Customer feedback on the products or services of various companies on their websites creates
data. For example, retail commercial sites like Amazon, Walmart, Flipkart, and Myntra gather
customer feedback on the quality of their products and delivery time.
• E-commerce: In e-commerce transactions, business transactions, banking, and the stock market,
lots of records stored are considered one of the sources of big data. Payments through credit
card, debit cards, or other electronic ways, all are kept recorded as data.
• Global Positioning System (GPS): GPS in the vehicle helps in monitoring the movement of the
vehicle to shorten the path to a destination to cut fuel, and time consumption. This system
creates huge data on vehicle position and movement.
• Transactional Data: Transactional data, as the name implies, is information obtained through
online and offline transactions at various points of sale. The data contains important information
about transactions, such as the date and time of the transaction, the location where it took place
etc.

Role of Big Data in AI & ML

1. Big Data helps machine learning by providing a variety of data, so machines can learn
from more samples of training data.
2. In this way, businesses can accomplish their goals and get the benefit of big data
using ML algorithms.
3. The larger the amount of data that Artificial Intelligence systems can access, the more the
machines can learn, and therefore the more accurate and efficient their results will be.
4. As AI becomes smarter, less human intervention is required when it comes to process
control and machine monitoring.

Artificial Intelligence applied to Big Data provides the following benefits:


• Deviation detection: AI can analyse the data provided by Big Data to detect unusual
occurrences in it.
• Probability of future outcome: AI can use a known condition with an X probability of
influencing the future outcome to determine the probability of that outcome.
• Pattern recognition: Detect patterns from large data structures that humans would be
unable to recognize.
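The deviation-detection benefit above can be sketched with a minimal statistical check: flag readings that sit far from the mean. Real systems use far richer models than a two-standard-deviation rule; the sensor readings below are made-up data.

```python
# Minimal sketch of deviation detection: flag readings more than two
# standard deviations from the mean. Readings are made-up example data.
import statistics

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 35.2, 20.2, 20.1]
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
anomalies = [x for x in readings if abs(x - mean) > 2 * stdev]
print(anomalies)
```

On big data the same idea is applied at scale and continuously, so unusual occurrences surface automatically rather than through manual inspection.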


Important questions:

1. What is machine learning? List the types of machine learning.
2. What are supervised machine learning algorithms?
3. What are unsupervised machine learning algorithms?
4. What is the difference between supervised and unsupervised algorithms?
5. What are the challenges associated with machine learning algorithms?
6. What are the applications of machine learning?
7. What is the cloud? What are the three deployment models of the cloud?
8. What are the services provided by the cloud?
9. What is big data? What are the sources of big data?
10. What are the four Vs of big data?

