
UNIT- 1

Introduction to Cloud Computing


Cloud Computing is a technology that allows you to store and access data and
applications over the internet instead of using your computer’s hard drive or a
local server.
In cloud computing, you can store different types of data such as files, images,
videos, and documents on remote servers, and access them anytime from any
device connected to the internet.
 Infrastructure: Cloud computing depends on remote network servers
hosted on the Internet to store, manage, and process data.
 On-Demand Access: Users can access cloud services and resources on
demand, scaling up or down without having to invest in physical hardware.
 Benefits: Cloud computing offers cost savings, scalability, reliability, and accessibility. It reduces capital expenditure and improves efficiency.
Architecture Of Cloud Computing
Cloud computing architecture refers to the components and sub-components
required for cloud computing. These components typically refer to:
1. Front end ( Fat client, Thin client)
2. Back-end platforms ( Servers, Storage )
3. Cloud-based delivery and a network ( Internet, Intranet, Intercloud )
Cloud Computing Architecture
1. Front End ( User Interaction Enhancement )
The user-facing side of cloud computing consists of two kinds of clients. Thin clients use web browsers, which makes access portable and lightweight, while fat clients are full-featured applications that offer a richer user experience.
2. Back-end Platforms ( Cloud Computing Engine )
The core of cloud computing lies in the back-end platforms, which consist of servers for processing and storage. Servers manage the application logic, while storage systems provide effective data handling. Together, these back-end platforms supply the processing power and capacity to manage and store data behind the cloud.
3. Cloud-Based Delivery and Network
On-demand access to compute resources is provided over the Internet, an intranet, or an intercloud. The Internet offers global accessibility, an intranet supports internal communication of services within the organization, and an intercloud enables interoperability across different cloud providers. This network connectivity is an essential component of cloud computing architecture, guaranteeing easy access and data transfer.
Types of Cloud Computing Services

The following are the types of Cloud Computing:


1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
4. Function as a Service (FaaS)
Types of Cloud Computing
1. Infrastructure as a Service ( IaaS )
Infrastructure as a Service (IaaS) is a type of cloud computing that gives people
access to IT tools like virtual computers, storage, and networks through the
internet. You don’t need to buy or manage physical hardware. Instead, you pay
only for what you use.
Here are some key benefits of using IaaS:
 Flexibility and Control: IaaS provides virtualized computing resources such as VMs, storage, and networks, giving users control over the operating system and applications.
 Reduced Hardware Expenses: IaaS saves businesses money by eliminating the need to invest in physical infrastructure, making it cost-effective.
 Scalability of Resources: Hardware resources can be scaled up or down on demand, balancing optimal performance with cost efficiency.
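For illustration, here is a minimal sketch of provisioning an IaaS virtual machine with the AWS SDK for Python (boto3). The region, image ID, and instance type are placeholder assumptions, not values taken from this document.

import boto3  # AWS SDK for Python

# Create an EC2 client; credentials and region are assumed to be configured.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one virtual machine (an IaaS resource). The image ID is a placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t3.micro",          # small, pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])

You pay only while the instance runs, and you can terminate it just as programmatically when demand drops.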
2. Platform as a Service ( PaaS )
Platform as a Service (PaaS) is a cloud computing model where a third-party
provider offers the software and hardware tools needed to develop, test, and run
applications. This allows users to focus on building their applications without
worrying about managing servers or infrastructure.
For example, AWS Elastic Beanstalk is a PaaS offered by Amazon Web Services
that helps developers quickly deploy and manage applications while AWS takes
care of the needed resources like servers, load balancing, and scaling.
Here are some key benefits of using PaaS:
 Simplified Development: PaaS abstracts away the underlying infrastructure, so developers can focus entirely on application logic (code) while the platform manages the background operations.
 Efficiency and Productivity: PaaS reduces the complexity of infrastructure management and streamlines the development process, so updates reach the market faster.
 Automated Scaling: PaaS manages resource scaling automatically, keeping the application's workload handling efficient.
3. Software as a Service (SaaS)
Software as a Service (SaaS) is a way of using software over the internet instead
of installing it on your computer. The software is hosted by a company, and you
can use it just by logging in through a web browser. You don’t need to worry about updates, maintenance, or storage; the provider takes care of all that.
A common example is Google Docs. You can write and share documents online
without downloading any software.
Here are some key benefits of using SaaS:
 Collaboration and Accessibility: SaaS lets users access applications without any local installation. The software is fully managed by the provider and delivered as a service over the internet, encouraging effortless collaboration and ease of access.
 Automatic Updates: SaaS providers handle software maintenance and push updates automatically, so users always have the latest features and security patches.
 Cost Efficiency: SaaS is a cost-effective solution because it reduces IT support overhead and eliminates the need for individual software licenses.
4. Function as a Service (FaaS)
Function as a service (FaaS) is a cloud-computing service that allows customers
to run code in response to events, without managing the complex infrastructure.
You just write the code, upload it and the cloud provider runs it only when it's
needed. You pay only for the time your code runs.
For example, with AWS Lambda, you can write a function that resizes images
whenever someone uploads a photo to your website. You don’t need to keep a server running all the time; AWS runs your function only when a photo is uploaded.
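As a minimal sketch of that idea (illustrative Python, not code from this document), a Lambda handler triggered by an S3 upload could look like the following; the resize step is omitted and only the trigger handling is shown.

import json

def lambda_handler(event, context):
    # AWS Lambda calls this function when the configured event occurs,
    # for example an object being uploaded to an S3 bucket.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # A real handler would fetch the object and resize it here; this sketch
    # only reports what triggered the function.
    print(f"New upload: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": key})}

The function exists only while it runs; you are billed for that execution time and nothing more.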
Here are some key benefits of using FaaS:
 Event-Driven Execution: The provider maintains the servers and infrastructure, so users do not have to worry about them. FaaS lets developers run code as a response to events.
 Cost Efficiency: FaaS is cost-efficient because it follows a "pay only for what runs" principle, billing only for the computing resources actually used.
 Scalability and Agility: Serverless architectures scale effortlessly to handle workloads, promoting agility in development and deployment.
Cloud Deployment Models
The following are the Cloud Deployment Models:
1. Private Cloud
It provides enhanced protection and customization, with cloud resources dedicated to an organization's specific requirements. It is ideal for companies with strict security and compliance needs.
2. Public Cloud
It offers pay-as-you-go scalability and accessibility of cloud resources shared among many users. It is cost-effective while still providing the services enterprises need.
3. Hybrid Cloud
It combines elements of both private and public clouds, allowing data and applications to move seamlessly between environments. It offers flexibility in placing resources, for example keeping sensitive data in the private cloud while running large, scalable applications in the public cloud.
Top Leading Cloud Computing Companies
The following tables show the top leading cloud computing companies along with
key details about their cloud services:

Company | Cloud Service Name | Key Offerings
1. Amazon | AWS (Amazon Web Services) | Compute, Storage, AI/ML, Databases, Networking
2. Microsoft | Azure | Cloud computing, AI, Analytics, Hybrid Cloud
3. Google | Google Cloud Platform (GCP) | AI/ML, Big Data, Kubernetes, Cloud Storage
4. Alibaba | Alibaba Cloud | IaaS, AI, Big Data, Cloud Security, CDN
5. Oracle | Oracle Cloud | Enterprise Cloud, Databases, SaaS, PaaS
6. IBM | IBM Cloud | AI, Quantum Computing, Hybrid Cloud, Security
7. Salesforce | Salesforce Cloud | CRM, SaaS, AI, Analytics
8. Tencent | Tencent Cloud | AI, Gaming Cloud, IoT, Big Data

Cloud Security
Cloud security refers to the measures and practices designed to protect data, applications, and infrastructure in cloud computing environments. The following are some of the best practices for cloud security:
1. Data Encryption
Encryption is essential for securing data stored in the cloud. It ensures that data
remains unreadable to unauthorized users even if it is intercepted.
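As a minimal, illustrative sketch (not from this document), symmetric encryption of data before it is stored in the cloud could look like the following Python, assuming the third-party cryptography package is installed.

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# Generate a symmetric key and keep it safe (for example in a key management service).
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"customer record: account=1234, balance=500"
ciphertext = fernet.encrypt(plaintext)   # upload this, not the plaintext
recovered = fernet.decrypt(ciphertext)   # only possible with the key
assert recovered == plaintext

Anyone who intercepts the stored ciphertext but lacks the key sees only unreadable bytes.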
2. Access Control
Implementing strict access controls and authentication mechanisms helps ensure
that only authorized users can access sensitive data and resources in the cloud.
3. Multi-Factor Authentication (MFA)
MFA adds an extra layer of security by requiring users to provide multiple forms
of verification, such as passwords, biometrics, or security tokens, before gaining
access to cloud services.
Use Cases Of Cloud Computing
Cloud computing has many use cases across industries and applications:
1. Scalable Infrastructure
Infrastructure as a Service (IaaS) enables organizations to scale computing
resources based on demand without investing in physical hardware.
2. Efficient Application Development
Platform as a Service (PaaS) simplifies application development, offering tools
and environments for building, deploying, and managing applications.
3. Streamlined Software Access
Software as a Service (SaaS) provides subscription-based access to software
applications over the internet, reducing the need for local installation and
maintenance.
4. Data Analytics
Cloud-based platforms facilitate big data analytics, allowing organizations to
process and derive insights from large datasets efficiently.
5. Disaster Recovery
Cloud-based disaster recovery solutions offer cost-effective data replication and
backup, ensuring quick recovery in case of system failures or disasters.
Distributed Computing System Models
Distributed computing is a model in which processing and data storage are spread across multiple devices or systems rather than handled by a single central device. This section covers distributed computing system models.

Important Topics for Distributed Computing System Models


 Types of Distributed Computing System Models
o Physical Model

o Architectural Model

o Fundamental Model

Types of Distributed Computing System Models


1. Physical Model
A physical model represents the underlying hardware elements of a distributed
system. It encompasses the hardware composition of a distributed system in
terms of computers and other devices and their interconnections. It is primarily
used to design, manage, implement, and determine the performance of a
distributed system.
A physical model majorly consists of the following components:
1. Nodes
Nodes are the end devices that can process data, execute tasks, and
communicate with the other nodes. These end devices are generally the
computers at the user end or can be servers, workstations, etc.
 Nodes provision the distributed system with an interface in the
presentation layer that enables the user to interact with other back-end
devices, or nodes, that can be used for storage and database services,
processing, web browsing, etc.
 Each node has an operating system, an execution environment, and whatever middleware it needs to facilitate communication and other vital tasks.
2. Links
Links are the communication channels between different nodes and intermediate
devices. These may be wired or wireless. Wired links or physical media are
implemented using copper wires, fiber optic cables, etc. The choice of the
medium depends on the environmental conditions and the requirements.
Generally, physical links are required for high-performance and real-time
computing. Different connection types that can be implemented are as follows:
 Point-to-point links: Establish a connection and allow data transfer
between only two nodes.
 Broadcast links: Enable a single node to transmit data to multiple nodes simultaneously.
 Multi-access links: Multiple nodes share the same communication channel to transfer data; protocols are required to avoid interference during transmission.
3. Middleware
This is the software installed and executed on the nodes. By running
middleware on each node, the distributed computing system achieves a
decentralised control and decision-making. It handles various tasks like
communication with other nodes, resource management, fault tolerance,
synchronisation of different nodes and security to prevent malicious and
unauthorised access.
4. Network Topology
This defines the arrangement of nodes and links in the distributed computing
system. The most common network topologies that are implemented are bus,
star, mesh, ring, or hybrid. The topology is chosen based on the exact use case and its requirements.
5. Communication Protocols
Communication protocols are the agreed rules and procedures for transmitting data over the links. Examples include TCP, UDP, HTTPS, and MQTT. They allow the nodes to communicate with one another and interpret the data.
2. Architectural Model
The architectural model of a distributed computing system is the overall design and structure of the system: how its different components are organised to interact with each other and provide the desired functionality. It gives an overview of how development, deployment, and operations will take place. A good architectural model is required for efficient cost usage and highly scalable applications.
The key aspects of architectural model are:
1. Client-Server model
It is a centralised approach in which clients initiate requests for services and servers respond by providing those services. It works on the request-response model: the client sends a request to the server, and the server processes it and responds to the client accordingly.
 It is typically implemented over TCP/IP at the transport layer, with application-layer protocols such as HTTP on top.
 It is mainly used in web services, cloud computing, database management systems, etc. A minimal sketch of the pattern follows this list.
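The following is an illustrative request-response sketch using Python's standard socket module (an assumption for demonstration, not code from this document): the server answers each request, and the client sends one request and prints the reply.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9090   # hypothetical address for a local demo

def server():
    # The server waits for client requests and responds to each one.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"processed: {request}".encode())

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)   # give the server a moment to start listening

# The client initiates the request and waits for the response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"GET /status")
    print(cli.recv(1024).decode())   # -> processed: GET /status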
2. Peer-to-peer model
It is a decentralised approach in which all the distributed computing nodes,
known as peers, are all the same in terms of computing capabilities and can both
request as well as provide services to other peers. It is a highly scalable model
because the peers can join and leave the system dynamically, which makes it an
ad-hoc form of network.
 The resources are distributed and the peers need to look out for the
required resources as and when required.
 The communication is directly done amongst the peers without any
intermediaries according to some set rules and procedures defined in the
P2P networks.
 The best example of this type of computing is BitTorrent.
3. Layered model
It involves organising the system into multiple layers, where each layer will
provision a specific service. Each layer communicates with the adjacent layers
using certain well-defined protocols without affecting the integrity of the system.
A hierarchical structure is obtained where each layer abstracts the underlying
complexity of lower layers.
4. Micro-services model
In this model, a complex application or task is decomposed into multiple independent services that run on different servers. Each service performs only a single function and is focused on a specific business capability.
This makes the overall system more maintainable, scalable and easier to
understand. Services can be independently developed, deployed and scaled
without affecting the ongoing services.
3. Fundamental Model
The fundamental model in a distributed computing system is a broad conceptual
framework that helps in understanding the key aspects of the distributed
systems. It is concerned with a more formal description of the properties that are common to all architectural models. It represents the essential
components that are required to understand a distributed system’s behaviour.
Three fundamental models are as follows:
1. Interaction Model
Distributed computing systems are full of many processes interacting with each
other in highly complex ways. Interaction model provides a framework to
understand the mechanisms and patterns that are used for communication and
coordination among various processes. Different components that are important
in this model are -
 Message Passing - Messages exchanged between computing nodes may carry data, instructions, a service request, or synchronisation information. Passing may be synchronous or asynchronous depending on the types of tasks and processes.
 Publish/Subscribe Systems - Also known as pub/sub systems. A publishing process publishes a message on a topic, and the processes subscribed to that topic pick it up and act on it. This pattern is especially important in event-driven architectures; a small in-memory sketch follows.
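Below is a minimal in-process publish/subscribe sketch in Python, intended only to illustrate the pattern; the topic name and subscriber callbacks are hypothetical.

from collections import defaultdict

class PubSub:
    # Tiny in-process broker: each topic maps to a list of subscriber callbacks.
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = PubSub()
broker.subscribe("orders", lambda m: print("billing saw:", m))
broker.subscribe("orders", lambda m: print("shipping saw:", m))
broker.publish("orders", {"id": 42, "item": "book"})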
Remote Procedure Call (RPC)
A closely related interaction mechanism is the remote procedure call: a communication paradigm that lets a process invoke a method on a remote process as if it were a local procedure call. The client
process makes a procedure call using RPC and then the message is passed to the
required server process using communication protocols. These message passing
protocols are abstracted and the result once obtained from the server process, is
sent back to the client process to continue execution.
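As a minimal illustration (using Python's standard xmlrpc modules, chosen here only for demonstration), a remote call can be written to look just like a local one.

from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client
import threading

def add(x, y):
    # Runs in the server process; the client invokes it as if it were local.
    return x + y

server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
print(proxy.add(2, 3))   # the call is marshalled, sent to the server, and the result returned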

2. Failure Model
This model addresses the faults and failures that occur in the distributed
computing system. It provides a framework to identify and rectify the faults that
occur or may occur in the system. Fault tolerance mechanisms are implemented
so as to handle failures by replication and error detection and recovery methods.
Different failures that may occur are:
 Crash failures - A process or node unexpectedly stops functioning.
 Omission failures - It involves a loss of message, resulting in absence of
required communication.
 Timing failures - The process deviates from its expected time quantum
and may lead to delays or unsynchronised response times.
 Byzantine failures - The process may send malicious or unexpected
messages that conflict with the set protocols.
3. Security Model
Distributed computing systems may suffer malicious attacks, unauthorised
access and data breaches. Security model provides a framework for
understanding the security requirements, threats, vulnerabilities, and
mechanisms to safeguard the system and its resources. Various aspects that are
vital in the security model are:

 Authentication: It verifies the identity of the users accessing the system. It ensures that only authorised and trusted entities get access. It involves -
o Password-based authentication: Users provide a unique
password to prove their identity.
o Public-key cryptography: Entities possess a private key and a
corresponding public key, allowing verification of their authenticity.
o Multi-factor authentication: Multiple factors, such as passwords,
biometrics, or security tokens, are used to validate identity.
 Encryption:
o It is the process of transforming data into a format that is
unreadable without a decryption key. It protects sensitive
information from unauthorized access or disclosure.
 Data Integrity:
o Data integrity mechanisms protect against unauthorised
modifications or tampering of data. They ensure that data remains
unchanged during storage, transmission, or processing. Data
integrity mechanisms include:
o Hash functions - Generating a hash value or checksum from
data to verify its integrity.
o Digital signatures - Using cryptographic techniques to sign
data and verify its authenticity and integrity.
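A minimal sketch of the hash-function idea in Python (illustrative values only): any change to the data yields a different checksum, which reveals tampering.

import hashlib

data = b"invoice #1001: amount = 250.00"
checksum = hashlib.sha256(data).hexdigest()   # stored or sent alongside the data

# Later, recompute the hash over the received copy and compare.
received = b"invoice #1001: amount = 950.00"  # a tampered copy
if hashlib.sha256(received).hexdigest() != checksum:
    print("Integrity check failed: the data was modified in storage or transit.")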

Clustering
 Introduction:
Cluster computing is a collection of tightly or loosely connected computers that work together so that they act as a single entity. The connected computers execute operations all together, thus creating the idea of a single system. The clusters are generally connected through fast local area networks (LANs).

Why is Cluster Computing important?


1. Cluster computing is a relatively inexpensive, unconventional alternative to large server or mainframe computer solutions.
2. It meets demands for critical content and processing services more quickly.
3. Many organizations and IT companies are implementing cluster computing
to augment their scalability, availability, processing speed and resource
management at economic prices.
4. It ensures that computational power is always available.
5. It provides a single general strategy for the implementation and
application of parallel high-performance systems independent of certain
hardware vendors and their product decisions.
A Simple Cluster Computing Layout

Types of Cluster computing :

1. High performance (HP) clusters :


HP clusters use computer clusters and supercomputers to solve advanced computational problems. They perform functions that require the nodes to communicate as they carry out their jobs, and they are designed to take advantage of the parallel processing power of several nodes.
2. Load-balancing clusters :
Incoming requests are distributed for resources among several nodes
running similar programs or hosting similar content. This prevents any single node from receiving a disproportionate amount of work. This type of distribution is commonly used in web-hosting environments; a round-robin sketch follows.
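For illustration only (not from this document), round-robin rotation, one of the simplest load-balancing policies, can be sketched in a few lines of Python; the node names are hypothetical.

from itertools import cycle

nodes = ["node-1", "node-2", "node-3"]   # servers running similar content
round_robin = cycle(nodes)               # rotate through them endlessly

def dispatch(request):
    # Each incoming request goes to the next node in the rotation,
    # so no single node receives a disproportionate share of the work.
    return next(round_robin)

for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(req, "->", dispatch(req))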
3. High Availability (HA) Clusters :
HA clusters are designed to maintain redundant nodes that can act as backup systems in case any failure occurs. They provide uninterrupted computing services for business activities, complicated databases, customer-facing services such as e-commerce websites, and network file distribution, and they are designed to give customers uninterrupted data availability. A small failover sketch follows.
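The toy sketch below (illustrative Python with hypothetical node names and a stubbed health check) shows the basic failover idea: when the active node stops responding, a redundant node takes over.

def is_healthy(node):
    # Stub: a real cluster would ping the node or check a heartbeat here.
    return node != "node-A"              # pretend node-A has just crashed

nodes = ["node-A", "node-B", "node-C"]   # node-A active, the rest redundant

def active_node():
    # Serve from the first healthy node; redundant nodes act as backups.
    for node in nodes:
        if is_healthy(node):
            return node
    raise RuntimeError("no healthy node available")

print("Requests are now served by:", active_node())   # -> node-B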

Classification of Cluster :

1. Open Cluster:
Every node needs its own IP address, and the nodes are accessible over the internet or web. This type of cluster raises additional security concerns.
2. Closed Cluster:
The nodes are hidden behind a gateway node, which provides increased protection. They need fewer IP addresses and are well suited to computational tasks.
Cluster Computing Architecture :
 It is designed as an array of interconnected individual computers that operate collectively as a single standalone system.
 It is a group of workstations or computers working together as a single,
integrated computing resource connected via high speed interconnects.
 A node is either a single-processor or multiprocessor system with memory, input and output functions, and an operating system.
 Two or more nodes are connected on a single line or every node might be
connected individually through a LAN connection.

Cluster Computing Architecture

Components of a Cluster Computer :


1. Cluster Nodes
2. Cluster Operating System
3. The switch or node interconnect
4. Network switching hardware
Cluster Components

Advantages of Cluster Computing :

1. High Performance :
The systems offer better performance than mainframe computer networks.
2. Easy to manage :
Cluster Computing is manageable and easy to implement.
3. Scalable :
Resources can be added to the clusters accordingly.
4. Expandability :
Computer clusters can be expanded easily by adding additional computers
to the network. Cluster computing is capable of combining several
additional resources or the networks to the existing computer system.
5. Availability :
If one node fails, the other nodes remain active and function as a proxy for the failed node, ensuring enhanced availability.
6. Flexibility :
It can be upgraded to the superior specification or additional nodes can be
added.

Disadvantages of Cluster Computing :

1. High cost :
It is not very cost-effective because of the high cost of its hardware and design.
2. Problem in finding fault :
It is difficult to find which component has a fault.
3. More space is needed :
The infrastructure footprint grows as more servers are needed to manage and monitor the cluster.

Applications of Cluster Computing :


 Various complex computational problems can be solved.
 It can be used in the applications of aerodynamics, astrophysics and in
data mining.
 Weather forecasting.
 Image Rendering.
 Various e-commerce applications.
 Earthquake Simulation.
 Petroleum reservoir simulation.

Virtualization

Virtualization is a way to use one computer as if it were many. Before


virtualization, most computers were only doing one job at a time, and a lot
of their power was wasted. Virtualization lets you run several virtual
computers on one real computer, so you can use its full power and do
more tasks at once.
In cloud computing, this idea is taken further. Cloud providers use
virtualization to split one big server into many smaller virtual ones, so
businesses can use just what they need, no extra hardware, no extra cost.

Let us understand virtualization by taking a real-world example:


Suppose there is a company that requires servers for four different
purposes:
 Store customer data securely
 Host an online shopping website
 Process employee payroll systems
 Run Social media campaign software for marketing
All these tasks require different things:
 The customer data server requires a lot of space and a Windows operating
system.
 The online shopping website requires a high-traffic server and needs a
Linux operating system.
 The payroll system requires greater internal memory (RAM) and must use
a certain version of the operating system.
In order to fulfill these requirements, the company initially configures four
individual physical servers, each for a different purpose. This implies
that the company needs to purchase four servers, keep them running, and
upgrade them individually, which is very expensive.
Now, by utilizing virtualization, the company can run these four
applications on a few physical servers through multiple virtual machines
(VMs). Each VM will behave as an independent server, possessing its own
operating system and resources. Through this means, the company can
cut down on expenses, conserve resources, and manage everything from
a single location with ease.
Working of Virtualization
Virtualization uses special software, known as a hypervisor, to create many virtual computers (cloud instances) on one physical computer. The virtual machines behave like actual computers but share the same physical machine.
Virtual Machines (Cloud Instances)
 After installing virtualization software, you can create one or more virtual
machines on your computer.
 Virtual machines (VMs) behave like regular applications on your system.
 The real physical computer is called the Host, while the virtual machines
are called Guests.
 A single host can run multiple guest virtual machines.
 Each guest can have its own operating system, which may be the same or
different from the host OS.
 Every virtual machine functions like a standalone computer, with its own
settings, programs, and configuration.
 VMs access system resources such as CPU, RAM, and storage, but they
work as if they are using their own hardware.

Hypervisors

A hypervisor is the software that gets virtualization to work. It serves as an


intermediary between the physical computer and the virtual machines.
The hypervisor controls the virtual machines' use of the physical resources
(such as the CPU and memory) of the host computer.
For instance, if one virtual machine needs additional computing capacity, it requests it from the hypervisor. The hypervisor forwards the request to the physical hardware and sees that it is fulfilled.
There exist two categories of hypervisors:
Type 1 Hypervisor (Bare-Metal Hypervisor):
 The hypervisor is installed directly onto the computer hardware, without
an operating system sitting in between.
 It is highly efficient, as it has direct access to the resources of the computer.
Type 2 Hypervisor:
 It is run over an installed operating system (such as Windows or macOS).
 It's employed when you need to execute more than one operating system
on one machine.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
Types of Virtualization
1. Application Virtualization: Application virtualization enables remote
access by which users can directly interact with deployed applications
without installing them on their local machine. Your personal data and the application's settings are stored on the server, but you can still run it locally over the internet. It’s useful if you need to work with multiple
versions of the same software. Common examples include hosted or
packaged apps.
Example: Microsoft Azure lets people use their applications without
putting them on their own computers. Once an application is set up in the cloud, employees can use it from any device, like a laptop or tablet. It
feels like the application is on their computer, but it’s really running on
Azure’s servers. This makes things easier, faster, and safer for the
company.
2. Network Virtualization: This allows multiple virtual networks to run
on the same physical network, each operating independently. You can
quickly set up virtual switches, routers, firewalls, and VPNs, making
network management more flexible and efficient.
Example: Google Cloud is an example of Network Virtualization.
Companies create their own networks using software instead of physical
devices with the help of Google Cloud. They can set up things like IP
addresses, firewalls, and private connections all in the cloud. This makes it
easy to manage, change, and grow their network without buying any
hardware. It saves time, money, and gives more flexibility.
Network Virtualization
3. Desktop Virtualization: Desktop virtualization is a process in which
you can create different virtual desktops that users can use from any
device like laptop, tablet. It’s great for users who need flexibility, as it
simplifies software updates and provides portability.
Example: GeeksforGeeks is an EdTech company that uses services like Amazon WorkSpaces or Google Cloud (GCP) Virtual Desktops to give its team members access to the same coding setup, with all the tools they need for their work. Team members can log in from any device, such as a laptop, tablet, or even a phone, and use a virtual desktop that runs in the cloud. This makes it easy for GeeksforGeeks to manage, update, and secure everything without needing a physical computer for everyone.
4. Storage Virtualization: This combines storage from different servers
into a single system, making it easier to manage. It ensures smooth
performance and efficient operations even when the underlying hardware
changes or fails.
Example: Amazon S3 is an example of storage virtualization because S3 lets you store any amount of data and access it from anywhere. Suppose a multinational company has a large volume of files and data to store. With Amazon S3, the company can keep all of its files and data in one place and access them securely from anywhere without any issues.
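As a minimal sketch (assuming the boto3 SDK and a hypothetical bucket name), storing and later retrieving a file through S3's virtualized storage could look like this.

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")
bucket = "example-company-archive"   # hypothetical bucket name

# Upload a local file; S3 hides which physical disks or servers actually hold it.
s3.upload_file("quarterly_report.pdf", bucket, "reports/quarterly_report.pdf")

# Download it later from anywhere with the right credentials.
s3.download_file(bucket, "reports/quarterly_report.pdf", "copy_of_report.pdf")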
5. Server Virtualization: This splits a physical server into multiple virtual
servers, each functioning independently. It helps improve performance, cut
costs and makes tasks like server migration and energy management
easier.
Example: A startup has one powerful physical server. It can use server virtualization software like VMware vSphere, Microsoft Hyper-V, or KVM to create multiple virtual machines (VMs) on that one server.
Each VM is an isolated server that runs its own operating system (such as Windows or Linux) and its own applications. For example, the company might run a web server on one VM, a database server on another VM, and a file server on a third VM, all on the same physical machine. This reduces costs, makes it easier to manage and back up servers, and allows quick recovery if one VM fails.

Server Virtualization
6. Data Virtualization: This brings data from different sources together
in one place without needing to know where or how it’s stored. It creates a
unified view of the data, which can be accessed remotely via cloud
services.
Example: Companies like Oracle and IBM offer solutions for this.
