Unit III
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
Server Virtualization
Server virtualization is the partitioning of a physical server into smaller virtual servers to help
maximize server resources. In server virtualization, the resources of the server itself are hidden,
or masked, from users, and software is used to divide the physical server into multiple isolated virtual
environments, called virtual or private servers.
Server virtualization is a key building block of cloud computing. The term cloud computing is
composed of two words: cloud, meaning the Internet, and computing, meaning solving problems with
the help of computers; in the digital world, computing is tied to CPU and RAM. Now consider a
situation: you are using macOS on your machine, but a particular application for your project runs
only on Windows. You can either buy a new machine running Windows or create a virtual
environment in which Windows can be installed and used. The second option is better because it
costs less and is easier to implement. This scenario is called virtualization. In it, a virtual CPU,
RAM, NIC, and other resources are provided to the OS, which it needs in order to run. These resources
are provided virtually and controlled by a piece of software called a hypervisor. The new OS running on
the virtual hardware resources is collectively called a virtual machine (VM).
Figure: Virtualization on a local machine
Now migrate this concept to data centers, where many servers (machines with fast CPUs, large
RAM, and enormous storage) are available. The enterprise that owns the data center provides the
resources requested by customers as per their needs. The data center holds all the resources, and on a
user's request, a particular amount of CPU, RAM, NIC capacity, and storage, with the preferred OS,
is provided to that user. This concept of virtualization, in which services are requested and provided
over the Internet, is called server virtualization.
To implement server virtualization, a hypervisor is installed on the server; it manages and
allocates the host's hardware to each virtual machine. The hypervisor sits over the server
hardware and regulates the resources of each VM. A user can increase or decrease a VM's resources,
or delete the entire VM, as needed. Servers with VMs created on them in this way constitute server
virtualization, and the concept of users controlling these VMs over the Internet is called cloud
computing.
Advantages of server virtualization:
• Each virtual server can be restarted separately without affecting the operation of the other
virtual servers.
• It lowers hardware costs by dividing a single physical server into several virtual private servers.
• One of the major benefits of server virtualization is disaster recovery: data may be stored in and
retrieved from any location, and moved rapidly and simply from one server to another.
• It enables users to keep their private information in the data center.
Disadvantages of server virtualization:
• If the physical server goes offline, all the websites it hosts will cease to exist.
• The effectiveness of virtualized environments is difficult to measure, since resources are shared.
• It consumes a significant amount of RAM.
• Setting it up and maintaining it are challenging.
• Many essential databases and applications do not support virtualization.
Desktop Virtualization
Desktop virtualization is technology that lets users simulate a workstation load to access a desktop
from a connected device. It separates the desktop environment and its applications from the
physical client device used to access it. Desktop virtualization is a key element of digital
workspaces and depends on application virtualization.
Since the user device is basically a display, keyboard, and mouse, a lost or stolen device presents
a reduced risk to the organization. All user data and programs exist on the desktop virtualization
server, not on client devices.
Local desktop virtualization means the operating system runs on a client device
using hardware virtualization, and all processing and workloads occur on local hardware. This
type of desktop virtualization works well when users do not need a continuous network connection
and can meet application computing requirements with local system resources. However, because
processing is done locally, you cannot use local desktop virtualization to share
VMs or resources across a network with thin clients or mobile devices.
The three most popular types of desktop virtualization are Virtual desktop infrastructure (VDI),
Remote desktop services (RDS), and Desktop-as-a-Service (DaaS).
VDI simulates the familiar desktop computing model as virtual desktop sessions that run on VMs,
either in an on-premises data center or in the cloud. Organizations that adopt this model manage the
desktop virtualization server as they would any other application server on premises. Since all end-
user computing is moved from users back into the data center, the initial deployment of servers to run
VDI sessions can be a considerable investment, tempered by eliminating the need to constantly refresh
end-user devices.
RDS is often used where a limited number of applications needs to be virtualized, rather than a full
Windows, Mac, or Linux desktop. In this model, applications are streamed to the local device, which
runs its own OS. Because only applications are virtualized, RDS systems can offer a higher density of
users per VM.
DaaS shifts the burden of providing desktop virtualization to service providers, which greatly
alleviates the IT burden of providing virtual desktops. Organizations that wish to move IT expenses
from capital expenses to operational expenses will appreciate the predictable monthly costs on which
DaaS providers base their business model.
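Common to all three models is a broker component that decides which virtual desktop a user gets. The sketch below, with entirely hypothetical template names, shows the core of such a connection broker: an existing session is reused on reconnect, and a new desktop is provisioned from a role-based template otherwise.

```python
# Minimal sketch of a VDI connection broker. The role-to-template mapping
# and naming scheme are illustrative assumptions, not a real product's API.

ROLE_TEMPLATES = {
    "engineering": "win11-dev-template",
    "sales": "win11-office-template",
}

class ConnectionBroker:
    def __init__(self):
        self.sessions = {}                 # user -> assigned desktop VM

    def connect(self, user, role):
        """Return the user's existing desktop session, or provision one."""
        if user not in self.sessions:
            template = ROLE_TEMPLATES[role]
            # pretend to clone a new VM from the role's template
            self.sessions[user] = f"{template}-{len(self.sessions) + 1}"
        return self.sessions[user]

broker = ConnectionBroker()
print(broker.connect("alice", "engineering"))   # win11-dev-template-1
print(broker.connect("alice", "engineering"))   # same session on reconnect
```

Maintaining one template per role, rather than one image per employee, is what enables the "simpler administration" benefit discussed below.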
In server virtualization, a server OS and its applications are abstracted into a VM from the underlying
hardware by a hypervisor. Multiple VMs can run on a single server, each with its own server OS,
applications, and all the application dependencies required to execute as if it were running on bare
metal.
Desktop virtualization abstracts client software (OS and applications) from a physical thin client
which connects to applications and data remotely, typically via the internet. This abstraction enables
users to utilize any number of devices to access their virtual desktop. Desktop virtualization can
greatly increase an organization’s need for bandwidth, depending on the number of concurrent users
during peak periods.
Virtualizing desktops provides many potential benefits that can vary depending upon the
deployment model you choose.
Simpler administration. Desktop virtualization can make it easier for IT teams to manage
employee computing needs. Your business can maintain a single VM template for employees
within similar roles or functions instead of maintaining individual computers that must be
reconfigured, updated, or patched whenever software changes need to be made. This saves time
and IT resources.
Cost savings. Many virtual desktop solutions allow you to shift more of your IT budget from
capital expenditures to operating expenditures. Because compute-intensive applications require
less processing power when they’re delivered via VMs hosted on a data center server, desktop
virtualization can extend the life of older or less powerful end-user devices. On-premise virtual
desktop solutions may require a significant initial investment in server hardware, hypervisor
software, and other infrastructure, making cloud-based DaaS—wherein you simply pay a regular
usage-based charge—a more attractive option.
Improved productivity. Desktop virtualization makes it easier for employees to access enterprise
computing resources. They can work anytime, anywhere, from any supported device with an
Internet connection.
Support for a broad variety of device types. Virtual desktops can support remote desktop access
from a wide variety of devices, including laptop and desktop computers, thin clients, zero clients,
tablets, and even some mobile phones. You can use virtual desktops to deliver workstation-like
experiences and access to the full desktop anywhere, anytime, regardless of the operating system
native to the end user device.
Stronger security. In desktop virtualization, the desktop image is abstracted and separated from
the physical hardware used to access it, and the VM used to deliver the desktop image can be a
tightly controlled environment managed by the enterprise IT department.
Agility and scalability. It’s quick and easy to deploy new VMs or serve new applications
whenever necessary, and it is just as easy to delete them when they’re no longer needed.
Better end-user experiences. When you implement desktop virtualization, your end users will
enjoy a feature-rich experience without sacrificing functionality they’ve come to rely on, like
printing or access to USB ports.
Network Virtualization
Network Virtualization is a process of logically grouping physical networks and making them
operate as single or multiple independent networks called Virtual Networks.
VM Network –
Consists of virtual switches.
Provides connectivity to the hypervisor kernel.
Connects to the physical network.
Resides inside the physical server.
Network Overlays –
VXLAN is an encapsulation protocol that provides a framework for overlaying virtualized
layer 2 networks over layer 3 networks.
The Generic Network Virtualization Encapsulation protocol (GENEVE) provides a newer
approach to encapsulation, designed to provide control-plane independence between the
endpoints of the tunnel.
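To make the overlay idea concrete, the sketch below builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to an inner layer-2 frame. In the header, the flags byte 0x08 marks a valid VNI, the 24-bit VXLAN Network Identifier occupies bytes 4 to 6, and the remaining bytes are reserved; in a real deployment the result would then travel inside a UDP/IP packet across the layer 3 network.

```python
# Sketch of VXLAN encapsulation per RFC 7348: wrap an inner Ethernet frame
# in the 8-byte VXLAN header carrying a 24-bit VNI.

import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header: 0x08 flags byte, 3 reserved bytes,
    24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit field")
    # '!B3xI' = flags byte, 3 zero pad bytes, then VNI shifted into the
    # top 3 bytes of a 32-bit word (last byte stays reserved/zero).
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame

packet = vxlan_encapsulate(b"inner-ethernet-frame", vni=5000)
print(packet[:8].hex())   # 0800000000138800
```

The VNI plays the role a VLAN tag plays in a physical network, but with 24 bits it supports about 16 million isolated virtual networks instead of 4096.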
Storage Virtualization
Storage virtualization is the pooling of physical storage from multiple storage devices into what
appears to be a single storage device -- or pool of available storage capacity. A central console
manages the storage.
The technology relies on software to identify available storage capacity from physical devices and
to then aggregate that capacity as a pool of storage that can be used by traditional architecture
servers or in a virtual environment by virtual machines (VMs).
The virtual storage software intercepts input/output (I/O) requests from physical or virtual
machines and sends those requests to the appropriate physical location of the storage devices that
are part of the overall pool of storage in the virtualized environment. To the user, the various
storage resources that make up the pool are unseen, so the virtual storage appears like a single
physical drive, share or logical unit number (LUN) that can accept standard reads and writes.
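The address translation described in this paragraph can be sketched in a few lines of Python. The extent size and device names are illustrative assumptions; the point is only that the virtualization layer keeps a map from virtual block addresses to (physical device, offset) pairs, so the user sees one LUN while I/O lands on several devices.

```python
# Sketch of the mapping a storage virtualization layer maintains:
# a virtual LUN's block addresses translate to (device, offset) pairs.
# The extent size and device names are illustrative assumptions.

EXTENT_BLOCKS = 1024    # blocks contributed by each physical device

class VirtualLUN:
    def __init__(self, extents):
        # ordered list of physical devices backing this LUN, one per extent
        self.extents = extents

    def translate(self, virtual_block):
        """Map a virtual block number to its backing device and offset."""
        index, offset = divmod(virtual_block, EXTENT_BLOCKS)
        return self.extents[index], offset

lun = VirtualLUN(extents=["ssd0", "hdd3", "hdd7"])
print(lun.translate(100))     # ('ssd0', 100)
print(lun.translate(1500))    # ('hdd3', 476)
```

A real implementation intercepts reads and writes and performs this translation transparently, which is why the pool "appears like a single physical drive" to the host.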
A basic form of storage virtualization is represented by a software virtualization layer between the
hardware of a storage resource and a host -- a PC, a server or any device accessing the storage --
that makes it possible for operating systems (OSes) and applications to access and use the storage.
Even a redundant array of independent disks, or RAID, array can sometimes be considered a type
of storage virtualization. Multiple physical drives in the array are presented to the user as a single
storage device that, in the background, stripes and replicates data to multiple disks to improve I/O
performance and to protect data in case a single drive fails.
Storage virtualization is the technique of abstracting physical storage resources like SSDs and
HDDs to create virtual storage resources. Storage virtualization software can pool and abstract
physical storage resources and present them as logical storage resources, such as virtual volumes,
virtual disk files, and virtual storage systems.
It is the concept of virtualizing enterprise storage at the disk level, creating a dynamic pool of
shared storage resources available to all servers, all the time.
With read/write operations spread across all drives, multiple requests can be processed in parallel,
boosting system performance. This allows users to create hundreds of virtual volumes in seconds
to support any virtual server platform. It is a consolidation of sorts: data and files are stored in
a centralized system that can be accessed from more than one location.
The block-based operation enables the virtualization management software to collect the capacity
of the available blocks of storage space across all virtualized arrays. It pools them into a shared
resource to be assigned to any number of VMs, bare-metal servers or containers. Storage
virtualization is particularly beneficial for block storage.
Unlike NAS systems, managing SANs can be a time-consuming process. Consolidating a number
of block storage systems under a single management interface that often shields users from the
tedious steps of LUN configuration, for example, can be a significant timesaver.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location with no need to understand
how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested blocks at
LAN speed, while not impacting the WAN performance.
SAN and NAS: Storage is presented over the Ethernet network to the operating system. NAS
presents the storage as file operations (like NFS). SAN technologies present the storage as block-
level storage (like Fibre Channel). In both cases, the operating system issues its instructions as
if the storage were a locally attached device.
Storage Tiering: Utilizing the storage pool concept as a stepping stone, storage tiering analyzes
the most commonly used data and places it on the highest-performing storage pool, while the
least-used data is placed on the lowest-performing storage pool.
This operation is done automatically without any interruption of service to the data consumer.
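The tiering decision above reduces to ranking blocks by access frequency. The sketch below is a deliberately simplified two-tier version with made-up block names: the most frequently accessed blocks go to the fast tier, everything else to the capacity tier.

```python
# Sketch of automatic storage tiering: rank blocks by access count and
# place the hottest ones on the fast tier. The two-tier split and the
# access counter are simplifying assumptions.

from collections import Counter

def retier(access_counts: Counter, fast_tier_size: int):
    """Return (fast_tier, slow_tier) block sets based on access frequency."""
    ranked = [block for block, _ in access_counts.most_common()]
    return set(ranked[:fast_tier_size]), set(ranked[fast_tier_size:])

accesses = Counter({"blk1": 90, "blk2": 5, "blk3": 40, "blk4": 1})
fast, slow = retier(accesses, fast_tier_size=2)
print(sorted(fast))   # ['blk1', 'blk3'] -- the hot data lands on the fast pool
```

Production systems run this kind of analysis periodically in the background, which is why the data movement is invisible to the consumer of the data.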
Benefits of storage virtualization:
1. Data is stored in more convenient locations, away from any specific host. In the case of
a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication, deduplication, and
disaster recovery.
3. By abstracting the storage level, IT operations become more flexible in how
storage is provided, partitioned, and protected.
Operating System (OS) Virtualization
With the help of OS virtualization, nothing is pre-installed or permanently loaded on the local
device, and no hard disk is needed. Everything runs from the network using a kind of virtual disk.
This virtual disk is actually a disk image file stored on a remote server, SAN (Storage Area
Network), or NAS (Network Attached Storage). The client connects to this virtual disk over the
network and boots with the operating system installed on the virtual disk.
Components needed for using OS Virtualization in the infrastructure are given below:
The first component is the OS virtualization server. This server is the central point in the OS
virtualization infrastructure. It manages the streaming of the information on the virtual
disks to the clients and determines which client connects to which virtual disk (this information
is stored in a database). The server can host the storage for the virtual disks locally, or it can
be connected to the virtual disks via a SAN (Storage Area Network). In high-availability
environments there can be multiple OS virtualization servers, to provide redundancy and
load balancing. The server also ensures that each client is unique within the infrastructure.
Secondly, there is the client, which contacts the server to get connected to the virtual disk and
asks for the components stored on the virtual disk that are needed to run the operating system.
The available supporting components are a database for storing the configuration and settings of
the server, a streaming service for the virtual disk content, an (optional) TFTP service, and an
(also optional) PXE boot service for connecting clients to the OS virtualization servers.
As already mentioned, the virtual disk contains an image of a physical disk, reflecting the
configuration and settings of the systems that will use the virtual disk. Once the virtual disk
is created, it needs to be assigned to the client that will use it to boot. The connection between
the client and the disk is made through the administrative tool and saved in the database. When a
client has an assigned disk, the machine can be started from the virtual disk using the following
process, as displayed in the figure below:
1) Connecting to the OS Virtualization server:
First, the machine is started and sets up a connection with the OS virtualization server. Most
products offer several possible methods to connect with the server. One of the most popular
methods is a PXE service, but a bootstrap program is also widely used (because of the
disadvantages of the PXE service). Whichever method is used, it initializes the network interface
card (NIC), obtains a (DHCP-based) IP address, and establishes a connection to the server.
2) Determining the assigned virtual disk:
When the connection is established between the client and the server, the server looks in its
database to check whether the client is known and which virtual disk is assigned to it. When more
than one virtual disk is assigned, a boot menu is displayed on the client side. If only one disk
is assigned, that disk is connected to the client, as described in step 3.
3) Connecting the virtual disk:
After the desired virtual disk is selected by the client, that virtual disk is connected through the
OS virtualization server. At the back end, the OS virtualization server makes sure that the client
is unique (for example, by computer name and identifier) within the infrastructure.
4) Streaming and caching the OS content:
As soon as the disk is connected, the server starts streaming the content of the virtual disk. The
software knows which parts are necessary for starting the operating system smoothly, so these
parts are streamed first. The streamed information must be stored somewhere (i.e., cached); most
products offer several ways to cache it, for example on the client's hard disk or on the disk of
the OS virtualization server.
5) Additional streaming:
After the first part has been streamed, the operating system starts to run as expected.
Additional virtual disk data is streamed when required for running or starting a function
requested by the user (for example, starting an application available on the virtual disk).
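The boot flow above can be condensed into a small Python sketch. All of the names and data here are illustrative: the assignment table stands in for the server's database (step 2), and streaming boot-critical parts before anything else corresponds to step 4.

```python
# Sketch of the OS-streaming boot flow. Client IDs, disk names, and the
# disk's contents are illustrative stand-ins.

ASSIGNMENTS = {"client-01": "win10-disk.img"}      # the server's database
DISKS = {
    "win10-disk.img": {"bootloader": b"...", "kernel": b"...", "apps": b"..."},
}
BOOT_ORDER = ["bootloader", "kernel"]              # streamed before anything else

def stream_boot(client_id):
    """Return the client-side cache after the initial streaming phase."""
    disk = DISKS[ASSIGNMENTS[client_id]]           # step 2: look up assigned disk
    cache = {}                                     # client-side block cache
    for part in BOOT_ORDER:                        # step 4: boot parts stream first
        cache[part] = disk[part]
    return cache

cache = stream_boot("client-01")
print(list(cache))   # ['bootloader', 'kernel'] -- 'apps' streams later on demand
```

Everything not in the boot-critical set (here, "apps") would be fetched lazily in step 5, when the user actually starts the corresponding function.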
Application Virtualization
The main goal of application virtualization is to ensure that cloud users have remote access to
applications from a server. The server contains all the information and features needed for the
application to run and can be accessed over the internet. As a result, you do not need to install the
application on your native device to gain access. Application virtualization offers end-users the
flexibility to access two different versions of one application through a hosted application or packaged
software.
If we need to use a computer application, we first install it on our device and then launch it. But
what if we never had to install that application, or for that matter, any application again? What if
we could simply access applications on the cloud as and when required that would work exactly as
their local counterparts? This idea is what application virtualization proposes.
Application virtualization refers to the process of deploying a computer application over a network
(the cloud). The deployed application is installed on a central server, and when a user requests it,
an instance of the application is presented to them. The user can then engage with that application
as if it were installed on their own system.
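The request flow just described can be sketched as follows. The application catalog and session format are invented for illustration; the essential point is that the application lives only on the server, and each user request produces a separate instance.

```python
# Sketch of application virtualization: apps are installed once on the
# server, and each user request spawns an instance served remotely.
# The catalog and session structure are illustrative assumptions.

SERVER_APPS = {"editor": "4.2", "spreadsheet": "9.1"}   # installed server-side only

sessions = []

def launch(user, app):
    """Create a remote instance of a server-hosted application for a user."""
    if app not in SERVER_APPS:
        raise LookupError(f"{app} is not published on this server")
    session = {"user": user, "app": app, "version": SERVER_APPS[app]}
    sessions.append(session)
    return session

s = launch("alice", "editor")
print(s["version"])   # 4.2 -- alice runs it without installing anything locally
```

Because the version lives in one place, upgrading the server copy upgrades it for every user at once, which is the administrative advantage noted below.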
Application virtualization is a powerful concept that removes most of the drawbacks of
installing applications locally.
Using it, users can access a plethora of applications in real time without having to allocate much
storage to any of them.
Users can also run applications not supported by their device's operating system.
It also eliminates the need for IT teams to manage and update multiple copies of applications
across different operating systems.
Virtual Clusters
Managing virtual clusters involves
o virtual cluster deployment,
o monitoring and management of large-scale clusters,
o resource scheduling,
o load balancing,
o server consolidation, and
o fault tolerance.
• Since a large number of VM images might be present, the most important thing is to determine
how to store those images in the system efficiently.
• Apart from this, there are installations common to most users or applications, such as the OS or
user-level programming libraries.
Resource management
The term resource management refers to the operations used to control how capabilities
provided by Cloud resources and services are made available to other entities, whether users,
applications, or services.
Types of Resources
Physical Resource: Computer, disk, database, network, etc.
Logical Resource: Execution, monitoring, and communication facilities for applications
• HA (High Availability): virtual machines can be restarted on other hosts if the host where a
virtual machine is running fails.
• DRS (Distributed Resource Scheduler): virtual machines can be load balanced so that no host in
the cluster is overloaded while others sit nearly empty.
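A DRS-style placement decision can be sketched in a few lines. The host names and the load metric (allocated RAM) are illustrative simplifications; real schedulers weigh CPU, memory, and affinity rules together, but the core idea is the same: place new or migrated VMs on the least-loaded host.

```python
# Sketch of DRS-style load balancing: a new VM goes to the host with the
# most free capacity. Host names and the RAM-only metric are illustrative.

def place_vm(hosts, vm_ram_gb):
    """Pick the host with the most free RAM and place the VM there."""
    target = max(hosts, key=lambda h: hosts[h]["total"] - hosts[h]["used"])
    hosts[target]["used"] += vm_ram_gb
    return target

cluster = {
    "host-a": {"total": 128, "used": 100},
    "host-b": {"total": 128, "used": 40},
}
print(place_vm(cluster, vm_ram_gb=16))   # host-b, which had 88 GB free
```

Combined with live migration, the same comparison can also move already-running VMs off a host that has become overloaded.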
Deployment
• There are four steps to deploy a group of VMs onto a target cluster:
– preparing the disk image,
– configuring the VMs,
– choosing the destination nodes, and
– executing the VM deployment command on every host.
• When a VM fails, its role can be taken over by another VM on a different node, as long as both
run the same guest OS. A VM must stop playing its role if its residing host node fails; this
problem can be mitigated with VM live migration, which copies the VM state file from the
storage area to the new host machine.
• There are four ways to manage a virtual cluster. The first is to use a guest-based manager, in
which the cluster manager resides on a guest system; in this case, multiple VMs form a virtual
cluster.
• Example: openMosix, an open-source Linux cluster running different guest systems on top of the
Xen hypervisor.
• The second way is to build a cluster manager on the host systems. The host-based manager
supervises the guest systems and can restart a guest system on another physical machine.
• Example: the VMware HA system, which can restart a guest system after a failure.
• The third way is to use an independent cluster manager on both the host and guest systems,
though this makes infrastructure management more complex.
• Finally, one can use an integrated cluster manager on the guest and host systems; here the
manager must be designed to distinguish between virtualized resources and physical resources.
Various cluster management schemes can be greatly enhanced when VM live migration is enabled
with minimal overhead.
Docker is a platform-as-a-service (PaaS) product that uses operating-system-level
virtualization to deliver software in packages called containers. Containers are isolated from one
another and bundle their own software, libraries, and configuration files; they can communicate
with each other through well-defined channels. All containers are run by a single operating
system kernel and therefore use fewer resources than virtual machines.
Difference between Docker Containers and Virtual Machines
1. Docker Containers
Docker Containers contain binaries, libraries, and configuration files along with the
application itself.
They don’t contain a guest OS for each container and rely on the underlying OS
kernel, which makes the containers lightweight.
Containers share resources with other containers in the same host OS and provide
OS-level process isolation.
2. Virtual Machines
Virtual Machines (VMs) run on hypervisors, which allow multiple virtual machines
to run on a single physical machine, each with its own operating system.
Each VM has its own copy of an operating system along with the application and
necessary binaries, which makes it significantly larger and more resource-hungry.
VMs provide hardware-level process isolation and are slow to boot.
Docker Components
1. Docker Image
It is a file, comprised of multiple layers, used to execute code in a Docker container.
They are a set of instructions used to create docker containers.
2. Docker Container
It is a runtime instance of an image.
Allows developers to package applications with all parts needed such as libraries and
other dependencies.
3. Dockerfile
It is a text document that contains the commands which, on execution, assemble
a Docker image.
A Docker image is created from a Dockerfile.
4. Docker Engine
The software that hosts the containers is named Docker Engine.
Docker Engine is a client-server-based application
The docker engine has 3 main components:
Server: It is responsible for creating and managing Docker images,
containers, networks, and volumes on the Docker host. It is referred to as the
daemon process.
REST API: It specifies how the applications can interact with the Server
and instructs it what to do.
Client: The Client is a docker command-line interface (CLI), that allows
us to interact with Docker using the docker commands.
5. Docker Hub
Docker Hub is the official online repository where you can find other Docker Images
that are available for use.
It makes it easy to find, manage, and share container images with others.
Docker Container
A Docker container is a running instance of an image. You can use Command Line Interface (CLI)
commands to run, start, stop, move, or delete a container. You can also provide configuration for
the network and environment variables. A Docker container is an isolated and secure application
platform, but it can share and access resources running in a different host or container.
An image is a read-only template with instructions for creating a Docker container. A Docker
image is described in a text file called a Dockerfile, which has a simple, well-defined syntax. An
image does not have state and never changes. Docker Engine provides the core Docker
technology that enables images and containers.
You can understand the relationship between containers and images with the help of the following example.
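As a simplified illustration of that relationship, the Python sketch below models an image as an immutable stack of layers and a container as a runtime instance that adds its own writable layer on top. The class names are illustrative, not the Docker API: several containers can share one image while keeping their own state isolated.

```python
# Sketch of the image/container relationship: one read-only image, many
# containers, each with its own writable layer. Not the real Docker API.

class Image:
    def __init__(self, name, layers):
        self.name = name
        self.layers = tuple(layers)     # read-only: an image never changes

class Container:
    def __init__(self, image):
        self.image = image              # shared, immutable template
        self.writable_layer = {}        # per-container state lives here

    def write(self, path, data):
        self.writable_layer[path] = data

img = Image("python:3.12", layers=["base-os", "python-runtime"])
c1, c2 = Container(img), Container(img)  # two containers, one shared image
c1.write("/tmp/scratch", "hello")
print(len(c2.writable_layer))            # 0 -- writes in c1 never touch c2
```

This mirrors what `docker run` does conceptually: it does not copy the image, it layers a fresh writable filesystem over it, which is why containers start quickly and stay lightweight.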